Systemic sclerosis sine scleroderma: clinical and serological features and relationship with other cutaneous subsets in a large series of patients from the national registry ‘SPRING’ of the Italian Society for Rheumatology
Currently, the literature is conflicting concerning the demographic and clinico-laboratory hallmarks of systemic sclerosis (SSc) sine scleroderma (ssSSc), a rather rare SSc subset without distinctive cutaneous signs that generates diagnostic uncertainty. The analysis of the large SSc population from the Systemic sclerosis PRogression INvestiGation (SPRING) Italian national registry allowed for an updated description of the ssSSc phenotype, mainly characterised by a longer Raynaud's phenomenon duration at diagnosis, reduced frequencies of peripheral vascular involvement, fewer microcirculatory abnormalities and anticentromere positivity. The comparative analysis with the other subsets revealed that visceral involvement in ssSSc was nearly similar to limited cutaneous SSc and significantly milder than in diffuse cutaneous SSc. Our findings may provide important suggestions for future investigations on the biological bases of the variable distribution of both skin/visceral fibrosis and microangiopathy across the whole scleroderma spectrum, as well as on the complex etiopathogenesis of SSc, which may lead to a novel disease subsetting.

Systemic sclerosis (SSc) is a connective tissue disease affecting the skin and internal organs, characterised by autoimmunity, microvascular injury and collagen deposition. In SSc, widespread skin and visceral fibrosis are associated with reduced quality of life, poor patient outcomes and increased mortality. The hallmark of the disease is the remarkable heterogeneity of its clinical manifestations. The disease is clinically classified according to the extension of skin involvement into two main subsets, limited cutaneous SSc (lcSSc) and diffuse cutaneous SSc (dcSSc), a classification that also takes into account SSc-specific autoantibodies, nailfold capillaroscopic patterns and internal organ fibrosis. The two subsets have well-recognised differences with respect to disease severity and prognosis. Furthermore, SSc sine scleroderma (ssSSc) is considered a separate subset, first described in detail by Rodnan and Fennell. Its clinical presentation can be misleading, generating diagnostic uncertainty because of the lack of skin involvement, although the lung, heart and gastrointestinal (GI) system may be involved. Currently, the literature is conflicting concerning the real prevalence of ssSSc, the female/male ratio, and the presence/severity of both visceral organ and peripheral vascular involvement, mostly depending on the characteristics of the studied population. In 2014, the Italian Society for Rheumatology promoted the development of the national SPRING (Systemic sclerosis PRogression INvestiGation) registry, which includes the clinical conditions preceding the onset of definite SSc as well as the main disease subsets. The overall baseline data have already been published, while the assessment of more than 2400 consecutive patients is still in progress. The aim of the present work was to analyse the main demographic, clinical and laboratory features of patients with ssSSc in comparison with the lcSSc and dcSSc subsets within the SPRING registry. Moreover, the observed findings were compared with similar studies in the literature.

Patients and methods

The non-profit national multicentre SPRING registry, involving 37 tertiary referral centres, collects more than 150 disease variables, including demographic, clinical and imaging data, as well as ongoing treatments.
Data were collected and handled using REDCap (Research Electronic Data Capture), a web-based application for assistance in data collection. Since multicentre registries are highly heterogeneous in how data are collected and entered, we minimised this issue by introducing clear-cut definitions of all registry variables; moreover, periodic quality checks were performed by the coordinating centre.

Definitions

For the current study, data concerning patients with definite SSc aged >18 years, enrolled up to June 2022, were taken into account. The SPRING database has been previously described, consisting of patients classified into four different cohorts: (1) primary Raynaud's phenomenon (RP); (2) suspected secondary RP; (3) Very Early Diagnosis of Systemic Sclerosis; (4) definite SSc according to the ACR/EULAR 2013 classification criteria. A thorough medical chart review of all consecutive patients with definite SSc was made, and cutaneous subsets were classified as dcSSc, lcSSc and ssSSc. In particular, ssSSc was classified based on the absence of puffy fingers and of skin thickening in any skin area, including fingers (sclerodactyly), hands, limbs and trunk. All ssSSc patients had a modified Rodnan skin score of 0.

Information collected at registration included age at disease onset, that is, at the first non-RP sign(s)/symptom(s), time from SSc onset to diagnosis, time from RP onset to SSc diagnosis, as well as the following clinical variables: oesophageal dysfunction symptoms (dysphagia, reflux), cardiopulmonary signs and symptoms (dyspnoea, arrhythmias, heart failure), sicca syndrome (dry eyes/mouth), renal crisis (sudden onset of severe arterial hypertension with acute renal failure), skin signs (sclerodactyly, puffy fingers, calcinosis, telangiectasia), peripheral vascular signs (fingertip pitting scars (DPS), digital ulcers (DUs), gangrene) and musculoskeletal involvement (tenosynovitis, arthritis defined as inflammatory changes observed in more than two joints, joint contractures, tendon friction rubs, osteomyelitis, carpal tunnel syndrome, myositis). Capillaroscopic patterns at nailfold videocapillaroscopy (NVC) were classified according to the current guidelines as normal (N), early (E), active (A) and late (L). Laboratory findings included antinuclear antibodies (ANA) and antibodies to extractable nuclear antigens, particularly the SSc-related antibodies (anticentromere/CENP-B, antitopoisomerase I/Scl-70 and anti-RNA polymerase III), as earlier described. Non-invasive cardiac diagnostic testing was performed by transthoracic Doppler echocardiography, collecting the following data: systolic pulmonary arterial pressure (sPAP), left ventricular ejection fraction (LVEF), abnormal diastolic function and pericardial effusion. The current algorithm was used to screen SSc patients and identify those at high risk of pulmonary arterial hypertension (PAH); those with a high PAH probability underwent right heart catheterisation (RHC). Investigations for lung involvement consisted of pulmonary function tests (predicted values of total lung capacity (TLC) and forced vital capacity (FVC)), diffusion capacity for carbon monoxide (DLCO) and high-resolution CT (HRCT) (ground-glass opacities, fibrosis, reticulation, honeycombing).
Finally, information about previous/current treatments was collected, including both vasoactive/vasodilating drugs (bosentan, sildenafil, vardenafil, tadalafil, iloprost, PGE1, inhaled iloprost, epoprostenol, riociguat, nifedipine, nicardipine, amlodipine, felodipine, diltiazem) and immunosuppressants (cyclophosphamide, methotrexate, leflunomide, azathioprine, mycophenolic acid, cyclosporine, rituximab, imatinib, anti-TNF-alpha agents, tocilizumab, abatacept).

Statistical analysis

Descriptive analyses were reported as absolute and relative frequencies for categorical variables, and as mean and SD for continuous ones. Median (IQR) was provided in place of mean (SD) when significant asymmetry of the distribution was present. To evaluate differences among groups, either Pearson's χ2 test or Fisher's exact test was employed for categorical variables, while quantitative variables were examined using the non-parametric Mann-Whitney test or the t-test, as appropriate. To control for multiple comparisons, the Simes-Benjamini-Hochberg correction was applied. Multivariable statistical analysis was also performed using a logistic regression model. P values <0.05 were considered statistically significant. All analyses were carried out using R statistical software (R Foundation for Statistical Computing, V.4.2).
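For illustration, here is a minimal sketch of the testing-plus-correction workflow described above, written in Python with SciPy and statsmodels rather than the R environment actually used; the contingency counts are rough reconstructions from the percentages reported later in the text, and all variable names are hypothetical:

```python
# Sketch of the group-comparison and multiplicity-correction workflow
# described above (the study itself used R, V.4.2); counts and data
# below are illustrative reconstructions, not the study dataset.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Approximate 2x2 table: DPS present/absent in ssSSc (19.7% of 61)
# vs lcSSc (42% of 1377); rows = subset, columns = DPS yes/no.
dps_table = np.array([[12, 49], [578, 799]])
chi2, p_dps, dof, _ = stats.chi2_contingency(dps_table)  # chi-squared test

# Hypothetical continuous variable: age at onset in two subsets,
# compared with the non-parametric Mann-Whitney test.
rng = np.random.default_rng(0)
age_ss, age_dc = rng.normal(53, 15, 61), rng.normal(45, 13, 370)
_, p_age = stats.mannwhitneyu(age_ss, age_dc)

# Simes-Benjamini-Hochberg correction across the family of tests.
reject, p_adj, _, _ = multipletests([p_dps, p_age], alpha=0.05,
                                    method="fdr_bh")
print(dict(zip(["DPS", "age_at_onset"], p_adj)), reject)
```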
Results

Demographic, clinical and laboratory findings of the whole SSc series and cutaneous subsets are provided in , whereas data regarding internal organ involvement, peripheral microcirculation abnormalities and previous/current treatments are given in .
Moreover, gives a comprehensive depiction of the similarities and differences between the three cutaneous subsets. Finally, summarises the main cohort and multicentre studies on ssSSc available in the world literature.

Whole SSc series: demographic features and subsetting

Up to 30 June 2022, among the 1808 patients with definite SSc included in the study, 61 (3.4%) were classified as ssSSc, being characterised by the absence of cutaneous involvement while still fulfilling the classification criteria for SSc. All patients with ssSSc reached the cut-off total score of ≥9 through the subitem scores other than scleroderma skin involvement, in accordance with the point score system of the ACR/EULAR 2013 criteria. In particular, 6 (9.8%) had a total score of 9 and 33 (54.1%) a total score of 10. Five patients (8.2%) reached a total score of 11 and 10 (16.4%) a total score of 12. Finally, seven patients (11.4%) had a score ≥13. These ssSSc patients had a mean age at disease onset of 52.8±14.7 years, and 95.1% were female (F/M ratio 19:1). The lcSSc subset consisted of 1377 patients, accounting for 76.2% of the whole cohort (F/M ratio 8.5:1), while 370 patients (20.4%) had the dcSSc variant (F/M ratio 4.9:1).

ssSSc: clinical variables and autoantibodies

ssSSc patients showed a variable percentage of other SSc signs/symptoms. Namely, telangiectasias (63.9%), oesophageal involvement (42.6%) and sicca syndrome (44.3%) were common, while DPS and DUs were less represented (19.7% and 6.6%, respectively). Musculoskeletal involvement was globally present in around one-third of patients (tenosynovitis 4.9%, arthritis 11.9% and myositis 11.9%). Joint contractures and tendon friction rubs were only anecdotally reported (two and one cases, respectively). Serum ANA were present in all ssSSc patients. Among SSc-specific autoantibodies, anticentromere antibodies were detected in 40%, followed by antitopoisomerase I in 18.3% and anti-RNA polymerase III in 2.6% of ssSSc patients.

ssSSc: internal organ and microcirculation abnormalities

Heart involvement was observed in 13 of the ssSSc patients (21.3%). Doppler echocardiography revealed diastolic dysfunction (22%) and pericardial effusion (5.9%), with a mean sPAP of 25.8±17 mm Hg and a mean LVEF of 61.7±4%, while PAH at RHC was found in 5.9% of assessed individuals. More than one-third of ssSSc patients (37.7%) had interstitial lung disease (ILD) at HRCT. The mean values of % predicted DLCO, FVC and TLC were 72.2±19.6, 105.6±21.7 and 103.9±19.1, respectively. Among capillaroscopic findings, a normal or early pattern was most frequent, found in almost 50% of ssSSc patients (12.1% and 36.2%, respectively), whereas a late pattern was uncommon (8.6%). In ssSSc patients, vasoactive/vasodilating treatments were frequently used (62.3%), while immunosuppressants were used in around 24.6% of patients.
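Referring back to the point score system above, the following hedged sketch makes the arithmetic concrete; the item weights are those published in the ACR/EULAR 2013 criteria, while the patient dictionary and helper function are hypothetical. It shows how a patient with a modified Rodnan skin score of 0 can still reach the ≥9 classification cut-off:

```python
# Hedged sketch of the ACR/EULAR 2013 point system (weights as published);
# the example patient and all field names are hypothetical.
WEIGHTS = {
    "skin_proximal_to_mcp": 9,   # sufficient on its own
    "puffy_fingers": 2, "sclerodactyly": 4,                  # take the higher
    "digital_tip_ulcers": 2, "fingertip_pitting_scars": 3,   # take the higher
    "telangiectasia": 2,
    "abnormal_nailfold_capillaries": 2,
    "pah_or_ild": 2,
    "raynaud": 3,
    "ssc_autoantibodies": 3,     # ACA, anti-Scl-70 or anti-RNA Pol III
}

def acr_eular_2013_score(p: dict) -> int:
    score = WEIGHTS["skin_proximal_to_mcp"] if p.get("skin_proximal_to_mcp") else 0
    # within each paired item, only the higher-scoring feature counts
    score += max(WEIGHTS["sclerodactyly"] * p.get("sclerodactyly", 0),
                 WEIGHTS["puffy_fingers"] * p.get("puffy_fingers", 0))
    score += max(WEIGHTS["fingertip_pitting_scars"] * p.get("fingertip_pitting_scars", 0),
                 WEIGHTS["digital_tip_ulcers"] * p.get("digital_tip_ulcers", 0))
    for item in ("telangiectasia", "abnormal_nailfold_capillaries",
                 "pah_or_ild", "raynaud", "ssc_autoantibodies"):
        score += WEIGHTS[item] * p.get(item, 0)
    return score

# An ssSSc-like patient: no skin items, yet 2+2+2+3+3 = 12 (>= 9).
ss_patient = dict(telangiectasia=1, abnormal_nailfold_capillaries=1,
                  pah_or_ild=1, raynaud=1, ssc_autoantibodies=1)
assert acr_eular_2013_score(ss_patient) >= 9
```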
ssSSc versus limited and diffuse cutaneous subsets

The results of the comparative analysis among the three SSc subsets are shown in and . The ssSSc and lcSSc subsets exhibited several similarities with regard to both demographic and clinical parameters, except for DPS (ssSSc 19.7% vs lcSSc 42%, p=0.01). Conversely, the ssSSc and dcSSc subsets markedly differed in the rate of female sex (95.1% vs 83%, p=0.001), the age at disease onset (52.8±14.7 vs 45.4±13.4 years; p=0.003) and the time interval from RP onset to SSc diagnosis (median 3 years, IQR 1–16.5 vs median 1 year, IQR 0–3; p<0.001). Oesophageal involvement and sicca syndrome were significantly less frequent in ssSSc than in dcSSc (p=0.009 for both), as were DPS (p<0.001), DUs (p<0.001) and calcinosis (p=0.02). Among SSc-specific autoantibodies, anticentromere antibodies were more frequently detected in ssSSc (40%) than in dcSSc (8.6%, p<0.001), while the opposite distribution was observed for antitopoisomerase I antibodies (18.3% vs 67.4%, p<0.001). The frequency of ILD was similar in ssSSc and lcSSc (37.7% and 36.8%, respectively), but higher in dcSSc (62.7%). ssSSc and dcSSc also differed in DLCO (mean 72.2±19.6 vs 62.4±22.8, p=0.009) and other functional tests (mean predicted FVC 105.6±21.7 vs 89.2±20.9, TLC 103.9±19.1 vs 87.6±19.5, p<0.0001 for both). Normal and early capillaroscopic patterns were more frequent in ssSSc (12.1% and 36.2%) compared with both lcSSc (7.6% and 21.8%) and dcSSc (1.8% and 14.9%) (p=0.003 and p=0.001, respectively), whereas the late pattern was uncommon in ssSSc (8.6%), with an increasing prevalence from lcSSc (21.2%) to dcSSc (47.6%, p<0.001). Finally, both vasoactive/vasodilating and immunosuppressive therapies were more frequently used in dcSSc (p=0.001).

Multivariable logistic regression analysis, after adjustment for sex and age at onset, indicated that a longer time from RP onset to diagnosis (OR 1.031; 95% CI 1.004 to 1.057; p=0.016) and a lower prevalence of DPS (OR 0.394; 95% CI 0.188 to 0.767; p=0.009) may distinguish ssSSc from lcSSc patients, whereas a longer time from RP onset to diagnosis (OR 1.062; 95% CI 1.024 to 1.105; p=0.002), a lower prevalence of DPS (OR 0.158; 95% CI 0.067 to 0.346, p<0.001), anticentromere positivity (OR 2.486; 95% CI 1.038 to 5.972; p=0.04) and antitopoisomerase I negativity (OR 0.219; 95% CI 0.086 to 0.515; p=0.001) may distinguish ssSSc from dcSSc patients. As expected, lcSSc and dcSSc differed significantly in several clinical and laboratory findings (oesophageal involvement, renal crisis, DPS and DUs, telangiectasias, calcinosis, arthritis and myositis), including cardiopulmonary involvement (pericardial effusion and ILD), with a markedly higher frequency of antitopoisomerase I antibodies and of the late capillaroscopic pattern in dcSSc. Taken together, these differences showed a significantly higher prevalence of worse clinical-prognostic parameters in dcSSc compared with lcSSc.
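A minimal sketch of how such adjusted odds ratios and 95% CIs can be derived from a fitted logistic model follows, using Python/statsmodels as a stand-in for the R model actually fitted; the data frame and its column names are hypothetical:

```python
# Sketch of the multivariable logistic regression reported above, in
# Python/statsmodels (the study used R); 'df' and its columns are
# hypothetical stand-ins for the registry variables named in the text.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient; is_ssSSc = 1 for ssSSc, 0 for the comparator
# subset (lcSSc or dcSSc); covariates mirror those named in the text.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "is_ssSSc": rng.integers(0, 2, 400),
    "female": rng.integers(0, 2, 400),
    "age_onset": rng.normal(50, 14, 400),
    "rp_to_dx_years": rng.gamma(2.0, 2.0, 400),
    "dps": rng.integers(0, 2, 400),
})

model = smf.logit(
    "is_ssSSc ~ female + age_onset + rp_to_dx_years + dps", data=df
).fit(disp=0)

# Exponentiating coefficients and CI bounds yields ORs with 95% CIs.
ors = np.exp(model.params)
ci = np.exp(model.conf_int().rename(columns={0: "2.5%", 1: "97.5%"}))
print(pd.concat([ors.rename("OR"), ci], axis=1))
```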
Discussion

This cross-sectional study indicates that the ssSSc subset accounts for approximately 3% of the SSc population recorded in the Italian SPRING registry. Except for cutaneous involvement, this subset fulfils the current classification criteria for SSc by exhibiting the typical disease manifestations, including damage to the main visceral organs. Our national registry study, focusing on the largest SSc population investigated so far, allows a valuable comparative analysis between the three skin subgroups. In particular, the data show that the clinical features and autoantibodies of the ssSSc subset overlap with those of the lcSSc subset, while both differ significantly from the dcSSc subset, which is characterised by more severe microvascular and fibrotic organ involvement and increased antitopoisomerase I and anti-RNA polymerase III rates. Of note, a significantly longer time interval from RP onset to SSc diagnosis was observed in both ssSSc and lcSSc compared with dcSSc, as well as an increasing trend in DU rates across the three subsets (ssSSc<lcSSc<dcSSc). Overall, a longer RP duration at diagnosis, a reduced DPS frequency, fewer microcirculatory abnormalities and anticentromere positivity were the main features of the ssSSc subset.

Demographic and clinical hallmarks of the present ssSSc series and those previously published are summarised in . The papers are characterised by significant heterogeneity in the number of patients, the modalities of recruitment (mono/multicentre) and the classification criteria adopted. This may account for the variability in ssSSc prevalence (from 1.4% to 8.9%), as well as in the clinical phenotype, namely peripheral vascular, heart, lung, renal, oesophageal and musculoskeletal involvement. By contrast, similar data are reported concerning the higher occurrence of anticentromere antibodies, which exceeds that of antitopoisomerase I in the majority of ssSSc series. Given the time span of more than 20 years from the first to the last study, it may be noted that only the last three reports, including ours, used the 2013 ACR/EULAR classification criteria. Overall, the findings reported in the world literature suggest a few considerations that are addressed and developed below.
The occurrence of ssSSc might be underestimated in clinical practice, since its identification and diagnosis can be difficult in some cases, owing to the absence of any skin involvement paralleled by mild disease manifestations. Moreover, the recognition of SSc, particularly in its early stage, is based on some cardinal signs, namely Raynaud's phenomenon, DPS, puffy fingers, cutaneous sclerosis, sclerodactyly and/or capillaroscopic/autoantibody alterations. The ACR/EULAR 2013 classification criteria for SSc improved the sensitivity and specificity of the previous 1980 ACR criteria, even in the absence of oedematous/fibrotic skin involvement. However, the variable prevalence of ssSSc among the main studies in the literature might be explained by the use of different classification criteria. It is also conceivable that these differences are real and reflect the variable contribution of genetic and/or geographical/environmental factors among SSc populations from different ethnic groups or geographical areas. Furthermore, the low rate of ssSSc observed in some reports, including the present study, could also be related to an inadequate network of specialised tertiary referral centres in some geographical areas, where a number of ssSSc cases may be diagnosed very late or completely overlooked.

Our ssSSc subgroup showed a female/male ratio comparable to that of the Brazilian study and of our earlier reports, but significantly higher than that observed in other studies. In accordance with previous observations for the whole SSc population in Italy, a longer time from RP onset to diagnosis seems to characterise the ssSSc subset, a finding also reported by other authors in national registries. It may represent a useful prognostic factor at SSc diagnosis in the individual patient, suggesting a rather slow progression of the microangiopathic dysfunction that characterises SSc pathogenesis. The present findings confirmed the lower rate of peripheral vascular complications in ssSSc, namely DPS and/or DUs, compared with the other subsets. In particular, our ssSSc patients showed the lowest rate of DUs among the three subsets, and a significantly lower percentage of DPS than the lcSSc subset. In this respect, a recent study found that DPS were associated with a severe disease course and worse outcomes. In ssSSc, our data indicate a milder peripheral small-vessel vasculopathy, as shown by the rarity of major capillaroscopic modifications and the lower rate of use of vasoactive/vasodilating drugs.

In SSc, lung involvement includes ILD and PAH. The recognition of more than 37% of ssSSc patients presenting with ILD on HRCT (with a DLCO close to 70%), a percentage comparable to lcSSc, is in agreement with previous studies, although higher than in others. Of note, in our ssSSc cohort, the percentage of honeycombing, which corresponds to the most advanced and severe sign of lung injury, was very low. However, our findings demonstrate that, also in ssSSc, ILD is a concern that should not be neglected. In this respect, additional data from national registries are needed to verify the real occurrence and severity of lung involvement in ssSSc, as well as of other organ manifestations (ie, cardiac, GI and musculoskeletal), which showed a wide range of variability among previously published studies. In our ssSSc patients, the autoantibody profile was similar to the data previously reported; namely, anticentromere antibodies were detected in 40% or more, with antitopoisomerase I usually found in 20% or less of patients.
This specific autoantibody dichotomy seems to be the distinctive immunological marker of the ssSSc subset. The comparison between ssSSc and lcSSc revealed several similarities in demographic, clinical and immunological features. On the contrary, clear-cut differences were found between both ssSSc and lcSSc and dcSSc, the latter being characterised by higher proportions of oesophageal, peripheral vascular (DPS, DUs, calcinosis) and pulmonary (functional alterations) involvement, worse NVC microvascular findings and higher serum antitopoisomerase I autoantibody rates. These findings are consistent with data formerly described. A thorough examination of the literature shows substantial disagreement about including ssSSc within the scleroderma spectrum. Some authors recommend that ssSSc be considered a separate condition to avoid misdiagnosis, while others consider it a mild subvariant of lcSSc. In this scenario, several data suggest that the investigation of SSc subsets might help to better understand the disease aetiopathogenesis and to shape the prognosis by predicting the severity of organ complications. The fact that ssSSc and lcSSc share a similar clinical picture, autoantibody profile and peripheral microangiopathy may suggest that these subsets are strongly related, although with a different skin phenotype. The heterogeneity of skin involvement remains a matter of debate in SSc, as do the variable combination and severity of microangiopathy/fibrosis-related manifestations in internal organs.

The strengths and limitations of SSc registry-based multicentre studies have been previously addressed. Although our SSc population is the largest reported among national registries, the present data are not conclusive. Long-term follow-up studies may verify the natural course and outcome of ssSSc patients in comparison with the other cutaneous subsets. The aim of our study was to provide an overall assessment of the ssSSc subset recruited at tertiary referral centres in our Italian SPRING registry. First, a relatively low proportion of ssSSc was observed within a large population of definite SSc, a finding that varies greatly among the few reports in the existing literature. Apart from skin involvement, the signs and symptoms of the ssSSc subset were mostly comparable with those of lcSSc. Both subsets were characterised by less frequent and less severe organ involvement, scarce NVC alterations, infrequent antitopoisomerase I positivity and a significantly different clinical pattern with respect to dcSSc. A number of issues remain unclear: the absence of cutaneous sclerosis and the clinical overlap between ssSSc and lcSSc raise the question of whether ssSSc represents a distinct SSc subset or a simple phenotypic variant of lcSSc. Overall, future investigations on the biological origin of the different distribution of skin fibrosis among SSc patients may provide useful insights into the complex etiopathogenesis of the disease, and may likewise lead to a novel disease subsetting.
Artificial intelligence as a diagnostic aid in cross-sectional radiological imaging of surgical pathology in the abdominopelvic cavity: a systematic review
The widespread adoption of digital healthcare provides vast data to enable the application of artificial intelligence (AI) in pattern recognition. This can alleviate the burden of, or replace, tasks traditionally dependent on clinicians. Examples include the interpretation of medical images for diagnostic, prognostic, surveillance and management decisions, which otherwise relies on a limited number of interpreters and human resources. There has been a surge of research into the use of AI in diagnostic imaging, exploring how it can support clinicians and provide greater efficacy and efficiency in clinical care. Systematic reviews on the diagnostic accuracy of AI in medical imaging (including respiratory medicine, ophthalmology and breast cancer), gastroenterology, neurosurgery and vascular surgery have demonstrated the diverse application of AI models to detect many pathologies. A variety of imaging modalities have been explored (eg, CT, MR and positron emission tomography (PET)). AI models can demonstrate diagnostic performance equivalent to that of experts and with greater efficiency, for example, in the time taken to diagnose childhood cataracts (5.7 min quicker than senior consultants). However, while AI technologies promise to markedly reduce the clinical workload, 'black box' AI algorithms can be difficult or impossible to interpret, which can be a barrier to adopting these techniques in clinical practice. Furthermore, many AI studies are proof-of-concept and poorly reported, with limited details on participants, making it difficult to replicate or interpret the study findings.

A systematic review of the diagnostic accuracy of AI models in cross-sectional radiological imaging of the abdominopelvic cavity is lacking. Synthesis of the current AI research in this area could benefit several surgical specialities which image this region, such as endocrine surgery, gastrointestinal surgery, obstetrics and gynaecology, urology and vascular surgery, to guide their clinical decision-making. This study aimed to conduct a systematic review to examine and critically appraise the application of AI models to identify surgical pathology from cross-sectional radiological images of the abdominopelvic cavity, including CT, MR, CT-PET and bone scans, in order to identify current limitations and inform future research efforts.

Protocol and registration

This systematic review was registered with the International Prospective Register of Systematic Reviews. A study protocol has previously been published. The review is reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses of Diagnostic Test Accuracy Studies.

Information sources

Electronic searches of the OVID SP versions of Medline, EMBASE and the Cochrane Central Register of Controlled Trials databases were conducted to identify all potentially relevant studies. Date limitations between 1 January 2012 and 31 July 2021 were applied to account for advancements in machine learning performance and the development of deep learning approaches since 2012, in line with existing reviews. Reference lists of included articles were screened to identify further relevant studies.

Search strategy and study selection

A comprehensive search syntax was developed with adaptation from three existing search strategies and guidance from an information specialist, using text words and medical subject headings related to three domains: 'artificial intelligence', 'diagnostic imaging' and the 'abdominopelvic cavity'.
Database search results were imported into reference management software (EndNote V.X9, Clarivate Analytics, USA) and duplicates were removed. Assessment of study eligibility was performed in two stages. First, titles and abstracts were screened for inclusion by two independent reviewers (GEF and CH). Any conflicts were resolved through discussion, referring to the wider study team if required. Final eligibility was assessed by full-text review of potentially eligible studies by the same process. Management of the screening process was aided by Rayyan software (Rayyan Systems, Cambridge, Massachusetts).

Eligibility criteria

Primary research studies were considered for eligibility using the PIRT (participants, index test(s), reference standard and target condition) framework. Participants were adults with pathology within the abdominopelvic cavity diagnosed using the following radiological modalities: CT, MR, CT-PET or bone scans. Diagnostic endoscopy was excluded, as existing reviews have explored the performance of AI models in this area. The index test was an AI model applied as an intervention with the aim of providing a diagnosis. The reference standard was 'standard practice', to allow for variation across the included studies. The target condition was abdominopelvic cavity pathology which has had, or may warrant, an invasive procedure for therapeutic intent. Excluded were secondary research studies (eg, systematic reviews), case reports and case series, articles without an available full text (eg, conference abstracts), animal studies and non-English articles.

Data extraction and management

Data extraction from the included articles was independently performed by two reviewers (GEF and CH). Data management software (REDCap V.9.5.23, Vanderbilt University, USA) and a predesigned standardised form were used. Data were extracted under the following three subheadings:

Study characteristics: Extracted data included the name of the first author, their affiliated country, the composition of the study team (eg, software engineers, radiologists and surgeons who routinely operate on surgical pathology within the abdominopelvic cavity, such as gastrointestinal surgeons, urologists and gynaecologists), year of publication, study aim and design (ie, 'prospective' or 'retrospective'), and surgical subspeciality and pathology studied (ie, benign, malignant tumours, multiple or other). Information on the reporting of ethics and/or regulatory approval (eg, Medicines and Healthcare products Regulatory Agency), patient and public involvement, and the authors' mention of using a reporting guideline (eg, Standards for Reporting of Diagnostic Accuracy Studies) was recorded.

Training data: Extracted data on the input features (data used to develop the AI model) included the modality of cross-sectional radiological imaging (CT, MR, CT-PET and bone scans), the AI model used, the reference standard, and the reporting and size of the training and test sets. Information on whether the training data came from the study's own dataset or from publicly available datasets was recorded.

Outcomes: The performance of the AI models and of the human comparator (where applicable) was extracted. Diagnostic measures of accuracy included reported sensitivity, specificity, positive predictive values and the area under the receiver operating characteristic curve (AUC). The interpretation time (seconds) for studies comparing the performance of the AI model with a human comparator (where reported) was extracted.
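As an illustration of these accuracy measures, here is a minimal sketch computing sensitivity, specificity, PPV and AUC from binary labels and model scores, using Python/scikit-learn; the example arrays are hypothetical and not drawn from any included study:

```python
# Minimal sketch of the diagnostic accuracy measures extracted in this
# review (sensitivity, specificity, PPV, AUC); labels and model scores
# below are hypothetical.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # reference standard
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3, 0.55, 0.6])
y_pred = (y_score >= 0.5).astype(int)                # thresholded prediction

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # true positive rate
specificity = tn / (tn + fp)          # true negative rate
ppv = tp / (tp + fp)                  # positive predictive value
auc = roc_auc_score(y_true, y_score)  # threshold-independent discrimination
print(f"Se={sensitivity:.2f} Sp={specificity:.2f} "
      f"PPV={ppv:.2f} AUC={auc:.2f}")
```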
Risk of bias and applicability

Risk of bias was assessed independently by two reviewers (GEF and RM) using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. A version of the QUADAS-2 tool for AI studies was still in development and not available at the time of conducting the current review. The generic QUADAS-2 tool with pre-existing modified signalling questions was used to assess four domains: patient selection, index test, reference standard, and flow and timing. An overall judgement of 'at risk of bias' or 'concerns regarding applicability' was assigned if one or more domains were judged as 'high' or 'unclear'. Judgements of applicability assessed whether the study matched the review question.
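The overall judgement rule described above is simple enough to express directly; the following sketch is purely illustrative, with the domain names taken from QUADAS-2 and the function and ratings invented for the example:

```python
# Hedged sketch of the overall QUADAS-2 judgement rule described above:
# a study is flagged if any domain is rated 'high' or 'unclear'.
# Domain names follow QUADAS-2; the ratings dict is hypothetical.
DOMAINS = ("patient_selection", "index_test",
           "reference_standard", "flow_and_timing")

def overall_judgement(ratings: dict) -> str:
    """ratings: domain -> 'low' | 'high' | 'unclear'."""
    flagged = any(ratings[d] in ("high", "unclear") for d in DOMAINS)
    return "at risk of bias" if flagged else "low risk of bias"

example = {"patient_selection": "unclear", "index_test": "low",
           "reference_standard": "low", "flow_and_timing": "low"}
assert overall_judgement(example) == "at risk of bias"
```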
Data synthesis

A narrative synthesis was conducted according to the Synthesis Without Meta-analysis guidelines. The synthesis was planned to focus on the primary outcome, with studies grouped by the modality of radiological imaging, surgical subspeciality and pathology studied, as outlined in the protocol. A broader approach was, however, adopted due to the small number of included studies and their heterogeneity. A meta-analysis was not performed due to the broad nature of the included studies.

Patient and public involvement

As part of the wider programme of work (Bristol Biomedical Research Centre, National Institute for Health Research Bristol BRC), patients and the public were consulted on their views of AI being used to guide doctors to make decisions about treatment. Overall, it was perceived positively, and they were supportive of its adoption in healthcare.
Results

Database searching identified 628 records, with a further five studies identified through the reference lists of included articles. After the removal of duplicates, 580 records were screened and 52 full-text articles were assessed for eligibility. Fifteen studies were finally included.

Study characteristics

Characteristics of the included studies and details of the AI models are summarised in . All were retrospective studies, conducted in six different countries: Japan (n=4), USA (n=4), China (n=3), India (n=2), Turkey (n=1) and South Korea (n=1). Studies were all proof-of-concept (ie, not applied in a clinical setting), from four surgical specialities: urology (n=6), gastrointestinal surgery (n=6), endocrine surgery (n=1) and gynaecology (n=1). One study was not specific to a single specialty and involved whole-body CT-PET across four anatomical regions (head and neck, chest, abdomen and pelvis). Most studies focused on malignant tumours (n=11), with CT as the most common imaging modality (n=8). Most studies (n=14) included an ethical approval statement. No studies, however, mentioned patient and public involvement or the use of a reporting guideline. The composition and expertise within each study team varied.
Three studies comprised teams including software engineers, radiologists and surgeons. A small number of studies comprised only radiologists (n=2) or only software engineers (n=1). Four study teams comprised software engineers and radiologists together. Of the remaining five studies, three had no apparent radiological team member, and two comprised radiologists together with either software engineers, surgeons or physicians.

Training data

AI training and test sets within the included studies comprised a median of 130 (range: 5–2440) and 37 (range: 10–1045) patients, respectively. This information, however, was not always available (n=6 studies). All training data came from data collected by the studies themselves and not from pre-existing publicly available datasets. There was variability in the reference standard, including the number of clinicians involved (range: 1–3) and whether it was a radiological (n=9) or histological (n=4) diagnosis or both (n=1). Only one study had an unclear reference standard.

Outcomes

The intent of the AI applications in the included studies varied, with the majority focusing on diagnosing advanced or recurrent cancer (n=6 studies) and four studies classifying the pathology (ie, normal or abnormal, or benign or malignant). Diagnostic performance of the AI models ranged between 70.0% and 95.0% sensitivity, and 52.9% and 98.0% specificity. The reporting of the diagnostic measures of accuracy was unstandardised; for example, there were different output measures across the studies, and three studies did not report all of their outcome measures. Following model development with a training and tuning set, four studies had an external validation test set and compared the performance of the AI model with a human comparator (a radiologist). The diagnostic performance of the radiologists ranged between 57.4% and 62.8% sensitivity and 0.89 and 0.97 AUC. In these studies, there was variability in the number of patients (range 50–414; from one to six different centres), the number of radiologists (range 2–4) and their years of experience. These studies reported the diagnostic performance of the AI model as either superior (n=2; in rectal and advanced gastrointestinal cancer) or comparable (n=2; in metastatic gastrointestinal pathology and a gynaecology study distinguishing between malignant tumours and benign pathology) to that of radiologists. Two of these studies compared the interpretation time of the AI model with that of the radiologists; the AI models outperformed the radiologists in both (1–2 s vs 200 s per case, p<0.05, and 20 s vs 600 s for an average of 100 MRIs).

Risk of bias and applicability

With the exception of one study, all the included studies had an overall judgement of 'at risk of bias' and 'concerns regarding applicability'. This was predominantly due to comparisons based on either a single clinician's assessment (n=10) or an unclear number of clinicians (n=5), a small (eg, 10 patients in one study) or unclear test set size, and the fact that most models were developed with internal validation only (n=11).
The major finding of this review was the heterogeneity in the AI applications across the included studies, regardless of the surgical specialty or pathology. Early phase studies of AI innovation, particularly focusing on advanced or recurrent malignancy, were identified with promising diagnostic accuracies to support clinical decision-making. Future AI research could benefit from targeting areas where radiological expertise is in high demand or the data are complex to interpret; for example, adrenal incidentalomas and images from virtual colonoscopy. Attention should also be directed to the governance of AI, particularly on where the responsibility lies if the AI model misses a lesion. In this review, several reporting issues were identified, including for the reference standard and training data. Poor adherence to reporting guidelines is a common finding in the existing literature for diagnostic accuracy studies assessing AI interventions. The Standards for Reporting of Diagnostic Accuracy-AI (STARD-AI) Steering Group are developing an AI-specific extension to the STARD statement, which aims to improve reporting of AI diagnostic accuracy studies. This steering group highlighted three pitfalls, which are also reflected in this review: (1) unclear methodological interpretation (eg, methods of validation and comparison to human performance), (2) unstandardised nomenclature (eg, varying definitions of the term ‘validation’) and (3) heterogeneity of the outcome measures (eg, sensitivity, specificity, predictive values and AUC). Endeavours to address this include the development of specific reporting guidelines for authors of AI studies, including protocols (SPIRIT-AI), reports (CONSORT-AI) and proposals (MINimum Information for Medical AI Reporting, MINIMAR). These efforts should improve reporting quality and make it easier to interpret and compare AI studies. A minority of the included studies compared the diagnostic performance of the AI model with a clinician’s diagnosis. These studies reported a faster and superior or equivalent diagnostic performance with the AI model. A recent review found only 51 studies worldwide reporting the implementation and evaluation of AI applications in clinical practice. While many AI studies are currently retrospective and proof-of-concept, which may be appropriate for early phase surgical research, future efforts should evaluate the role of AI in a clinical setting. This should adopt a multidisciplinary team of all relevant stakeholders (eg, software engineers, radiologists and surgeons) to ensure that the diverse and relevant skill sets can work together to produce both high-quality and clinically relevant AI research. This review included a robust methodology, comprehensive search strategy and a multidisciplinary team. Some limitations, however, are acknowledged. Despite having a broad search strategy, relevant studies may have been missed by excluding articles that were not published in the English language. The review did not encompass all diagnostic applications of AI in this region, such as diagnostic imaging for prognostic, surveillance and management decisions, meaning findings are not generalisable to these wider contexts.
However, recommendations for prioritising future endeavours on clinical need, adhering to reporting guidelines, and standardised and transparent reporting can be considered appropriate for all studies assessing AI interventions in healthcare. This review identified a diverse application of AI innovation in this field. Most studies were proof-of-concept, and more ‘comparator’ studies in the clinical setting are needed. Future AI research could build on existing studies with translation to clinical practice, adopting a multidisciplinary approach, including patient and public involvement, which was lacking in the studies of this review. This could target areas of clinical need. Adherence to existing and developing guidelines for reporting AI studies, such as SPIRIT-AI, CONSORT-AI, STARD-AI and DECIDE-AI, is warranted.
Admission of kidney patients to a closed staff nephrology department results in a better short-term survival
A number of studies have shown that it is remarkably difficult to improve the outcome of kidney patients using medical or technical measures, for both acute kidney injury (AKI) and end stage renal disease (ESRD). In contrast, the outcome of chronic kidney disease (CKD) can be improved with novel drugs. On the other hand, administrative steps facilitating the kidney patient–Nephrologist interaction may improve outcomes; for example, earlier out-patient referral to a Nephrologist can reduce mortality and hospitalizations. Thus, specialist involvement may be beneficial in the out-patient Nephrology setting. However, for in-patients, whether admission to a specialized Nephrology department improves survival is yet to be determined. Health care utilization among adult CKD patients is high, and 47% of the patients are hospitalized at least once per year. When kidney patients are hospitalized, their outcome is worse than that of patients with intact renal function. Kidney patients are often hospitalized in general Medicine wards, where Nephrologist consultation may be requested. These patients may be regarded as ‘outliers’ of the Nephrology ward, with a substantially lower degree of specialist involvement. Outliers in general may have an increased length of hospitalization, as shown by a study from another field (Neurology) that found a significantly shorter median length of stay in a specialist unit compared with general wards (9 days vs 13 days, respectively). In the field of Nephrology, Fagugli et al investigated the outcome of patients with acute kidney injury (AKI) requiring dialysis who were admitted to either a Nephrology ward or to general medical wards. The study showed reduced in-hospital mortality in the Nephrology ward (20% versus 52%), thereby suggesting that for the most severe AKI patients requiring dialysis, specialty care may result in better outcomes. Other studies demonstrated that early Nephrologist involvement in patients with AKI may reduce the risk of further decline in kidney function. Moreover, delayed Nephrology consultation was associated with increased dialysis dependence rates on hospital discharge in critically ill AKI patients. In the current in-patient study, we retrospectively examined whether the outcome of hospitalized kidney patients, i.e., AKI (AKIN classification stages 1–3) not requiring dialysis and CKD (stages G3–G5) patients, was improved following admission to a closed-staff Nephrology ward (see classification below). To the best of our knowledge, these patient populations have not been examined in this regard previously. Study design This was a population-based retrospective cohort study comparing two cohorts of kidney patients (either AKI or CKD patients) admitted to general Medicine wards with Nephrology consultation vs. care in a closed-staff Nephrology ward. Short-term (≤90 days) and long-term (>90 days) outcomes were recorded for mortality, renal outcome (renal replacement therapy (RRT; dialysis or kidney transplantation) and AV shunt surgery), the composite dialysis complication score (CDCs), the CREDENCE composite outcome (see below), and cardiovascular outcomes (MACE, see below). Of note, AV shunt surgery differs from the other outcomes in predicting a better prognosis in dialysis patients. Setting Soroka University Medical Center is the 4th largest hospital in Israel and the only one in the Negev district, providing medical services to ~1 million residents.
Because all the kidney patients in the Negev district are referred to Soroka University Medical Center, admissions to Soroka hospital were considered to reflect all hospitalization events. The Medicine wards are based on an open-staff structure, i.e., both the attending senior physician in the medical wards and the consulting Nephrologist rotate, the former monthly and the latter daily. In contrast, in the closed-staff Nephrology department the staff is unchanged and board-certified in Nephrology. Daily morning meetings of 6–8 Nephrologists are conducted to guide patient care. The Nephrology floor comprises a 12-bed ward dedicated entirely to kidney in-patients, in addition to a peritoneal dialysis outpatient unit, a hemodialysis outpatient and in-patient unit, and a kidney transplantation service. The medical staff comprises 7 board-certified Nephrologists and one resident. The nurses have all passed a 1-year Nephrology and dialysis nursing course. An in-house dietitian and a social worker guide the relevant aspects of therapy. Study participants and data sources The two kidney patient cohorts were defined as AKI or CKD. All patients were adults (>18 years) with renal dysfunction admitted either to the Nephrology ward or to the general Medicine wards (in the latter, only patients with Nephrology consultation were included). The dates of admission were from 21 July 2016 through 31 December 2018. Exclusion criteria common to both cohorts were absence of Nephrologist consultation, need for urgent dialysis on admission, admission to ICU or surgery, and ESRD (on chronic dialysis or with a kidney transplant). The specific AKI study exclusion criterion was a serum creatinine rise below 50% compared with baseline; the latter was calculated as the mean of the available serum creatinine levels measured during the last year before admission. The additional specific CKD study exclusion criterion was eGFR >60 ml/min. Data collection ended on 31.12.2019; thus, all patients had at least one year of follow-up. For each patient, we calculated the relevant AKIN/CKD KDIGO stage based on their creatinine level and relevant demographic data. Because serum creatinine alone does not accurately reflect kidney function, these data were converted into the AKIN/CKD stage as the unit of analysis. The AKIN classification of AKI was used; AKI patients were classified into 3 stages [stage 1: 1.5-fold ≤ serum creatinine (Scr) ≤ 2-fold; stage 2: 2-fold < Scr ≤ 3-fold; stage 3: Scr > 3-fold]. For CKD, the KDIGO classification was used [G3a (45≤eGFR<60), G3b (30≤eGFR<45), G4 (15≤eGFR<30) and G5nd (eGFR<15, G5 CKD patients not receiving RRT)]. Data collection The study was based on two computerized datasets: a Nephrology consultation database, which consists of records of hospitalized patients from all the hospital wards requesting Nephrology consultation, and Soroka’s Chameleon electronic medical records database, which comprises records of all patients treated in Soroka hospital. Based on previous power calculations, two-thirds of the patients were randomly selected using an arbitrary digit of their ID number, as reported before. The study was investigator-initiated and was approved by the Soroka University Medical Center institutional review board (IRB). All diagnoses were classified by the International Classification of Diseases (ICD-9).
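As a concrete restatement of the staging rules described above (AKIN for AKI, KDIGO G-stages for CKD), the following minimal Python sketch encodes the thresholds used as the unit of analysis; the function names and example values are ours, for illustration only.

```python
# Minimal sketch of the staging thresholds quoted above (AKIN for AKI, KDIGO
# G-stages for CKD). Function names and the example values are illustrative.
def akin_stage(scr_admission, scr_baseline):
    """Return AKIN stage 1-3 from the serum creatinine ratio, or None if
    <1.5-fold (below the study's AKI inclusion threshold)."""
    ratio = scr_admission / scr_baseline
    if ratio > 3.0:
        return 3
    if ratio > 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return None

def kdigo_ckd_stage(egfr):
    """Return KDIGO CKD stage G3a-G5 from eGFR (ml/min); eGFR >= 60 was an
    exclusion criterion in the CKD cohort."""
    if egfr < 15:
        return "G5"
    if egfr < 30:
        return "G4"
    if egfr < 45:
        return "G3b"
    if egfr < 60:
        return "G3a"
    return None

print(akin_stage(3.2, 1.0))   # -> 3 (creatinine more than tripled vs baseline)
print(kdigo_ckd_stage(22.0))  # -> 'G4'
```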
Statistical analysis For each kidney patient cohort (AKI/CKD), the sociodemographic and medical characteristics of Nephrology and General Ward patients were assessed using appropriate univariate statistics. Next, we assessed the association between admission type and clinical outcomes using appropriate univariate statistics. Categorical variables were assessed using the Chi-square test. Continuous variables were assessed using either the t-test (for normal distributions) or the Mann-Whitney test (in all other cases). To assess the independent association between admission type and clinical outcomes, we conducted a multivariate analysis using either logistic regression (for dichotomous variables) or negative binomial regression (for count variables), adjusted for the sociodemographic variables that showed a significant association with ward type (age, ethnicity, and number of children). In addition, to mitigate the potential admittance bias to each ward, a propensity score (PS) was created using a logistic regression assessing the effect of all medical background variables on the hospitalization ward as the dependent variable. The resulting PS was added as an independent variable to all the multivariate regression models. Details concerning the specific statistical tests conducted for each variable can be seen in the footnote of each table. All analyses were conducted using SPSS Statistics V. 25 and R software. A two-sided significance level of 0.05 was used throughout the entire study. The association between admission type and outcome was studied for the following parameters: long- and short-term all-cause mortality, 4-point MACE (major adverse cardiac event: nonfatal stroke, nonfatal MI, congestive heart failure (CHF), cardiovascular death), need for dialysis during the first hospitalization, need for chronic RRT (measured as RRT after discharge from the first hospitalization), recurrent hospitalization, and AV shunt surgery. For long-term Nephrology outcome, we used the CREDENCE composite CKD progression score index, comprising either ESRD, doubling of the serum creatinine level, or renal or cardiovascular death. To assess a specific dialysis quality index, we also tested a composite dialysis complication score (CDCs), comprising any of the following: CLABSI (catheter-induced bacteremia), pulmonary edema, hyperkalemia requiring urgent hemodialysis, need for any acute dialysis during the first hospitalization, and all-cause mortality.
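The two-step adjustment described above can be illustrated with a minimal, hypothetical sketch, shown here in Python with statsmodels rather than the SPSS and R actually used; the variable names and synthetic data are ours, with 'age' standing in for the sociodemographic covariates. A logistic model of ward assignment on baseline covariates yields a propensity score, which then enters each outcome model as an additional covariate.

```python
# Hypothetical sketch of the two-step adjustment described above, written in
# Python/statsmodels (the study used SPSS and R). Covariate names and the
# synthetic data are ours; 'age' stands in for the sociodemographic variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(70, 10, n),
    "chf": rng.integers(0, 2, n),              # stand-ins for the medical
    "cvd": rng.integers(0, 2, n),              # background variables
    "nephrology_ward": rng.integers(0, 2, n),  # 1 = closed-staff Nephrology ward
    "death_90d": rng.integers(0, 2, n),        # short-term mortality outcome
})

# Step 1: propensity score = P(admission to Nephrology | medical background)
covariates = sm.add_constant(df[["age", "chf", "cvd"]])
ps_model = sm.Logit(df["nephrology_ward"], covariates).fit(disp=0)
df["ps"] = ps_model.predict(covariates)

# Step 2: outcome model with ward type, a confounder, and the propensity score
exog = sm.add_constant(df[["nephrology_ward", "age", "ps"]])
out_model = sm.Logit(df["death_90d"], exog).fit(disp=0)
print("adjusted OR:", np.exp(out_model.params["nephrology_ward"]))
```

Exponentiating the ward coefficient gives the adjusted odds ratio reported in the results; with the random data above the OR is, of course, uninformative.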
Baseline characteristics The sociodemographic characteristics and comorbidities of kidney patients admitted to the Nephrology ward or to general Medicine wards (of whom only patients with Nephrology consultation were included) are depicted in the corresponding table. The impact of the admitting department on the outcome of kidney patients was studied for 2 groups, i.e., AKI and CKD. Only when significant for both AKI and CKD was the difference between the admitting departments deemed to reflect a clinically meaningful difference. Thus, at baseline, age and the prevalence of cardiovascular disease (composite of coronary vascular disease, peripheral vascular disease, acute coronary syndrome, and cerebrovascular event) and of congestive heart failure (CHF) were all significantly higher in kidney patients admitted to Medicine wards. On the other hand, patients admitted to the Nephrology department manifested a more advanced stage of either AKI or CKD. To address a potential admittance bias, a propensity score analysis was performed in addition to standard multivariate analysis (adjusted OR). Mortality In univariate analysis, a significantly higher rate of short-term all-cause mortality was found among the two groups of kidney patients admitted to the open-staff Medicine wards compared with the closed-staff Nephrology ward. Accordingly, univariate analysis showed a significantly lower mortality rate for kidney patients admitted to the Nephrology floor for both CKD (OR = 0.19, CI = 0.11–0.34, p<0.001) and AKI (OR = 0.21, CI = 0.13–0.35, p<0.001).
Next, using multivariate analysis adjusted for potential confounders as well as for the propensity score, the relative reduction in short-term mortality associated with Nephrology ward admission was 72% for CKD and 75% for AKI patients (computed as (1 − OR) × 100%; CKD: OR = 0.28, CI = 0.14–0.58, p = 0.001; AKI: OR = 0.25, CI = 0.12–0.48, p<0.001). However, long-term all-cause mortality was not affected by the type of admitting department. Remarkably, the propensity score analysis reiterated the protective effect of Nephrology ward admission, relevant for both kidney patient populations. Intermediate outcomes To evaluate intermediate outcomes, we next tested the composite dialysis complication score (CDCs), CREDENCE, MACE and RRT. Although univariate analysis initially suggested that AKI patients admitted to Nephrology could have fewer acute complications (short-term CDCs), further multivariate analyses indicated that this score did not differ significantly between the Nephrology and Medicine departments. Similarly, univariate analysis initially suggested that CKD patients admitted to Nephrology had a borderline higher long-term CREDENCE score and a lower MACE score. However, in multivariate analysis, these differences were no longer significant. Admission to Nephrology was associated with higher rates of renal replacement therapy (RRT), both in and after the first hospitalization, and consequently of AV shunt surgery, observed in all kidney cohorts.
In this retrospective cohort study, we found that admission to a closed-staff Nephrology ward was associated with reduced short-term mortality in two groups of kidney patients. These findings are consistent with the benefits of specialized care units in other fields. Fagugli et al demonstrated the value of a closed-staff Nephrology department in a selected group of AKI patients requiring acute dialysis. In the current study, we found that the value of closed-staff Nephrology department care may extend further, to the hospitalization of two major kidney patient groups, i.e., AKI and CKD stages G3a–G5. The present study reveals that for these two groups of kidney patients, the short-term all-cause mortality rate was significantly lower in the Nephrology department. While the mortality difference in the range of 72–75% in favor of Nephrology floor care is very high, we acknowledge it may be partially exaggerated by admittance bias. On the other hand, the mortality risk was not totally skewed in favor of the Nephrology department, as more severe baseline renal dysfunction, known to predict mortality, was observed in patients admitted to Nephrology in both groups. We thus employed a propensity score analysis, adjusted for comorbidities and demographic parameters. Since a randomized controlled trial of kidney patient admission is unlikely owing to ethical considerations, this limited study may still be of importance with regard to the structure of the Department of Medicine and its sub-specialties. The possibility that early kidney patient transfer to an empowered Nephrology department may substantially increase short-term survival cannot be ruled out. The relative reduction in the mortality rate may be explained by several conjectures. First, the Nephrology ward is a specialized department managed 24/7 by the same group of Nephrologists; in contrast, a rotating attending Nephrologist consultation is requested by the general Medicine wards every few days. Second, the patient-to-staff ratios on the Nephrology floor are lower, allowing more attention to patients. Third, facilities such as the dialysis unit and the transplantation clinic are part of the Nephrology department, as are a specialist dietitian, a transplantation nurse and a social worker. It thus appears that a major factor affecting kidney patients’ short-term survival is the human factor, whereby a trained, closed-staff Nephrology team may improve AKI and CKD patient survival. This structure may also account for our finding of higher rates of RRT and AV shunt surgery in AKI and CKD patients admitted to the Nephrology department.
In contrast to short-term mortality, long-term all-cause mortality was not associated with the ward type. The following factors may account for this observation. First, in the AKI cohort, long-term mortality may have been underestimated: in 16 studies, AKI was associated with increased long-term mortality (up to an 83% 5-year mortality risk). Because our maximum follow-up time was 2.4 years, the long-term effect of the Nephrology department on AKI and CKD care may only be realized after a longer period. Second, our finding that admission to the Nephrology department was beneficial only for short-term mortality may in fact reflect the critical need for expertise in the management of kidney patients, whereas long-term mortality is multi-factorial, involving general practitioners, pharmacists, nurses, and dietitians who are not always acquainted with the subtleties of CKD care. In this regard, outpatient Nephrology clinic visits after discharge were not accounted for. The benefit of in-patient Nephrology care was previously reported primarily for AKI patients. Meier et al found that hospital-acquired AKI patients who had been referred early (within 5 days after development of AKI) to a Nephrologist were at lower risk for in-hospital morbidity and mortality compared with non-Nephrologist referral and late (>5 days) Nephrology referral. A similar difference in short-term mortality in a subgroup of 296 non-critically ill AKI patients requiring acute dialysis was reported between Nephrology and Medicine wards, where admission to a closed-staff Nephrology department resulted in a 20% mortality rate vs. a 52% mortality rate in the medical wards. Our study focused on 2 different cohorts of non-critically ill kidney patients and further extends the potential benefit of closed-staff Nephrology care to all AKI stages and to CKD. Thus, we raise the hypothesis that a better outcome for hospitalized kidney patients is possible, not necessarily via advanced sophisticated technology, but rather through hospital organizational steps promoting closed-staff Nephrology departments. Limitations This study has some limitations. First, this is a single-center study and may not reflect other institutions. Nevertheless, since our single center is the only referral medical center in our region, this may be an advantage, as missing data are unlikely. Second, the baseline characteristics of the Nephrology ward patients were more favorable, as they were younger and had fewer cardiovascular comorbidities. In contrast, advanced CKD and AKI stages, which are known major risk factors, were more prevalent in the Nephrology group. Although we used a robust propensity analysis to account for the differences in patients’ comorbidities, we cannot rule out an overlooked baseline risk factor that could have skewed our results. Third, we did not differentiate between hospitalizations conferring a bad prognosis, e.g., sepsis or MI, and hospitalizations conferring a good prognosis, e.g., AV shunt surgery. Fourth, we do not have data regarding Nephrology clinic visits after hospital discharge; however, this would seem to affect mostly the long-term outcomes rather than the short-term ones. Strengths First, our more extensive study supports the results of a previous smaller study in a subgroup of AKI patients requiring acute dialysis, in which admission to a Nephrology ward was also associated with reduced short-term mortality. Our study extends this finding to all stages of AKI and to CKD.
Second, because missing data are unlikely in a single referral center, our study population appears to accurately reflect the kidney patient population in our area. Conclusion Both AKI and CKD patients admitted to the Nephrology department demonstrate significantly reduced short-term mortality when compared with general Medicine departments. These findings highlight the human factor in kidney patient outcomes and support the role of highly trained, closed-staff Nephrology departments for specialized kidney patient care. Supporting information: S1 Table, Exclusion criteria for CKD and AKI patients (DOCX); S1 and S3 Raw data (SAV); S2 and S4 Raw data (XLSX).
Integrating clinical genetics in cardiology: Current practices and recommendations for education
Cardiovascular disorders have a high degree of heritability. Genotype-driven assessments suggest that genetically mediated syndromes are more prevalent than clinical disease estimates. Recent advancements in cardiovascular genetics have facilitated the early diagnosis of cardiovascular disease and the identification of at-risk individuals. Genetic testing can inform risk status, diagnosis, and management for multiple cardiovascular disorders. Cardiac disorders with established genetic testing include cardiomyopathy and heart failure; arrhythmia syndromes such as long QT syndrome, early onset atrial fibrillation, and Brugada syndrome; the aortopathies, such as Marfan syndrome and Loeys-Dietz syndrome; familial hypercholesterolemia; congenital heart disease; and neuromuscular disorders. The Heart Rhythm Society and other cardiac professional societies recommend genetic testing as part of risk stratification for managing a number of arrhythmia syndromes, including risks for sudden cardiac death (SCD). Genetic testing for inherited cardiovascular disorders provides valuable information for diagnosis and family cascade testing; the latter presents unique opportunities for early intervention through screening and risk reduction and for reduction in health care costs for unaffected family members. The correct identification of a genetic condition has been found to reduce morbidity and mortality by predicting those with the highest risk of adverse outcomes, altering medical management earlier in the disease process, and ultimately saving health care costs. At the same time, the incorrect attribution of causation to a variant can be psychosocially and financially costly with respect to diagnosis, prevention/treatment, family risk assessment, and reproductive advice. Therefore, ensuring that cardiovascular physicians and nurses are knowledgeable and prepared to accurately incorporate genetic testing in practice is essential to improving patient outcomes. Previous studies in primary care settings found that providers hold positive views about the importance of genetics but lack adequate preparation to implement genetic testing and to use genetic test results to inform patients’ medical management. As part of a large study to develop and implement an educational program about genetic advances in SCD, qualitative interviews with cardiovascular providers were conducted to explore the extent to which genetics (eg, genetic testing) is currently integrated in their practice, practitioners’ motivations or interest in using genetics in cardiac care, and their preferences for cardiovascular genetic education. Participants and recruitment After approval by the Institutional Review Board at Northwestern University, participants were recruited using purposive sampling from cardiology practices in the Midwest and the Northeast. Potential participants were identified by an expert group of health care providers, researchers, and genetic counselors involved with the study. Interested participants responded to a recruitment email and were screened for eligibility; eligibility criteria included the following: (1) employed by an accredited hospital or clinic in the United States, (2) affiliated with cardiology across the lifespan, (3) involved in the care of patients at risk for SCD, (4) a physician (MD) or advanced practice nurse (APN), and (5) able to read and speak English.
After establishing eligibility, potential participants received a Research Electronic Data Capture (Vanderbilt University) link to provide electronic informed consent and complete a demographic survey. Data collection tools and procedure The interview guide included questions about participants’ current clinical use of genetics, barriers and facilitators to the integration of genetics into clinical care, motivations for using genetics, and preferences for receiving additional education about cardiac-related genetics. Two investigators trained in qualitative research methods conducted phone interviews between December 2019 and November 2020. Each interview lasted approximately 30 minutes. Participants could receive a $25 gift card in appreciation for their time or could donate the $25 to the Sudden Arrhythmia Death Syndromes Foundation. Data saturation was achieved around the 35th interview. Data analysis Audio files were professionally transcribed, de-identified, checked for accuracy, and transferred into MAXQDA version 20 (VERBI GmbH) for thematic analysis. First, 2 investigators read and discussed the transcripts and identified codes on the basis of the overarching research questions and interview guide. Subsequently, they closely examined 3 transcripts by applying the codes and identifying emergent themes through iterative discussion, leading to the development of a final codebook. They independently applied the codebook to 5 additional transcripts (12%) and achieved acceptable intercoder reliability (α = 83.0). The remainder were divided and coded separately. In total, 4 investigators collaboratively contextualized the codes through discussion and identified overarching themes.
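As an illustration of the double-coding check described above (the coded segments and code labels below are invented, and the study itself reports α = 83.0), percent agreement and Cohen's kappa are two common two-coder agreement measures that can be computed as follows.

```python
# Illustrative two-coder agreement check on double-coded transcript segments;
# the code labels below are invented. The study reports alpha = 83.0; percent
# agreement and Cohen's kappa are shown here as two common agreement measures.
from sklearn.metrics import cohen_kappa_score

# one hypothetical code per segment across the double-coded transcripts
coder1 = ["barrier", "motivation", "education", "barrier", "team", "education"]
coder2 = ["barrier", "motivation", "education", "team", "team", "education"]

agreement = sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)
kappa = cohen_kappa_score(coder1, coder2)  # chance-corrected agreement
print(f"percent agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```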
Participant characteristics A total of 43 participants completed interviews. Most participants were female (n = 26; 60.5%), White (n = 36; 83.7%), and non-Hispanic (n = 42; 97.7%), with a mean age of 40.7 (SD = 10.85) years. In total, 27 participants were MDs (62.8%) and 16 were APNs (37.2%). Participants were fairly evenly split between adult care (n = 21; 48.8%) and pediatric care (n = 19; 44.2%). Most participants (n = 30; 69.8%) reported spending 76% to 100% of their time in direct patient care, had been practicing medicine for 5 to 10 years (n = 24; 57.1%), and had <5 years of experience working in cardiology (n = 19; 44.2%). Participants’ work environment was categorized according to bed size (medium vs large) using numbers provided by the American Hospital Directory and as a teaching vs nonteaching institution on the basis of data from the Agency for Healthcare Research and Quality. Most participants worked in large hospitals (n = 27; 62.8%) and teaching hospitals (n = 38; 83.7%); all of them were in urban environments. Qualitative results This investigation examined 3 broad categories: (1) the current use of genetic testing in practice, (2) motivations to integrate genetic testing, and (3) desired education about cardiac genetics. Two themes emerged across all 3 categories: (1) the rapid advancements in genetic science and (2) the importance of a team-based approach to care. Nearly all participants recognized genetics as a rapidly evolving field, exemplified by the perceived speed with which genetics moves from research to clinical application. This subsequently influenced the amount of genetics expertise that the participants desired to gain. In other words, keeping up to date with cardiac genetic science was viewed as outside their scope of practice; this belief was reflected in the type of genetic information they wanted and the frequency with which they wanted to receive it. These views motivated a team-based approach to patient care. A team-based approach refers to the integration of various professionals with complementary expertise in patient care. Sometimes this approach was enacted through referrals to electrophysiologists or genetics experts, or through informal conversations with genetics experts (eg, genetic counselors). In other cases, organizational processes and practices enabled formal regular collaborations via case conferences or interdepartmental meetings. Beyond the genetic counselor, participants described including electrophysiologists, APNs, and cardiology fellows in the identification, referral, and management of patients with a possible genetic condition. Participants who infrequently used genetic testing (including referrals) in practice often described the lack of available genetic expertise in their institution as a barrier. The themes are presented below in further detail.
Each participant quote is immediately followed by the provider type, bed size, and setting in parentheses. Use of genetics in practice All participants used cardiovascular genetic testing, but with varying regularity. When ordering testing, nearly all consulted with or referred to a genetic counselor or genetics expert. Those in pediatrics and electrophysiology viewed genetic testing as standard of care and explained that they used genetics more extensively than adult general cardiologists. As a participant shared, ordering genetics in pediatrics is similar to “a knee jerk reflex” (MD, pediatric, medium bed size, academic). Motivation to use genetics is driven by clinical relevance Participants described how the potential for genetic test results to inform diagnosis or treatment drove a recent shift to incorporate genetic counseling referrals and testing in their practice. As a participant explained, “when the labs, the clinical labs became more available, it took a little time. I think we were still treating patients based on their clinical situation, but now more and more we’re relying more on the genetic test results” (MD, pediatrics, medium bed size, academic). A positive genetic test, participants explained, could save lives, inform therapies, or identify other at-risk family members. A negative genetic test in at-risk family members could reduce the screening burden and the associated costs and alleviate worry and anxiety. Despite general optimism toward the clinical utility of genetics in cardiac care, participants’ motivation was often dampened by perceived limitations in their current knowledge and rapid changes in genetic science. Variants of uncertain significance and inconclusive or inconsistent results were a challenge when interpreting results and determining patient management. Although genetic testing was viewed as useful, it was noted that genetic causes of cardiac conditions and diseases are rare. Nevertheless, genetics is a quickly evolving field, which led many to imagine cardiovascular genetics’ utility for a wider segment of their patient population in the future. A couple of participants specifically indicated that recent guideline changes, which include clinical genetic testing, can motivate genetic testing on a larger scale. Participants also explained that increased implementation of genetics was facilitated by changes within their organization, including simplified administrative processes associated with referrals to genetics and genetic test ordering, and by improvements to external processes such as insurance coverage for patients and reductions in the cost and time associated with genetic testing. Access to an expert Most participants in our study referred to a genetics expert to order testing or integrated genetics using a team approach. Genetics was viewed as a specialized field in knowledge and practice that many felt was outside the scope of their own specialty. Despite recognizing the promise of cardiovascular genetics, participants indicated that it was challenging to stay abreast of emerging evidence. In particular, participants felt responsible for communicating genetic information to their patients, but some did not feel prepared. For example: “I feel like getting the results, there’s a certain responsibility when you’re the ordering provider in getting the results and then having to communicate that to the patient when you really don’t feel as prepared to do that. So, I’ve tried to just facilitate it getting done rather than being the ordering provider.
I feel like I know how to treat heart failure and part of my role is really trying to keep people out of the hospital. But as far as the nitty-gritty, how to interpret genetics results, I don’t feel prepared for and thus wouldn’t feel comfortable communicating that back to the patient” (#38, APN, adult, large bed size, academic). Access to a genetics expert was pivotal in motivating genetic testing, in particular when a genetic counselor who specializes in cardiology has a physical presence in-clinic. Participants with direct access to genetic counselors found it easy to reach them with questions, to provide educational information, and to participate in meetings and case conferences. Genetic counselors’ visibility increased participants’ awareness and kept genetics at the forefront. Those who did not have direct access to a genetic counselor described thinking about genetic testing or referring for genetic testing less often. Those at institutions with initiatives related to genetics were more likely to consider genetic testing because they had access to genetic counselors who specialized in cardiology. For example, a new organizational initiative to test all patients using a genomic platform generated increased awareness of genetics at the organizational level and generated support for the role of genetic experts within the institution. “The moment the organization got on board with genetics at our organization at a higher level, that’s when it filtered down and it got [a genetic counselor] into the other departments” (APN, adult, large, academic). Barriers to using genetics in practice The main barriers were insurance coverage and/or out-of-pocket costs; however, many noted that these barriers were diminishing. In pediatric settings, participants described difficulty in ensuring understanding of, and addressing concerns about, the effect of genetic test results on insurance coverage and discrimination when talking with the patient’s family. Similarly, those treating adult patients also cited difficulty in discussing genetic risk with patients as a barrier. Some participants felt uncomfortable ordering or referring for genetic testing because of a lack of knowledge about when to order a test, what test to order, how to interpret the results, and what management recommendations to provide. Organizational barriers existed for some participants with respect to referring patients (particularly if the referral was outside their health care system), placing the order, and knowing where genetics fit within the clinical workflow. As a participant noted, “I don’t know if there’s one thing that’s really prevented me [from using genetics]. It’s just not part of my routine when it’s trying to follow up with patients… on what they have going on at that exact moment. It’s usually kind of in the background, so it’s not at the top of my list of things to check off” (APN, adults, large bed size, non-academic). Education When asked about what could improve participants’ use of genetics in practice, additional education was often referenced: “I think, actually, it would be interesting just to have a little more education…I guess, what we could be doing better, what’s new in genetics research” (APN, adult, large, academic). Participants’ opinions about their level of genetics knowledge varied, but most felt they could learn more, particularly because genetics was constantly evolving.
Rapid advancements in the field of genetics led to both optimism and uncertainty regarding the use of genetics in practice and motivated participants’ interest in additional and ongoing education. Almost no participants had received formalized genetics training beyond one-off conference presentations and/or single didactic lectures within their department. Many participants received genetic information through mini-lectures or presentations by genetics experts at their institution. Most often, participants described learning “on the fly” from colleagues, partners, or bosses and by searching the internet, reading publications or review articles, looking for guidelines, and talking with their local genetics expert(s) when that resource existed. This type of education and research occurred when they encountered a patient whom they believed was appropriate for genetic testing or if they provided care for a patient with a positive genetic test result. One participant gave the following description: “There’s a handful of other times where there’s a bell that goes off in your head and you know that you are supposed to pursue genetic testing for this or that, but I think that’s not a super nuanced understanding of what testing to do… there’s probably a lot more stuff for cardiology that I should know. I think that it is not built into the educational curriculum in an obvious or formalized way. We see it all the time and so you learn it on the fly, but that, I’m sure, affects the learning curve for us” (MD, pediatric, medium, academic). Participants do not want to become experts Most participants said they were motivated to learn more about genetics, but they did not want to become experts because genetics was seen as outside the scope of their practice. Participants desired education about broader questions related to testing: “I think having a good understanding of which patients need to be referred for genetic testing and what type of test they should get. Which diagnoses do we recommend genetic testing for, and then particularly, interpretation of the results and understanding how that impacts the further care of my patient, would be helpful” (MD, pediatric, medium bed size, academic). Participants were interested in understanding who might be at risk for genetic conditions and the appropriate surveillance, prevention, or treatment recommendations. Some participants described the value of the genetic counselor’s interpretation notes, but others did not have access to view them because of electronic medical record limitations and were left to interpret the report on their own. A few participants wanted information about ordering genetic testing because not all laboratories have the same testing options or processes. Topics relevant to their personal specialty within cardiology were of greatest interest. For example, some participants were interested in genetics related to hypertrophic cardiomyopathies, channelopathies, or lipidology, whereas others were interested in genetics related to SCD or the genetics associated with aortic aneurysms or rupture. Participants preferred short lectures delivered by experts, with some preferring in-person to virtual formats. Many participants wanted these to be presented by a genetics expert within their own clinic or organization who could discuss relevant processes such as referring and testing within their specific health care system. There was no real consensus on whether synchronous or asynchronous education was preferred, because both had advantages and disadvantages.
A total of 43 participants completed interviews. Most participants were female (n = 26; 60.5%), White (n = 36; 83.7%), and non-Hispanic (n = 42; 97.7%), with a mean age of 40.7 (SD = 10.85) years. In total, 27 participants were MDs (62.8%) and 16 were APNs (37.2%). Participants were fairly evenly split between adult care (n = 21; 48.8%) and pediatric care (n = 19; 44.2%). Most participants (n = 30; 69.8%) reported spending 76% to 100% of their time in direct patient care, had been practicing medicine for 5 to 10 years (n = 24; 57.1%), and had <5 years of experience working in cardiology (n = 19; 44.2%). Participants' work environment was categorized according to bed size (medium vs large) using numbers provided by the American Hospital Directory and as a teaching vs nonteaching institution on the basis of data from the Agency for Healthcare Research and Quality. Most participants worked in large hospitals (n = 27; 62.8%) and teaching hospitals (n = 38; 83.7%); all of them were in urban environments. This investigation examined 3 broad categories: (1) the current use of genetic testing in practice, (2) motivations to integrate genetic testing, and (3) desired education about cardiac genetics. Two themes emerged across all 3 categories: (1) the rapid advancements in genetic science and (2) the importance of a team-based approach to care. Nearly all participants recognized genetics as a rapidly evolving field, exemplified by the perceived speed with which genetics moves from research to clinical application. This subsequently influenced the amount of genetics expertise that the participants desired to gain. In other words, keeping up to date with cardiac genetic science was viewed as outside their scope of practice; this belief was reflected in the type of genetic information they wanted and the frequency with which they wanted to receive it.
These views motivated a team-based approach to patient care. A team-based approach refers to the integration of various professionals with complementary expertise in patient care. Sometimes this approach was enacted through referrals to electrophysiologists or genetics experts, or through informal conversations with genetics experts (eg, genetic counselors). In other cases, organizational processes and practices enabled formal, regular collaborations via case conferences or interdepartmental meetings. Beyond the genetic counselor, participants described including electrophysiologists, APNs, and cardiology fellows in the identification, referral, and management of patients with a possible genetic condition. Participants who infrequently used genetic testing (including referrals) in practice often described the lack of available genetic expertise in their institution as a barrier. Themes are presented in further detail below. Each participant quote is immediately followed by provider type, bed size, and setting in parentheses. All participants used cardiovascular genetic testing but with varying regularity. When ordering testing, nearly all consulted with or referred to a genetic counselor or genetics expert. Those in pediatrics and electrophysiology viewed genetic testing as standard of care and explained that they used genetics more extensively than adult general cardiologists. As a participant shared, ordering genetics in pediatrics is similar to “a knee jerk reflex” (MD, pediatric, medium bed size, academic).

Motivation to use genetics is driven by clinical relevance

Participants described how the potential for genetic test results to inform diagnosis or treatment drove a recent shift to incorporate genetic counseling referrals and testing in their practice. As a participant explained, “when the labs, the clinical labs became more available, it took a little time. I think we were still treating patients based on their clinical situation, but now more and more we’re relying more on the genetic test results” (MD, pediatrics, medium bed size, academic). A positive genetic test, participants explained, could save lives, inform therapies, or identify other at-risk family members. A negative genetic test in at-risk family members could reduce screening burden and the associated costs and alleviate worry and anxiety. Despite general optimism toward the clinical utility of genetics in cardiac care, participants’ motivation was often dampened by perceived limitations in their current knowledge and rapid changes in genetic science. Variants of uncertain significance and inconclusive or inconsistent results were a challenge when interpreting results and determining patient management. Although genetic testing was viewed as useful, it was noted that genetic causes of cardiac conditions and diseases are rare. Nevertheless, genetics is a quickly evolving field, which led many to imagine cardiovascular genetics’ utility for a wider segment of their patient population in the future. A couple of participants specifically indicated that recent guideline changes, which include clinical genetic testing, can motivate genetic testing on a larger scale. Participants also explained that increased implementation of genetics was facilitated by changes within their organization, including simplified administrative processes associated with referrals to genetics and genetic test ordering, and by improvements to external processes such as insurance coverage for patients and reductions in the cost and time associated with genetic testing.
Access to an expert

Most participants in our study referred to a genetics expert to order testing or integrated genetics using a team approach. Genetics was viewed as a specialized field in knowledge and practice that many felt was outside the scope of their own specialty. Despite recognizing the promise of cardiovascular genetics, participants indicated that it was challenging to stay abreast of emerging evidence. In particular, participants felt responsible for communicating genetic information to their patients, but some did not feel prepared. For example: “I feel like getting the results, there’s a certain responsibility when you’re the ordering provider in getting the results and then having to communicate that to the patient when you really don’t feel as prepared to do that. So, I’ve tried to just facilitate it getting done rather than being the ordering provider. I feel like I know how to treat heart failure and part of my role is really trying to keep people out of the hospital. But as far as the nitty-gritty, how to interpret genetics results, I don’t feel prepared for and thus wouldn’t feel comfortable communicating that back to the patient” (#38, APN, adult, large bed size, academic). Access to a genetics expert was pivotal in motivating genetic testing, particularly when a genetic counselor who specialized in cardiology had a physical presence in the clinic. Participants with direct access to genetic counselors found it easy to reach them with questions; these counselors also provided educational information and participated in meetings and case conferences. Genetic counselors’ visibility increased participants’ awareness and kept genetics at the forefront. Those who did not have direct access to a genetic counselor described thinking about genetic testing or referring for genetic testing less often. Those at institutions with initiatives related to genetics were more likely to consider genetic testing because they had access to genetic counselors who specialized in cardiology. For example, a new organizational initiative to test all patients using a genomic platform generated increased awareness of genetics at the organizational level and support for the role of genetic experts within the institution. “The moment the organization got on board with genetics at our organization at a higher level, that’s when it filtered down and it got [a genetic counselor] into the other departments” (APN, adult, large, academic).

Barriers to using genetics in practice

The main barriers were insurance coverage and/or out-of-pocket costs; however, many noted that these barriers were diminishing. In pediatric settings, participants described difficulty explaining, and addressing families’ concerns about, the effect of genetic test results on insurance coverage and discrimination. Similarly, those treating adult patients also cited difficulty discussing genetic risk with patients as a barrier. Some participants felt uncomfortable ordering or referring for genetic testing because of a lack of knowledge about when to order a test, what test to order, how to interpret the results, and what management recommendations to provide. Organizational barriers existed for some participants with respect to referring patients (particularly if the referral was outside their health care system), placing the order, and knowing where genetics fit within the clinical workflow.
As a participant noted, “I don’t know if there’s one thing that’s really prevented me [from using genetics]. It’s just not part of my routine when it’s trying to follow up with patients… on what they have going on at that exact moment. It’s usually kind of in the background, so it’s not at the top of my list of things to check off” (APN, adults, large bed size, non-academic).
Education

When asked about what could improve participants’ use of genetics in practice, additional education was often referenced: “I think, actually, it would be interesting just to have a little more education…I guess, what we could be doing better, what’s new in genetics research” (APN, adult, large, academic). Participants’ opinions about their level of genetics knowledge varied, but most felt they could learn more, particularly because genetics was constantly evolving. Rapid advancements in the field of genetics led to both optimism and uncertainty regarding the use of genetics in practice and motivated participants’ interest in additional and ongoing education. Almost no participants had received formalized genetics training beyond one-off conference presentations and/or single didactic lectures within their department. Many participants received genetic information through mini-lectures or presentations by genetics experts at their institution.
Most often, participants described learning “on the fly” from colleagues, partners, or bosses and by searching the internet, reading publications or review articles, looking for guidelines, and talking with their local genetics expert(s) when that resource existed. This type of education and research occurred when they encountered a patient who they believed was appropriate for genetic testing or if they provided care for a patient with a positive genetic test result. One participant gave the following description: “There’s a handful of other times where there’s a bell that goes off in your head and you know that you are supposed to pursue genetic testing for this or that, but I think that’s not a super nuanced understanding of what testing to do… there’s probably a lot more stuff for cardiology that I should know. I think that it is not built into the educational curriculum in an obvious or formalized way. We see it all the time and so you learn it on the fly, but that, I’m sure, affects the learning curve for us” (MD, pediatric, medium, academic).

Participants do not want to become experts

Most participants said they were motivated to learn more about genetics, but they did not want to become an expert because genetics was seen as outside the scope of their practice. Participants desired education about broader questions related to testing. “I think having a good understanding of which patients need to be referred for genetic testing and what type of test they should get. Which diagnoses do we recommend genetic testing for, and then particularly, interpretation of the results and understanding how that impacts the further care of my patient, would be helpful” (MD, pediatric, medium bed size, academic). Participants were interested in understanding who might be at risk for genetic conditions and appropriate surveillance, prevention, or treatment recommendations. Some participants described the value of the genetic counselor’s interpretation notes, but others did not have access to view them because of electronic medical record limitations and were left to interpret the report on their own. A few participants wanted information about ordering genetic testing because not all laboratories have the same testing options or processes. Topics relevant to their personal specialty within cardiology were of greatest interest. For example, some participants were interested in genetics related to hypertrophic cardiomyopathies, channelopathies, or lipidology, whereas others were interested in genetics related to sudden cardiac death (SCD) or the genetics associated with aortic aneurysms or rupture. Participants preferred short lectures delivered by experts, with some preferring in-person to virtual formats. Many participants wanted these to be presented by a genetics expert within their own clinic or organization who could discuss relevant processes such as referring and testing within their specific health care system. There was no real consensus on whether synchronous or asynchronous education was preferred because both had advantages and disadvantages. They felt simply seeing and hearing from the local genetics expert would serve as a reminder about their services. Receiving education or brief updates more frequently, such as every 3 to 6 months to a year, was deemed necessary owing to the rapid advancements in genetics and to keep genetics at the forefront when they saw patients.
Finally, participants reasoned that because patients who are appropriate for genetic testing are uncommon, the training they receive for genetics is often not immediately applicable to practice. Therefore, they wanted quick reference materials for when a patient presented with a phenotype indicative of testing or when a patient received a positive genetic test result. “…when I’m seeing patients myself in a small clinic, when something comes up I need to know about it, right? So, if I see a patient with Marfan syndrome then I need to find out what the latest is on Marfan syndrome. If I’m in a meeting and they say, we’re going to review everything about Marfan syndrome, it doesn’t, at this stage of my career, doesn’t necessarily rule my interest. So it’s more a place where I can go to find out what the latest is on a syndrome once a positive genetic test comes back and some question comes up” (MD, pediatric, large, academic). Recorded mini-lectures that they could rewatch, hand-outs, and emails that reviewed the latest findings/discoveries with links to relevant articles/publications were suggested as possible solutions. A desire for guidelines was also mentioned by some participants, and a participant wanted an application that they could consult on their phone while moving through clinic.
Understanding how cardiac care providers think about and use genetic testing in practice can inform effective educational approaches. All of our participants integrated cardiovascular genetics in their practice with varying frequency, which often meant referring patients to a genetic counselor. Nearly all participants felt genetics could inform patient diagnosis and management and indicated support for the clinical use of genetics, particularly given the recent advancements in the field. This important finding indicates that cardiology providers recognize the value of genetic testing, and thus, efforts may not need to be expended on gaining support for the value of genetics in cardiology practice. However, as participants explained, the rapidly advancing nature of genetics is a double-edged sword. Innovations in genetics have reduced costs, increased accessibility, and offered patients promise for improved health outcomes. Nevertheless, the rapidly evolving field can overwhelm medical professionals who are not genetic specialists and who often struggle to keep up with the field. Educational materials and interventions should focus, in part, on resolving this tension, which will hopefully, in turn, expand the field of cardiovascular genetics by allowing nongenetic providers a greater understanding of the role genetics can play in their patient population. Providers could identify and partner with local Cardiovascular Genetics Clinics that specialize in a range of cardiovascular genetic conditions or with the Centers of Excellence that exist for some cardiac-specific diseases. These specialized clinics often have genetics as part of their program and can even provide interpretation of the genetic test results to the referring MDs, including guideline-based care recommendations and gene-specific medications and/or clinical trials.
Participants were most interested in topics related to their own specialty, identifying patients who may have a genetic risk, knowing which genes might be associated with those risks, and managing patients with pathogenic (disease-causing) genetic variants. These desired topics are consistent with the American Heart Association scientific statement recommending that cardiovascular providers should, at a minimum, “be conversant in basic concepts of genetics and have the ability to evaluate whether their patients might have genetic cardiovascular conditions.” Participants sought quick-reference information to consult as they encounter patients who may be at risk for or have a pathogenic genetic variant. Findings from a recent study indicated that genetic and nongenetic professionals spend time seeking out additional information to educate themselves before discussing results with the family. Thus, having up-to-date information about cardiology-relevant variants in a single repository, with curated guidelines for testing, treatment, and/or management, may improve clinical efficacy and facilitate variant interpretation. Having organization-specific point-of-care tools and/or care plans would help them know what tests might be useful and what the latest guidelines suggest, in addition to highlighting organizational workflow and resources that can improve clinical efficiency. In contrast with another study, which found cardiologists were prepared to clinically implement genome sequencing, participants in our study, particularly those serving adult populations, felt unprepared to incorporate genetic testing without the support of a genetics expert. Similar to that study, participants in our study felt an obligation to know how to interpret and communicate the information from genetic testing to their patients. Apprehension about their own genetics knowledge and their skills in communicating that information to patients prevented some participants from regularly incorporating genetic testing in their practice. Those whose organization supported genetics by providing department-specific genetic counselors were more likely to integrate genetics in their practice. Despite their enthusiasm, those who did not have a genetic counselor available at their institution did not frequently use genetics in their practice. This aligns with participants’ observation that the frequency with which they encountered a genetic counselor or genetics expert served as a reminder to consider genetics. Frequent interactions between genetics experts and cardiovascular teams could potentially raise awareness of genetics in practice. A recent American Heart Association scientific statement provides guidance on best practices in cardiovascular genetic testing and highlights the importance of including a genetics professional when a patient is identified as a candidate for genetic testing, to support choosing the appropriate test, interpreting the results, and counseling the patient appropriately. In addition, several recent studies echo our findings and point to the advantage of including a genetic counselor in cardiac care teams. For example, a recent study found that genetic counselors were more confident in counseling patients with variants of uncertain significance but were less willing to provide treatment recommendations, whereas the reverse was true for cardiologists.
An article examining recent changes in pediatric cardiovascular genetics found that the increased use of panel testing, which includes a greater number of genes associated with cardiovascular conditions, has increased the complexity of genetic testing and result interpretation, leading to the recommendation to include genetic counselors in pediatric electrophysiology and cardiomyopathy teams. To summarize, involving a genetic counselor in cardiology practice can facilitate appropriate test selection, identification of the best person in a family to test, accurate result interpretation, and effective communication with the patient and their family, which ultimately can reduce health care costs. Participants in this study were identified by an expert group of clinicians, researchers, and genetic counselors and may therefore be part of a network more familiar with cardiovascular genetics. Furthermore, all participants were located in urban settings, and most were located at teaching hospitals and were therefore more likely to have access to a genetics expert. Future research should evaluate rural providers’ experiences and those of providers with limited access to a genetics expert. Cardiology providers find genetics and genetic testing valuable in practice with regard to diagnosis, treatment, and prevention. Cardiac genetics is viewed as a specialized field that should be incorporated in a team-based approach to cardiac care through a cardiac genetics expert; one model suggests the use of Centers of Excellence for both patient care and training. The increased use of telemedicine may facilitate integration between Centers of Excellence and the patients and providers who do not have ready access to integrated cardiac genetics care. Information participants believed was needed to facilitate genetic testing through a cardiac genetics expert included information about phenotypes that may indicate genetic testing, the genes associated with conditions related to their specialty, and treatment and management recommendations for those with a positive genetic test. Given the rapid evolution of genetics, cardiology providers wanted frequent updates about genetics to keep this information at the forefront in practice. They also desired easily accessible tools or care plans where genetic information could be referenced when they encountered a patient who might be appropriate for genetic testing and for patients who tested positive.
Patient-reported outcome and quality of life research policy: Japan Clinical Oncology Group (JCOG) policy
Current situation and background

Current situation of patient-reported outcome/health-related quality of life research in cancer clinical research

Traditionally, in cancer treatment development, results obtained from clinical trials have been analysed and new treatment methods have been provided on scientific grounds by evaluating safety and efficacy using objective endpoints. However, a movement favouring patient-focused drug development, which reflects the opinions, experiences and preferences of patients actually being treated, has been spreading, mainly in Europe and the USA. The US Food and Drug Administration published a guidance in 2009 that summarized points to note when using patient-reported outcome (PRO)/health-related quality of life (HR-QOL) as an endpoint in treatment development; in Europe, the European Medicines Agency published a guidance for evaluating HR-QOL in 2005 and a revised edition in 2016. The Japan Clinical Oncology Group (JCOG) has aimed to improve the quality of medical care and treatment outcomes for cancer patients by establishing new, highly effective standard therapies through multicentre clinical trials. To this end, clinical trials have been implemented that preferentially adopt exceptionally reliable and objective endpoints, including overall survival. However, there has been increasing demand from JCOG researchers for the use of PRO/HR-QOL, whereby patients conduct self-reported evaluations of the treatment they have received in a clinical trial, as a secondary endpoint. This situation resulted in the formation of the former quality of life (QOL) ad hoc committee, which deliberated on the conditions for using PRO/HR-QOL in clinical trials conducted by JCOG (hereafter, JCOG trials) and created the former version of the QOL Assessment policy (hereafter, the former QOL policy), approved on 18 January 2006. However, when using PRO/HR-QOL as an endpoint, there are also issues regarding data handling and the reliability and scientific aspects of the data, including the lack of standardized methods for dealing with missing values from patients whose conditions have deteriorated and of statistical analysis methods for processing the results obtained. Furthermore, PRO/HR-QOL research creates an extremely large burden on JCOG researchers and the Data Center for tasks such as data collection, resulting in limited PRO/HR-QOL research in JCOG trials. Although PRO/HR-QOL was adopted as an endpoint in only nine of the 105 trials conducted by JCOG after the creation of the former QOL policy, the collection proportions for PRO/HR-QOL survey forms, which were previously a cause for concern, were relatively good (~90%), so the environment for PRO/HR-QOL research has gradually improved in JCOG trials.

Background for establishing a PRO/QOL research committee and revisions of the former QOL policy

As stated previously, there has been limited PRO/HR-QOL research in JCOG trials. However, recently, patient and public involvement (PPI) in cancer treatment development has been promoted, mainly by the Ministry of Health, Labour and Welfare, and incorporating PRO/HR-QOL assessments into cancer clinical trials is once again attracting attention.
JCOG is also now of the opinion that it is necessary to reconsider the position of PRO/HR-QOL research in its trials, given the groundswell of support for promoting PPI in cancer treatment development and the deepening of cooperative research and personnel exchange with the European Organisation for Research and Treatment of Cancer (EORTC), which has led PRO/HR-QOL research in cancer since 1980. The EORTC-JCOG PRO/QOL Workshop was held on 1 September 2018, providing an opportunity to further promote this movement. After deliberation in this workshop, it was considered necessary to revise the former QOL policy to promote PRO/HR-QOL research in future JCOG trials, resulting in the formation of the JCOG PRO/QOL research ad hoc committee (March 2019), which subsequently became one of the discipline committees (April 2021).

Purpose

The purpose of this policy is to define JCOG's PRO/HR-QOL research and to present guidance for using PRO/HR-QOL as an endpoint in JCOG trials.

Glossary

The terminology used in this policy is explained below.

PRO: refers to clinical research outcomes for which patients evaluate their diseases and treatments; other people (e.g. doctors) do not add a separate interpretation to patient evaluations.

QOL: a term that expresses the overall quality of a person's lifestyle and life, including multiple factors such as physical, psychological and social perspectives; QOL covers not only patients but also healthy individuals, as described in the WHO's definition of health.

HR-QOL: HR-QOL limits the scope of evaluation to the areas of QOL that are affected by disease or can be expected to improve through medical treatment. Therefore, the QOL measured in cancer clinical trials is HR-QOL, and hereafter the word QOL in this policy means HR-QOL*.

*Some people consider HR-QOL as part of PRO, so there is room for debate regarding this definition. In this policy, PRO and HR-QOL are distinguished based on different perspectives: PRO looks at how one measures, whereas HR-QOL looks at what one measures.

Psychometric properties: the properties of the scale (questionnaire) used to measure QOL that have been evaluated beforehand to ensure the quality of the scale. These are broadly classified into four properties, described below (a short computational sketch follows this glossary):

Reliability: the extent to which the measured values do not contain errors.

Validity: whether the scale actually measures the item that it aims to measure.

Responsiveness: the ability to detect change over time.

Interpretability: the extent to which qualitative meaning can be given to the evaluation results.

Domain: the elements that comprise the concept of QOL, including activity, physicality, spirituality and sociality. Typical QOL questionnaires, such as the EORTC QLQ-C30 and the Functional Assessment of Cancer Therapy-General (FACT-G), comprise multiple corresponding questions to evaluate each domain.

Recall period: the period of time that subjects are asked to remember (recall) when responding to the questionnaire (e.g. 'Please circle only one number that best fits your condition in the past week').

Minimally important difference (MID): the minimal clinically meaningful difference in QOL evaluations.

Scale: a tool such as a question sheet, questionnaire or subject diary used to measure symptoms and function.

Subscale: scales specific to a disease, tumour, symptom and/or treatment (e.g. breast cancer: EORTC QLQ-BR23; head and neck cancer: EORTC QLQ-H&N43), used in addition to general scales (e.g. EORTC QLQ-C30) to evaluate the QOL of cancer patients. Such specific scales are called subscales.

Linguistic validity: typical QOL questionnaires such as the EORTC QLQ-C30 and FACT-G have been translated into Japanese from English. The purpose of translation is to reproduce a questionnaire that is equivalent to the original through appropriate procedures. Important elements for equivalence in translation are conceptual equivalence, semantic equivalence, substantive equivalence and characteristic equivalence.
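To make the 'domain' and 'reliability' entries above concrete, the following is a minimal computational sketch (not part of the policy text itself): it scores one multi-item QOL domain and computes Cronbach's alpha, one widely used index of internal-consistency reliability. The item responses and the 1-4 answer range are invented for illustration only.

```python
# Illustrative sketch: domain scoring and Cronbach's alpha for a hypothetical
# four-item QOL domain (rows = patients, columns = items, answered 1-4).
import numpy as np

items = np.array([
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [2, 2, 2, 3],
    [4, 4, 3, 4],
    [1, 1, 2, 1],
], dtype=float)

# Domain raw score: the mean of the item responses for each patient.
raw_scores = items.mean(axis=1)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total).
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_variances.sum() / total_variance)

print(f"raw domain scores: {raw_scores}")
print(f"Cronbach's alpha: {alpha:.2f}")  # values around 0.7 or above are often read as adequate
```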
Trials including PRO/QOL assessment and the design of those trials
When planning a clinical trial that includes PRO/QOL as an endpoint, the rationale for the endpoint and the hypothesis based on that rationale should be described in advance in the study protocol, as is the practice in other clinical trials, to ensure the scientific validity of conducting the PRO/QOL assessment in the relevant trial. Blinded randomized controlled trials are the most appropriate when using PRO/QOL as an endpoint, because both the efficacy of the investigated treatment and any associated adverse events affect the patient's PRO/QOL assessment. In practical terms, however, blinding is often difficult because of the nature of cancer treatment, and when conducting group comparisons regarding treatment, evaluation by the medical staff is not always more accurate than PRO/QOL (which is the patient's own evaluation); in fact, it has been shown that medical staff tend to underestimate adverse events. Based on this information, adopting PRO/QOL as an endpoint is also acceptable in randomized controlled trials, including open-label trials. PRO/QOL assessment in a single-arm clinical trial is permitted, provided that the purpose of the trial is to investigate the feasibility of the evaluation and to obtain basic data for the PRO/QOL assessment in subsequent randomized controlled trials.

Points to note when developing the study protocol
Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) was published in 2013 as a guideline for developing protocols for intervention studies. The SPIRIT Patient-Reported Outcomes (SPIRIT-PRO) extension was published in 2018, based on the original SPIRIT, as a guideline for developing protocols that use PRO/QOL as an endpoint. Generally, clinical trials using PRO/QOL assessment are planned in accordance with SPIRIT-PRO. SPIRIT-PRO adds a total of 16 items: 11 items (extensions) with content added to address PRO/QOL assessment and 5 items (elaborations) with detailed descriptions of the 33 items proposed in the SPIRIT guidelines. The following is an edited checklist of points to note when developing a clinical trial protocol using PRO/QOL as an endpoint, based on SPIRIT-PRO.

Handling as an endpoint
PRO/QOL is generally used as a secondary endpoint. Using PRO/QOL as the primary endpoint in studies with limited subjects and study design is a topic for future consideration. Studies with 'limited subjects and study design' include those targeting patients with advanced or recurrent cancer for whom the main treatment aim is symptom relief, or studies aimed at the development of palliative treatment.
In fact, a large number of clinical studies have used PRO/QOL as the primary endpoint in the development of palliative radiotherapy. For example, many confirmatory trials of palliative radiotherapy for painful bone metastases have used the pain relief rate, calculated with the numeric rating scale, as the primary endpoint. Many confirmatory trials of palliative radiotherapy for dysphagia in oesophageal cancer have used the severity of dysphagia based on PRO assessment as the primary endpoint. On the other hand, there are almost no reports of PRO/QOL being used as the primary endpoint in cancer clinical trials to confirm the efficacy of new treatments. A systematic review of Phase III trials for recurrent prostate cancer published between 2000 and 2015 found that only 22.5% of the trials included PRO/QOL assessments, and no trial used PRO/QOL as the primary endpoint. However, Wilson et al. described the importance of PRO/QOL assessment and concluded that it could be set as an appropriate endpoint depending on the trial target and purpose. The above does not rule out the possibility of using PRO/QOL as the primary endpoint.

Questionnaires
Questionnaires should be selected based on the purpose of the study, psychometric properties, patient background and other factors. Care should be taken to ensure that the time required to complete a questionnaire is no more than 20 min for the baseline assessment and no more than 10–15 min for a subsequent assessment, to avoid overburdening the patient. Additionally, the linguistic validity of the Japanese versions of questionnaires used in the trial must be confirmed.

Examples of questionnaires
The following are examples of questionnaires widely used in cancer clinical trials that have been translated into Japanese:

1) EORTC Quality of Life Questionnaire (EORTC QLQ-C30)
This is a 30-item questionnaire comprising five functional domains (five items on physical functioning, two items on role functioning, two items on cognitive functioning, four items on emotional functioning and two items on social functioning), symptom scales (three items on fatigue, two items on nausea/vomiting, two items on pain, and one item each on dyspnoea, insomnia, appetite loss, constipation, diarrhoea and financial difficulties) and a two-item global health status/QOL scale. In addition to the core questionnaire (C30), additional subscales for different types of cancer are available, including the LC13 (lung cancer), BR23 (breast cancer) and HN43 (head and neck cancer). The recall period is 1 week. When using this questionnaire for research, it is necessary to preregister via the following URL and obtain permission for use: https://qol.eortc.org/questionnaires/.

2) FACT-G
This is a 27-item questionnaire comprising four domains (seven items on physical well-being, seven items on social/family well-being, six items on emotional well-being and seven items on functional well-being). Several additional subscales are available for different types of cancer and for treatment- or symptom-related questions, including B (breast cancer), L (lung cancer) and Taxane (taxane anticancer drug toxicity survey). The recall period is 1 week. When using this questionnaire for research, it is necessary to preregister via the following URL and obtain permission for use: https://www.facit.org/FACITOrg/Questionnaires.
3) MD Anderson Symptom Inventory
This scale evaluates 13 symptoms that are common in cancer patients (pain, fatigue, nausea, sleep disturbance, distress, shortness of breath, difficulty remembering, lack of appetite, drowsiness, dry mouth, sadness, vomiting and numbness). It also includes six items on interference with daily life (activity of daily living, mood, work including housework, relations with other people, walking and enjoyment of life). Symptoms are rated on an 11-point scale (0–10). The recall period is 24 h. When using this questionnaire for research, it is necessary to preregister via the following URL and obtain permission for use: https://www4.mdanderson.org/symptomresearch/index.cfm.

4) Edmonton Symptom Assessment System
This evaluation sheet was developed for the assessment of nine symptoms (pain, tiredness, drowsiness, nausea, lack of appetite, shortness of breath, depression, anxiety and well-being). The severity of each symptom is rated on an 11-point scale (0–10). Permission is not needed to use this evaluation sheet. See the following URL for details: https://www.ncc.go.jp/jp/ncce/clinic/psychiatry/040/ESAS-r-J.pdf.

5) PRO version of the Common Terminology Criteria for Adverse Events
The PRO version of the CTCAE was developed by the US National Cancer Institute. It contains 124 questions covering 78 adverse-event items. A Japanese version has been created by Yamaguchi et al. and is available for download free of charge from the JCOG website; refer to the following link: https://healthcaredelivery.cancer.gov/pro-ctcae/pro-ctcae_japanese.pdf.

6) EQ-5D
This is a comprehensive evaluation scale developed by the EuroQol group. It comprises two parts, a five-item descriptive system and a visual analogue scale, and the responses can be converted to a standardized utility value anchored at 'completely healthy = 1' and 'dead = 0'. An individual's quality-adjusted life years can be determined with this scale, and it is used for health economic evaluations. When using this questionnaire for research, it is necessary to preregister via the following URL and obtain permission for use: https://euroqol.org/.
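As a worked illustration of the utility-to-QALY conversion just described, the sketch below multiplies each health-state utility by its duration in years and sums the products. The utility values, durations and the two-state trajectory are hypothetical; a real analysis would take utilities from a country-specific EQ-5D value set.

```python
# Illustrative sketch: computing quality-adjusted life years (QALYs) from
# EQ-5D utility values anchored at 1 (full health) and 0 (dead).
# The utilities and durations below are hypothetical examples.

def qaly(states):
    """Sum utility x duration (in years) over consecutive health states."""
    return sum(utility * years for utility, years in states)

# A hypothetical patient: utility 0.85 for 2 years on treatment,
# then 0.60 for 1.5 years after progression.
example = [(0.85, 2.0), (0.60, 1.5)]
print(f"QALYs: {qaly(example):.2f}")  # 0.85*2.0 + 0.60*1.5 = 2.60
```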
Assessment method and data collection
1) Assessment interval
The timing and frequency of the PRO/QOL assessment should strike a balance between the purpose and significance of the study, feasibility and the burden on the patient; this is an important issue. Investigate survey timing while considering the following items:
- The natural course of the disease: the timing of the survey should correspond to the (expected) greatest changes in patient symptoms and QOL during the course of the disease.
- The hypothesis to be confirmed.
- The data analysis method: comparison with baseline, time-to-event, etc.
- The characteristics of the study treatment: for pharmaceuticals, consider factors such as the dose and how long the effect will be maintained after treatment.
- The recall period in the questionnaire: how far back in time will patients be required to evaluate their conditions?
- Patient burden: frequent surveys create a burden for patients and can also affect their willingness to participate in the trial. Ensure that the questionnaires do not overburden patients.

2) Assessment duration
It is recommended that the expected onset of symptoms and toxicity be considered, so that data can be collected during a period covering the most clinically important time. It is important to conduct evaluations continuously after a patient's condition worsens and during the post-treatment period to ensure an accurate PRO/QOL assessment of the protocol treatment. For example, in a randomized controlled trial, the standard treatment group would be expected to have a shorter time until worsening of the primary disease than the study treatment group; in these instances, stopping PRO/QOL assessments at the time the primary disease worsens may result in an overestimation (or underestimation) of PRO/QOL in the standard treatment group. Based on the above, a sufficiently long assessment duration should be specified in the protocol of each study, while considering feasibility and interpretability, to ensure accurate evaluation of the results of PRO/QOL research.

3) Data collection method
Data collection methods include interactive voice response and self-administered surveys (patients complete a paper survey or use an electronic device). Select an appropriate collection method considering feasibility based on the age distribution of the target patients, the disease and staging, as well as the introduction cost.

Statistical considerations
Statistical analysis in PRO/QOL research
Most questionnaires, including QOL scales, are multidimensional and can generate multiple scores (e.g. a score for each domain and a total score). Furthermore, a PRO/QOL assessment is normally conducted at multiple points in time. Graphic display of PRO data is important, and the statistical analysis should be clearly specified in the protocol written before the research begins. PRO endpoints include the score itself and responder/non-responder status (the definition is important; e.g. a 33% reduction in the score) at a specific time point, the time to a specific event (e.g. a two-point reduction in the score), and the change in the score and the area under the curve throughout the entire observation period. It is also necessary to consider an MID (see Glossary). Multiplicity issues may need to be considered at the design, analysis and interpretation steps of the research.
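To make these endpoint definitions concrete, the following sketch derives a responder status at a fixed time point, a time to a specific event and an area under the score-time curve from one patient's longitudinal scores. The visit schedule and scores are hypothetical, and the 33% and two-point thresholds simply mirror the examples given above.

```python
import numpy as np

# Hypothetical longitudinal symptom score for one patient (lower = better).
visits = np.array([0, 4, 8, 12, 16])          # weeks from baseline
scores = np.array([6.0, 4.5, 3.5, 4.0, 3.0])  # symptom score at each visit
baseline = scores[0]

# 1) Responder status at a specific time point: >= 33% reduction at week 8.
responder_wk8 = (baseline - scores[visits == 8][0]) / baseline >= 0.33

# 2) Time to a specific event: first visit with a >= 2-point reduction.
event_idx = np.where(baseline - scores >= 2.0)[0]
time_to_event = visits[event_idx[0]] if event_idx.size else None  # None = censored

# 3) Area under the score-time curve over the whole observation period
#    (trapezoidal rule).
auc = float(((scores[1:] + scores[:-1]) / 2 * np.diff(visits)).sum())

print(responder_wk8, time_to_event, auc)  # True, 8, 66.0
```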
Handling missing data
Missing data inevitably occur in PRO/QOL assessments. The first consideration is to develop a study protocol that will minimize missing data. It is also preferable to apply analytical methods for which missing data are unlikely to affect the conclusion, or analytical methods that fully consider the reasons for the missing data; the protocol should therefore be formulated so that the reasons for missing data are collected and can be understood. There are two levels of missing PRO/QOL data at a specific time point: (1) data are missing for some, but not all, items in the scale, and (2) the entire PRO/QOL assessment has not been conducted. In case (1), methods for dealing with missing items are given in some scale scoring manuals (e.g. methods for calculating the scale score), but it is necessary to confirm thoroughly whether application of these methods is appropriate. In case (2), it is necessary to make an assumption regarding the reason for the missing data in the analysis (i.e. an assumption about the missing data mechanism). There are various statistical approaches, including complete case analysis, a number of imputation methods and model-based methods, but it is necessary to summarize the missing status at each time point and to conduct a statistically valid analysis under the primary missing data assumptions. It is essential to specify thoroughly in the protocol the methods used to deal with missing data. Unfortunately, no universally applicable method of handling missing data can be recommended. The sensitivity of the analysis results to the method of handling missing data should be investigated, especially if the amount of missing data is substantial.
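The two levels of missingness described above can be illustrated as follows. The half rule in the sketch (score a scale when at least half of its items were answered, using the mean of the answered items) follows the approach taken in some scoring manuals, such as the EORTC manual; treat it as an assumption and verify it against the manual actually used. The second part summarizes assessment-level missingness by time point, which should be reported before analysing the data under the chosen missing data assumptions.

```python
import numpy as np
import pandas as pd

# Level (1): some items missing within a scale. Half rule: compute the scale
# score as the mean of answered items if at least half were answered.
def scale_score(items):
    answered = [x for x in items if x is not None and not np.isnan(x)]
    if len(answered) * 2 >= len(items):
        return float(np.mean(answered))
    return np.nan  # too few items answered: score the scale as missing

print(scale_score([2.0, np.nan, 3.0, 2.0]))        # 3 of 4 answered -> 2.33
print(scale_score([np.nan, np.nan, np.nan, 2.0]))  # 1 of 4 answered -> nan

# Level (2): the entire assessment missing. Summarize missingness per visit.
df = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2],
    "week":    [0, 4, 8, 0, 4, 8],
    "score":   [55.0, 60.0, np.nan, 70.0, np.nan, np.nan],
})
missing_by_week = df.groupby("week")["score"].apply(lambda s: s.isna().mean())
print(missing_by_week)  # proportion of missing assessments at each week
```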
Reporting results
When a clinical study including a PRO/QOL assessment has been conducted, publication of the evaluation results may affect the primary analysis of the study; generally, the results of the PRO/QOL assessment should be published at the same time as, or after, publication of the primary analysis results of the study. When reporting the PRO/QOL assessment results of a randomized controlled trial, include information on the reproducibility and validity of the questionnaires used in the study, the methods used for the statistical analysis of the PRO/QOL assessment results, and the methods for handling missing data, in accordance with the CONSORT PRO Extension.

Required resources and methods for PRO/QOL assessments
When conducting PRO/QOL assessments, the research group is required to prepare the resources necessary to achieve the following objectives:
- Conduct a baseline PRO/QOL assessment before randomization or before starting treatment in all patients who are the subjects of the PRO/QOL assessment.
- Conduct the PRO/QOL assessments after the start of treatment with as few omissions as possible to investigate the PRO/QOL hypothesis, except in unavoidable cases such as patient death, deterioration of the patient's general condition, hospital transfer and patient refusal.
The following procedures are implemented for the attending physicians and PRO/QOL data collection assistants at the participating sites, based on the necessary information received from the data coordinating centre (JCOG Data Center, etc.) and the EDC systems built and operated by the data coordinating centre:
- Send a reminder about conducting the baseline PRO/QOL assessment immediately after receiving notification of patient registration in each trial.
- Send a reminder by e-mail when the scheduled time for an assessment is approaching, to ensure that the PRO/QOL assessment is conducted at an appropriate time after the start of treatment.
- Ascertain whether the PRO/QOL assessment has been conducted by the scheduled time of the survey, and send a reminder or feedback if it is suspected that the survey might have been forgotten or if there were omissions.
Because dedicated staff are needed to perform the procedures indicated above, the research group must either appoint a PRO/QOL Research Coordinator within the group for each trial or outsource the duties to the JCOG Data Center by providing the necessary expenses.

Policy revision
This policy will be revised as needed, such as when new findings that should be included herein are acquired.

Future perspectives
The situation surrounding the development of new cancer treatments has become complicated, and various stakeholders, such as pharmaceutical companies and clinical trial support organizations, as well as patients, healthcare providers and regulatory authorities, are involved in the evaluation of PRO/QOL in cancer clinical trials. Furthermore, as mentioned earlier, statistical methods such as the handling of missing data and the optimal selection of analytical methods have not been fully established. Under these circumstances, an international project, the Setting International Standards in Analyzing Patient-Reported Outcomes and Quality of Life Endpoints Data (SISAQOL-IMI) Consortium, is currently underway to establish recommendations for standardized methodology to evaluate and analyse PRO/QOL data in cancer clinical trials. The JCOG PRO/QOL research committee has joined this consortium to help standardize PRO/QOL research methodologies in the future.

In addition, the status of PRO/QOL data collection in clinical research and practice is changing. First, the number of PRO/QOL assessment tools is rapidly increasing. To record patient experiences with greater depth and precision, PRO/QOL assessment tools for specific diseases (e.g. breast cancer, lung cancer) and populations (e.g. adolescents and young adults, the elderly, cancer survivors) are being developed internationally; catching up with this trend in Japan is therefore mandatory to make the latest PRO/QOL tools available. Second, the methods of collecting and managing PRO/QOL data can be improved. PRO/QOL data have traditionally been collected from patients using a paper-and-pencil approach; in recent years, however, real-time data collection and electronic monitoring have become possible, allowing patients to enter their information directly via internet terminals or to send it using wearable devices. Third, collected PRO/QOL data will be increasingly used for decision-making in clinical practice. Clinical decision-making has mainly relied on clinical examinations such as imaging tests or blood examinations, and PRO/QOL data have been less important because of the variety of assessment tools and the complexity of data handling. However, it has been reported that the use of PRO data improves not only QOL but also clinical outcomes, and guidelines regarding PRO in clinical use have been, and will continue to be, published by academic societies. These innovative changes will bring about a paradigm shift in PRO/QOL data management in clinical research. We will adopt these advances and update this PRO/QOL research policy based on future global trends in PRO/QOL research.
Automatic purse-string suture skill assessment in transanal total mesorectal excision using deep learning-based video analysis
4fa17290-4ac7-46ed-9105-84d02a8e51fe
9991500
Suturing[mh]
Transanal total mesorectal excision (TaTME) has been favourably received in the field of rectal surgery, with the expectation of improving clinical, oncological and functional outcomes by providing better visualization and securing distal and circumferential resection margins. Since its introduction in 2010, numerous data have shown safe oncological dissection after TaTME, including margin status and specimen quality. Although good short-to-intermediate-term oncological outcomes after TaTME have been reported, concerns exist regarding a high rate of local recurrence. In particular, local recurrence with an unconventional multifocal pattern and early occurrence have been reported, raising serious concerns about the procedure in Norway.

Purse-string suture is a key procedural step during TaTME. Exfoliated tumour cells may either directly seed the resection bed or become aerosolized during TaTME dissection, but a tight closing suture can seal the rectum and prevent any leakage of gas or liquid contaminated with malignant cells. It is therefore important to acquire adequate purse-string suture skills, and standardized education and training are crucial.

Skill assessment is an important aspect of surgical education and training. It is a prerequisite for targeted feedback, which facilitates skill acquisition by telling surgeons how to improve. However, manual surgical skill assessment relies on the observations and judgements of an individual, which is inevitably associated with subjectivity and bias. Furthermore, trainees require the time and resources of an expert surgeon or a trained rater. The development of automatic surgical skill assessment tools is therefore of great interest.

Computer vision (CV) is a research field concerned with artificial intelligence (AI)-based understanding of images and videos, and it has produced advanced applications. CV has benefited from deep learning (DL), and, along with the evolution of DL technology, visual recognition accuracy in CV has improved dramatically. AI can now perform difficult visual recognition tasks that were previously possible only for humans. This is expected to bring benefits to the surgical field, especially in education and training, and has the potential to automate surgical skill assessment.

The aims of this study were to develop an automatic skill assessment system for purse-string suture in TaTME using a DL-based CV approach and to evaluate the reliability of the score output by the proposed system.

Study design and video dataset
This study was a single-institution retrospective observational study. The video dataset included intraoperative videos of consecutive TaTMEs performed at the National Cancer Center Hospital East, Chiba, Japan, between January 2018 and March 2019. The indication for TaTME, introduced at the institution in 2013, was a rectal tumour within 10 cm of the anal verge. The procedure was conducted with two teams (transabdominal and transanal) operating simultaneously. The GelPOINT Path transanal access platform (Applied Medical, Rancho Santa Margarita, CA, USA) was employed, and purse-string suturing followed by rectal washing was routinely conducted. Every intraoperative video was recorded using the Image 1 S camera system (Karl Storz SE & Co. KG, Tuttlingen, Germany). For each video, the following data were available: patient information, including age, sex, BMI and diagnosis; tumour information, including the distance from the anal verge; and the clinical T category.
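For illustration only, the per-video clinical record described above can be represented as a small data structure; the field names below are hypothetical and do not reflect the authors' actual database schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaTMEVideoRecord:
    """One intraoperative video with the clinical data collected alongside it."""
    age: int
    sex: str
    bmi: float
    diagnosis: str
    distance_from_anal_verge_cm: float
    clinical_t_category: Optional[str]  # None when not applicable

record = TaTMEVideoRecord(65, "male", 22.9, "primary rectal adenocarcinoma",
                          6.8, "T3")
```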
This study followed the reporting guidelines of the Standards for Quality Improvement Reporting Excellence (SQUIRE) and the Standards for Reporting of Diagnostic Accuracy (STARD). The study's protocol was reviewed and approved by the Ethics Committee of the National Cancer Center Hospital East, Chiba, Japan (Registration No. 2018-100). Informed consent was obtained in the form of an opt-out on the website, and those who opted out were excluded. The study conformed to the provisions of the Declaration of Helsinki of 1964 (as revised in Brazil in 2013).

Scoring system
Information related to purse-string suture skill, including the purse-string suture time, the surgeon's experience and the manual purse-string suture skill assessment score, was acquired. The surgeon's experience was classified as expert, intermediate or novice, defined as more than 30 cases, ten to 30 cases, and fewer than ten cases of TaTME performed respectively. A performance rubric was used as the manual purse-string suture skill assessment tool. The performance rubric was developed as a manual surgical skill assessment tool exclusively for purse-string suture in TaTME, with high inter-rater reliability and a strong correlation with the Global Operative Assessment of Laparoscopic Skills (GOALS). The performance rubric comprised four skill assessment items (loading the needle (LN), atraumatic needle passage (AP), planned suture path (PS) and overall appearance (OA)), and the score range for each item was 1 to 3. Details of the performance rubric are listed in the accompanying table. Manual scores were annotated by two board-certified colorectal surgeons based on the performance rubric definitions, and score discrepancies were resolved via discussion.

Pre-processing
Purse-string suture scenes extracted from each video were divided into five video fragments. Each video fragment was then split into consecutive static images and input into a convolutional neural network (CNN). In the final layer of the CNN architecture, the five groups of consecutive static images were aggregated again as video clips, and image regression analysis was performed for each video clip. During pre-processing, every image was down-sampled from a resolution of 1280 × 720 pixels to 224 × 224 pixels, and from 30 frames per second (fps) to 10 fps. The maximum length of a video clip was 1 min owing to GPU memory limitations. The analysis process is shown in the accompanying figure. Five video clips were randomly extracted from each case, giving a total of 225 video clips. The dataset was divided into training (180 video clips) and test (45 video clips) datasets and validated using the leave-one-supertrial-out (LOSO) scheme; the video clips included in the training dataset were not present in the test dataset.
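A minimal sketch of the down-sampling step described above is given below, assuming a 30 fps source video: every third frame is kept (30 fps to 10 fps), frames are resized to 224 × 224 pixels, and the resulting frame stream is split into clips of at most 1 min (600 frames at 10 fps). The file path, the frame-skip factor and the clip handling are illustrative assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def video_to_clips(path, size=(224, 224), keep_every=3, clip_len=600):
    """Down-sample a video spatially and temporally and split it into clips."""
    cap = cv2.VideoCapture(path)
    frames, clips, i = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % keep_every == 0:              # 30 fps -> 10 fps
            frames.append(cv2.resize(frame, size))
            if len(frames) == clip_len:      # close a 1-minute clip
                clips.append(np.stack(frames))
                frames = []
        i += 1
    cap.release()
    if frames:                               # keep any remaining partial clip
        clips.append(np.stack(frames))
    return clips  # each clip: (T, 224, 224, 3) uint8 array

# Usage: clips = video_to_clips("purse_string_scene.mp4")
```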
Deep-learning model
To assess surgical skill, the DL model needed to recognize actions by analysing videos rather than static images. Three-dimensional CNN (3D-CNN)-based DL models enable the analysis of information in both the spatial and temporal dimensions and have been used for various types of action recognition task. Therefore, the Inception-v1 I3D two-stream (RGB + optical flow) model, a 3D-CNN-based DL model, was applied.

Pre-training
The model was pretrained on the ImageNet dataset and then on the Kinetics dataset. ImageNet contains more than 14 million manually annotated images in more than 20 000 typical categories, such as 'balloon' or 'strawberry', each consisting of several hundred images. Kinetics, one of the largest human action video datasets available, consists of 400 action classes and contains at least 400 video clips per class.
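The sketch below illustrates the two-stream idea in PyTorch: an RGB clip and an optical-flow clip are each passed through a 3D-CNN, the stream features are fused, and a linear head regresses one continuous, unbounded score per rubric item. The tiny Conv3d "backbones" are stand-ins for the pretrained Inception-v1 I3D streams, so this shows only the fusion and regression logic, not the authors' actual network.

```python
import torch
import torch.nn as nn

def tiny_backbone(in_channels):
    # Stand-in for a pretrained I3D stream: a 3D convolution followed by
    # global spatio-temporal pooling, yielding one feature vector per clip.
    return nn.Sequential(
        nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool3d(1),
        nn.Flatten(),  # -> (batch, 16)
    )

class TwoStreamRegressor(nn.Module):
    def __init__(self, n_items=4):  # LN, AP, PS, OA
        super().__init__()
        self.rgb = tiny_backbone(3)   # RGB clip: (B, 3, T, H, W)
        self.flow = tiny_backbone(2)  # flow clip: (B, 2, T, H, W)
        self.head = nn.Linear(16, n_items)

    def forward(self, rgb, flow):
        # Late fusion: average the two streams' features, then regress a
        # continuous score per skill assessment item (no 1-3 clipping).
        fused = (self.rgb(rgb) + self.flow(flow)) / 2
        return self.head(fused)

model = TwoStreamRegressor()
rgb = torch.randn(1, 3, 16, 224, 224)   # a short RGB clip
flow = torch.randn(1, 2, 16, 224, 224)  # horizontal + vertical flow fields
print(model(rgb, flow).shape)           # torch.Size([1, 4])
```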
Study population Forty-five videos obtained from five surgeons were evaluated. The median age of the patients was 65 (range 35–85) years, 33 of 45 (73 per cent) were male, the median BMI was 22.9 (range 16.5–37.5) kg/m², and the major diagnosis was primary rectal adenocarcinoma (78 per cent). The median distance from the tumour to the anal verge was 6.8 (range 4–9) cm, and the major clinical T category in the primary rectal adenocarcinoma case was T3 (51 per cent) ( ). Purse-string suture skill information In the video dataset, purse-string suture was performed by an expert in 27 of 45 (60 per cent) cases, and the median purse-string suture time was 3.2 (range 1.1–9.0) min. The mean(s.d.) total manual score was 9.2(2.7) points, and the scores for each skill assessment item were as follows: LN, 2.3(0.81) points; AP, 2.2(0.79) points; PS, 2.5(0.69) points; and OA, 2.2(0.81) points. presents the detailed purse-string suture skill information. AI score output by image regression DL-based image regression analysis was performed for every video clip in the test dataset, and the AI score was automatically output. The mean(s.d.) total AI score was 10.2(3.9) points, and the AI scores for each skill assessment item were as follows: LN, 2.4(1.0) points; AP, 2.5(1.1) points; PS, 2.8(0.94) points; and OA, 2.5(1.2) points. The mean(s.d.)
absolute errors between the AI and manual scores for each item were as follows: LN, 0.38(0.36); AP, 0.44(0.39); PS, 0.40(0.38); and OA, 0.46(0.44). shows the absolute errors and the graphical comparison between the AI and manual scores. Reliability evaluation of the AI score summarizes the correlation between the AI and manual scores for each skill assessment item. For LN, the mean(s.d.) AI scores in the group with manual scores of 1, 2, and 3 points were 1.1(0.19), 1.9(0.23), and 3.3(0.65) points respectively. For AP, the mean(s.d.) AI scores in the group with manual scores of 1, 2, and 3 points were 1.2(0.30), 2.0(0.33), and 3.6(0.57) points respectively. For PS, the mean(s.d.) AI scores in the group with manual scores of 1, 2, and 3 points were 1.3(0.38), 2.0(0.11), and 3.5(0.50) points respectively. For OA, the mean(s.d.) AI scores in the group with manual scores of 1, 2, and 3 points were 1.1(0.27), 2.0(0.32), and 3.6(0.69) points respectively. shows the correlation between the total AI score and the purse-string suture time. The median total AI score and purse-string suture time were 9.6 (range 4.7–18.8) points and 3.2 (range 1.1–9.0) min respectively. There was a negative correlation between the AI score and the purse-string suture time with statistical significance; that is, higher AI scores were output for efficient purse-string suture procedures, and lower AI scores were output for inefficient ones (correlation coefficient −0.728; P < 0.001). also shows the correlation between the total AI score and the surgeon's experience. The median total AI score did not differ significantly between novices and surgeons with intermediate experience (6.0 versus 7.8 points; P = 0.822); however, the AI score of experts was higher than those of novices (12.6 versus 6.0 points; P = 0.000290) and surgeons with intermediate experience (12.6 versus 7.8 points; P = 0.00387).
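For orientation, the reported statistics can be reproduced in form (not in substance) with scipy rather than EZR; every number below is invented for illustration and is not the study's data.

```python
from scipy.stats import spearmanr, kruskal

# Hypothetical values only (not the study's data).
ai_total   = [12.1, 9.6, 14.2, 6.0, 7.8, 13.0]   # total AI score per video
suture_min = [2.0, 3.2, 1.5, 8.8, 6.1, 1.9]      # purse-string suture time

# Spearman's rank correlation: the paper reports rho = -0.728, P < 0.001,
# i.e. faster (more efficient) sutures receive higher AI scores.
rho, p = spearmanr(ai_total, suture_min)

# Kruskal-Wallis test across experience groups, as used for the
# novice/intermediate/expert comparison above.
novice, intermediate, expert = [5.9, 6.2], [7.5, 8.0], [12.4, 13.1]
h_stat, p_kw = kruskal(novice, intermediate, expert)
print(f"rho={rho:.3f} (p={p:.3f}); H={h_stat:.2f} (p={p_kw:.3f})")
```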
In this study, an automatic skill assessment system for purse-string suture in TaTME using a DL-based CV approach was developed. The AI score, which was automatically output from the developed system, correlated with the manual purse-string suture skill assessment score, purse-string suture time, and the surgeon's experience with statistical significance. These results indicated that the AI score was reliable, and the proposed approach could be used in practical applications. It is widely accepted that surgical skill is associated with procedural delivery and subsequent outcomes. In the few available reports, the technical skill of surgeons was closely associated with clinical outcomes such as postoperative complications in laparoscopic gastric bypass , radical prostatectomy , and laparoscopic colorectal resection , . Adequate purse-string suture skill is essential to perform TaTME because purse-string suture failure due to inadequate skill is hypothesized to directly affect local recurrence after the operation . Currently, surgical training programmes aim to evaluate objectively the basic surgical skills of surgical trainees using tools such as the Objective Structured Assessment of Technical Skills (OSATS) and GOALS ; however, the outcomes of these tools are subjective, biased, and time-consuming . Therefore, automating the surgical skill assessment process is a promising approach. It can not only address the abovementioned problems but also enable novice surgeons to train and obtain feedback in the absence of a human supervisor. DL models are a class of machine learning that can learn a hierarchy of features by building high-level features from low-level features, thereby automating the feature extraction process. CNNs are a type of DL model in which trainable filters and local neighbourhood pooling operations are applied alternately on raw input images, resulting in a hierarchy of increasingly complex features. Therefore, CNNs can achieve superior performance, especially in CV tasks , .
However, CNNs are usually applied to static images instead of videos because they cannot consider the motion information encoded in multiple contiguous frames. Video data are high-dimensional and more complex than sequences of a few motion variables. To address these challenges, approaches working on surgical video data are usually applied to track surgical instruments in a video and then analyse the obtained instrument motion data , . Recently, a 3D-CNN was proposed, which can be effectively applied to analyse videos with temporal dimensions in addition to spatial dimensions . The 3D-CNN is considered well suited to automatic surgical skill assessment tasks for the following reasons. First, although motion analysis is crucial for automatic surgical skill assessment, this approach can be directly applied to raw surgical video data. Second, because extracting kinetic data, such as surgical instrument tracking, is not necessary, a time-consuming annotation process can be omitted. Third, it has the potential to be easily applied to every surgery in every field as long as the video has a reliable score. Despite its advantages, the current study has several limitations. First, it was an early-phase retrospective experimental study, and the number of videos included in the dataset was limited. Furthermore, the correlation between surgical skill and patient outcomes, including postoperative complications and oncological recurrence, was not evaluated in this study. Therefore, prospective large-scale verification, including short- and long-term outcomes, is required. Second, the videos in this dataset were obtained from a single institution; thus, the complexity of the data was limited in terms of case variability. Training a DL model with such a dataset could lead to overfitting, which could subsequently reduce the generalizability of the network. To obtain more generalized networks, videos from other medical institutions should be included in future work to ensure higher variability in the dataset. However, in the DL field, technology is evolving daily, and novel CNN architectures are being developed continually. Further improvement in accuracy can be expected by accumulating training data and optimizing the DL model. Nevertheless, this approach has the potential to create a new stream of surgical skill assessment in a variety of surgical fields.
Recent trends of “manels”: gender representation among invited panelists at an international oncology conference
ad6b8867-16ae-4aa7-8c3f-2af5dff445d0
9991598
Internal Medicine[mh]
Using ASCO online programs, 2018-2021 sessions were reviewed. Faculty information was obtained for those who participated in a panel (defined as a session with minimum 2 speakers, including a chair or moderator); data were extracted by mixed-gender coders. Faculty and presenter information was not obtained for those who presented original research because scientific abstracts selected for presentation are based on merit, and abstract presenters can select alternates to present in their absence, whereas participation in a panel or as a panel chair or moderator is generally ASCO committee appointed. Data collected included perceived or self-reported gender that was based on the panelist's institutional website or their professional website. Where possible, these were confirmed with the National Provider Identifier (NPI) database, where gender is a required field. Of note, the NPI database asks for "gender" with options of "male/female," which is generally noted as a biological definition (not a gender identity). The NPI database only provides the options of "male" and "female"—there is no opportunity to select or input anything else. Thus, for the purposes of this analysis, gender was extracted as binary. Also collected were medical specialty, panel role (chair or moderator vs nonchair or nonmoderator), session type, and topic. For 2021 panelists, academic position (when available), number of publications, number of citations, and H-index were retrieved from Web of Science and Scopus between September and December 2021. The Mass General Brigham Institutional Review Board deemed this study exempt from formal review because of use of public information. Primary outcomes included percentage of manels (defined as panels comprised of all men) and proportion of women panelists. Representation of women among chair or moderator role, specialties, session type, and topic were evaluated. The gender distribution of individual panelists participating in more than 1 role was evaluated. Manel sessions were evaluated by session type and topic. Statistical analysis The Cochran-Armitage test was used to analyze trends in the proportion of manels and representation of women over time. Fisher's exact test was used to compare the gender distribution between each session type, topic, or specialty with other categories combined and across academic rank. For 2021, analysis was performed by unique panelist, and Wilcoxon rank-sum test was used to compare the number of publications, number of citations, and H-index between genders. P values are based on a 2-sided hypothesis. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc, Cary, NC, USA).
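For illustration, the two main tests named above can be re-implemented compactly in Python; the actual analysis used SAS, and all counts below are invented for demonstration, not taken from the study.

```python
import numpy as np
from scipy.stats import norm, fisher_exact

# Cochran-Armitage trend test (asymptotic form): is the proportion of women
# panelists increasing across ordered years? Counts are hypothetical.
scores = np.array([0, 1, 2, 3])            # year scores for 2018..2021
women  = np.array([280, 300, 330, 270])    # women panelists per year
totals = np.array([670, 640, 620, 500])    # all panelists per year

p_bar = women.sum() / totals.sum()
t = np.sum(scores * (women - totals * p_bar))
var_t = p_bar * (1 - p_bar) * (
    np.sum(totals * scores**2) - np.sum(totals * scores)**2 / totals.sum())
z = t / np.sqrt(var_t)
p_trend = 2 * norm.sf(abs(z))              # two-sided P, as in the paper

# Fisher's exact test comparing one category against the rest combined,
# e.g. women vs men in one specialty (hypothetical 2x2 table).
table = [[14, 30],        # specialty of interest: women, men
         [1167, 1264]]    # all other categories combined: women, men
odds_ratio, p_fisher = fisher_exact(table)
print(f"trend z={z:.2f} (p={p_trend:.3g}); Fisher OR={odds_ratio:.2f} (p={p_fisher:.3g})")
```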
During the ASCO Annual Meetings from 2018 to 2021, there were 670 panels total, 81 of which (12.1%) were manels. Among 2475 panelists, 1181 (47.7%) were women. Over time, there was a statistically significant decrease in the number of manels, from 17.4% (33 of 190 panels) in 2018 to 9.9% (15 of 151 panels) in 2021 ( P = .030) and a corresponding increase in proportion of women panelists from 41.6% to 54.0% ( P < .001) . In addition, men held the majority of chair or moderator roles (53.2%) in 2018, but since 2019, women represent more than 50% of total chairs or moderators (52.3% in 2019, 50.5% in 2020, and 54.8% in 2021; P trend = .157). Women panelist representation In terms of specialty representation, the representation of women panelists in medical oncology and radiation oncology statistically significantly increased over time, from 42.3% (224 of 530) in 2018 to 52.8% (167 of 316) in 2021 for medical oncology ( P = .003) and 31.8% (14 of 44) in 2018 to 61.5% (24 of 39) in 2021 for radiation oncology ( P = .008; ). The lowest proportions of women panelists were in pathology, radiology, and dermatology specialties (26.2%, P = .001; , available online). Representation of women on leadership or special session panels improved between 2018 and 2021, from 28.6% (8 of 28) in 2018 to 67.9% in 2021 (19 of 28, P = .001). Representation of women also improved for educational panels, from 38.4% (176 of 458) in 2018 to 53.6% (157 of 293) in 2021 ( P < .001). There was no statistically significant change in representation on scientific panels over time (from 48.8% in 2018 to 52.7% in 2021; P = .463; ). Women panelists were underrepresented for the topics of genitourinary cancers (38.6%, P = .029) and translational or preclinical sciences (36.7%, P < .001). However, there was a positive trend toward improved women representation among translational or preclinical sciences (27.4% in 2018 to 41.8% in 2021, P = .031). In contrast, there was no further improvement among genitourinary cancers (41.1% in 2018 to 40.7% in 2021; P = .969; ). In evaluating individuals with more than 1 role (ie, someone who serves as a panelist in more than 1 panel), we found a substantial decrease overall over time from 7.6% (51 of 670) in 2018 to 3.2% (16 of 501) in 2021 ( P < .001; ). There were 8.2% (23 of 282) of women who served in more than 1 role in 2018, which reduced to 3.8% (10 of 266) in 2021 ( P = .011), and there were 7.2% (28 of 388) of men who served in more than 1 role in 2018, which reduced dramatically to 2.6% (6 of 255) in 2021 ( P = .005). Manel sessions by type and topic Among session type and topic, the highest proportion of manels was observed for leadership or special sessions (17.1% vs 12.1% manels overall, P = .419) and translational or preclinical topics (19.6% vs 12.1% manels overall, P = .024). The lowest proportion of manels was observed for scientific sessions (4.2%, P < .001) and supportive oncology (2.8%, P = .110; , available online). The proportion of manels decreased over time among educational sessions from 22.2% in 2018 to 12.9% in 2021 ( P = .037) and leadership or special sessions from 25.0% in 2018 to 0% in 2021 ( P = .057). In contrast, there were some all-women panels: 12 in 2018 (6.3%), 10 in 2019 (5.8%), 20 in 2020 (12.7%), and 19 in 2021 (12.6%). Comparison of men and women panelists in 2021 In 2021, although there were more women panelists (54%), men held a higher academic rank (43.4% vs 36.5% full professor, P = .028) and had a greater number of publications (median 116 vs 83.5, P < .001), citations (median 5321 vs 2657.5, P < .001), and higher H-index (median 33 vs 25, P < .001) than women .
In the oncologic specialties, failure to achieve gender parity among academic faculty at the highest levels of rank or leadership has been demonstrated despite the increasing number of women entering oncology, who remain concentrated at lower ranks. Scientific society annual meetings serve as important platforms for clinicians and scientists to build a national reputation, gain visibility, and network with collaborators for professional growth and research endeavors. Serving as an invited panelist is critical for promotion and tenure within academia.
Overall, at the ASCO Annual Meetings between 2018 and 2021, we found that the proportion of women panelists increased over the study period from 41.6% to 54.0%, with a corresponding decrease in the proportion of manels from 17.4% to 9.9%, demonstrating an improvement in representation of women on panels. There was a striking improvement in women representation in leadership and special sessions as well as educational sessions, with associated decreases in the proportion of manels over time. However, we did find topic areas where there was continued underrepresentation of women (eg, genitourinary cancers and translational or preclinical topics). In addition, the percent of manels among all sessions has remained at approximately 10% since 2019, with no further improvement, even with virtual meetings in 2020 and 2021, due to the COVID-19, or SARS-CoV-2, pandemic. Although in 2018 it was common to have individuals from either gender serve in multiple panelist roles, thankfully in 2021 this trend was dramatically reduced among both men and women. This reduction creates additional speaking opportunities for others. We are encouraged by the improvement in diverse representation of voices among invited panelists at the ASCO Annual Meeting, yet it is imperative that ASCO committees, composed primarily of volunteer clinicians and scientists from across the world, as well as ASCO leadership be aware of the areas with continued limited representation. Achieving gender equity at the national level has been recognized as an important goal for many different fields. The Lancet announced a No All-Male Panel Policy for Lancet Group editors as part of their commitment to increasing gender equity, diversity, and inclusion in scientific research and publishing . There has been a large social media movement against manels with the Twitter handle @ManelWatchUS as well as others for manels globally. Other fields have started the process of critical introspection to understand their current progress toward achieving gender parity among speakers at large annual conferences. The importance of identifying the prevalence of manels within a field or society cannot be overstated because it is often unintentional and a function of "the first name that comes to mind," but deliberate efforts can effect substantial change. The American Society for Microbiology Annual Meeting speakers were analyzed between 2011 and 2013, and it was found that an average of 29.6% of speakers were women for the 3 years combined . These findings were presented to the leadership and program planning committee for the 2014 meeting, with subsequent analysis demonstrating increased representation of women to 43% at that meeting. These findings were again presented to the 2015 planning committee, with specific instruction to "do better" with respect to gender balance and to avoid sessions with all men, except under "extraordinary circumstances." The society achieved close to gender parity in 2015 with 48.5% women speakers, along with a dramatic reduction in manels . These results demonstrate that it is possible to achieve gender equity and diversity among speakers in a major scientific meeting in a short time frame when specific awareness is brought to the leadership and program planning committee. We recognize that gender equity is not always possible when there are baseline imbalances in the population of experts from which to choose.
The finding of disparity of women’s representation in genitourinary cancer topics is in line with the known gender disparity in the field of urology, which appears to extend to the field of urologic oncology . A prior analysis of major urology meetings found that, between December 2019 and November 2020, 63.5% of sessions were manels , which is notably higher compared with the ASCO Annual Meeting. Despite the potential for positive change based on presentation of data on manels and gender disparity among invited speakers as demonstrated by the American Society for Microbiology’s experience, it is important to keep in mind that barriers remain ; so much progress is still needed. Although the presence of manels was overall low (and improved) throughout our study period, it is critically important that ASCO leadership and committee membership continue to maintain their progress, specifically in areas where there are obvious gaps. This is particularly important for ASCO Annual Meeting program planning committees, and ASCO as a whole, given the recent data demonstrating that at the 2017 and 2018 ASCO Annual Meetings, men were less likely to introduce women speakers using a professional address (compared with men, 62% vs 81%, P < .001) and were often introduced by first name only (17% vs 3%, P < .001) , demonstrating unconscious bias and reinforcing gender disparities in oncology. Therefore, further improvement of gender parity among invited speakers as well as training and guidelines regarding speaker introductions may help to reduce this bias. It is also important that panels reflect the demographics of Annual Meeting attendees. In 2017, ASCO queried attendees to collect data on their gender breakdown. Of those who volunteered to categorize their gender as female or male, among full members, 28.5% identified as female vs 28.7% identified as male; among early-career ASCO members, 28.8% identified as female and 33.2% identified as male; and among members-in-training, 23.1% identified as female and 20.7% identified as male . Overall, among all 3 groups, 26.6% were female and 27.5% were male attendees. The difference between the 2 genders was less than 1%, reinforcing the need to ensure speaker invitations and panels are reflective of annual meeting attendees. This is also evident when examining all-woman panels, because there should be a balance of gender diversity to gain a range of perspectives and viewpoints. In terms of medical oncology academic faculty, between 2017 and 2019, approximately 36%-38% identified as women ; of oncology (internal medicine) trainees, between 2018 and 2021, those who identified as women ranged between 20.5% and 33.3%; and of hematology and oncology trainees, those who identified as women ranged between 24.4% and 28.4% . To our knowledge, this is the first evaluation of invited speaker and panel diversity by gender at the one of the largest international oncology meetings over 4 years. There are limitations inherent to the retrospective nature of this study. Because only 1 oncology meeting was studied, there is a chance that these results are not generalizable to the field of oncology as a whole. However, given that this is one of the most widely and globally attended conferences, we feel that these findings are sufficiently representative. In addition, we were limited to only 4 years of data based on data availability at the time of analysis, which may not be sufficient to capture the full range of changes (either positive or negative) over time. 
However, this analysis represents an initial step and can serve as a benchmark with which to compare future progress. We were not able to capture the age, number of professional years, or educational background of invited speakers and panelists, which could affect our results. Furthermore, we were unable to capture the initial speaker(s) invited by the ASCO committee; it is possible that more women were invited but declined. However, we note that backup invitees would usually be of the same gender (ie, if an invited woman declined, in general, it is likely that the committee would ask another woman to speak when possible). We also acknowledge that we were unable to deduce race and ethnicity of invited panelists, which is important to evaluate to ensure a diverse representation of views. Finally, we acknowledge that gender is nonbinary, and gender was not self-identified but extracted from various sources; it is possible that these sources were incorrect and speakers or panelists were misclassified. Future analyses using speaker-identified nonbinary gender and race and ethnicity can allow for further insight into the diversity of ASCO-appointed speakers and panels at the ASCO Annual Meeting. Over the evaluated study period, the number of invited women panelists increased, with a corresponding decrease in the proportion of manels. We applaud ASCO for striving for gender parity among invited panelists, although there are certain topics or specialties where representation of women has remained stagnant. In addition, the proportion of manels has not improved further from 10% since 2019. Although ASCO planning committees are encouraged to be mindful of the diversity of invited speakers to strive for a balance of viewpoints whenever possible, accountability is necessary to ensure that final panels are reflective of the oncology community. It is imperative that ASCO Leadership and Annual Meeting organizers remain aware of current trends and continue to ensure greater representation of voices amongst invited panelists.
Prevention of device-related infections in patients with cancer: Current practice and future horizons
40086eef-1a7f-479c-b9d9-e35e689f795a
9992006
Internal Medicine[mh]
Cancer is one of the leading causes of death in the industrialized world. The International Agency for Research on Cancer estimated that about 20 million new cases of cancer would be diagnosed and 10 million cancer deaths would occur worldwide in 2020. The past few decades have seen several advances in the management of patients with cancer. Increased research funding through public and private entities and the involvement of numerous scientists and academic institutions in combination with pharmaceutical companies have led to a growing pipeline of life-saving products. New chemotherapeutic regimens, targeted therapies, and checkpoint inhibitors , ; radiation and proton therapy ; and refinement in stem cell transplantation and immune effector cell therapies have all recently entered the oncological armamentarium. In addition, advancements in and contributions of diverse specialized consulting services and numerous supportive care measures as well as increased specialty societal guidelines and standardized institutional procedures have all converged to improve the overall survival rate for patients with cancer. Along with the advances described above, numerous devices have been introduced for the administration of intravenous and intrathecal medications and management of diverse comorbidities and complications related to cancer therapy. These include diverse central venous access devices (CVADs), cardiac-implantable electronic devices (CIEDs), Ommaya reservoirs, external ventricular drains (EVDs), breast implants plus tissue expanders (TEs), percutaneous nephrostomy tubes (PCNTs), and ureteral stents as well as esophageal stents, pleural drains, percutaneous endoscopic gastrostomy (PEG) tubes, percutaneous cholecystostomy tubes, biliary stents, and peritoneal drains ( ). Unfortunately, infections associated with these devices are common, increasing health care costs and complicating patients’ oncological management over both the short and long term, usually leading to delays in further cancer therapy until the infection has resolved. Treating these infections with systemic antimicrobials and removal or replacement of the device is often necessary because of the formation of a three-dimensional biofilm on implants that contains a complex community of sessile bacteria plus host and microbial products. However, removal of an implant may be difficult and at times even prohibitive because of the patient’s underlying comorbidities, thrombocytopenia, immunosuppression, lack of vascular access, and prior surgical interventions. Also, governmental regulations have increased and financial reimbursements have decreased for the treatment of these infections, which can be reasonably prevented through the application of evidence-based guidelines. In 2011, the Centers for Medicare and Medicaid Services began requiring acute care hospitals to report specific types of health care-associated infection data to it through the Centers for Disease Control and Prevention’s National Healthcare Safety Network so that the hospitals can receive their full annual reimbursements. Soon thereafter, the Inpatient Prospective Payment System and Fiscal Year 2013 Rates—Final Rule published by the Centers for Medicare and Medicaid Services listed specific conditions that will not be financially reimbursed, including catheter-associated urinary tract infections, vascular catheter-associated infections, and surgical site infections (SSIs) after certain orthopedic or CIED procedures. 
Over the following years, more infections deemed to be preventable likely will be added to this list. Therefore, because of the burden on patients, families, physicians, health care institutions, and governments, further reducing the rate of these infections is imperative. Herein, we review the main indications for placement of the above-mentioned devices, their infection rates, as well as the epidemiology and risk factors for infection. We also provide several general and device-specific, evidence-based recommendations for the provider who cares for patients with cancer along with best practices, expert opinions, and novel measures for the prevention and reduction of device-related infections. Several basic principles for the prevention of infections have been implemented over the past few decades. These primary preventive measures, which involve patients, health care workers, and the environment, are commonly used with all surgical interventions, including those that involve the placement of foreign medical devices. Below, we describe these simple and innovative interventions, which should be implemented at all institutions, as they have been demonstrated to significantly reduce the historically high rate of preventable infections ( ). Hand Hygiene In 1847, Dr Ignaz Semmelweis was the first to describe the basic hygienic practice of hand washing as a way to stop the spread of infection and infection-related death in Vienna, Austria. Since then, hand hygiene has become the cornerstone of all surgical and infection-control policies. Although both alcohol-based products and soap and water are effective, in the inpatient and outpatient setting, the former has been demonstrated to be superior to the latter, with a compliance rate that is approximately 25% higher. , In 2006, the World Health Organization added the My Five Moments for Hand Hygiene campaign, which emphasizes key moments for performing hand hygiene based on known mechanisms of microbial cross-transmission among patients, health care workers, and the environment. These five moments are: (1) before touching a patient, (2) before cleaning/aseptic procedures, (3) after body fluid exposure/risk of such exposure, (4) after touching a patient, and (5) after touching a patient’s surroundings. This has proven to be the simplest and most cost-effective intervention for infection prevention. Furthermore, several clinical trials have demonstrated that an increase in hand hygiene acceptance significantly decreases the rates of health care-associated infections. , However, rates of hand hygiene compliance for high-income countries rarely exceed 70%, and the rates are much lower in low-income countries. The long-term challenge in health care settings is to achieve and sustain high hand hygiene compliance among the personnel in all disciplines who interact with patients who have cancer and their environments to decrease the rate of SSIs and device-related infections, especially in the early postoperative period and at subsequent follow-up visits. Perioperative Antisepsis Protocols Many interventions for the prevention of perioperative infections with differing degrees of evidence have been implemented. 
Recommended interventions with the highest degree of evidence are: (1) administration of antimicrobial prophylaxis according to evidence-based standards and guidelines (see Perioperative Antibacterial Prophylaxis below), (2) use of alcohol-containing skin-preparatory agents rather than povidone iodine if no contraindication exists, (3) maintenance of normothermia during the perioperative period, (4) optimization of tissue oxygenation by administering supplemental oxygen during and immediately after surgical procedures involving mechanical ventilation, and (5) use of a checklist based on the World Health Organization recommendations to ensure compliance with best practices and thus improve surgical patient safety. Recommended interventions with a moderate degree of evidence are: (1) avoidance of hair removal or use of razors at the operative site unless the presence of hair will interfere with the operation, , (2) control of blood glucose levels during the immediate postoperative period, (3) sterilization of all surgical equipment according to published guidelines, (4) surveillance for SSIs through the use of automated data with ongoing feedback to health care providers and leadership, and (5) implementation of policies and practices aimed at reducing the risk of SSIs that align with evidence-based standards (such as those from the Centers for Disease Control and Prevention, the Society for Healthcare Epidemiology of America, and the Infectious Diseases Society of America). , Recommended interventions with the lowest degree of evidence but that have also been found to be successful are: (1) educating both surgeons and perioperative personnel plus patients and their families about SSI prevention ; (2) observing and reviewing personnel, practices, and the environment of care in the operating room, postanesthesia care unit, surgical intensive care unit, and surgical wards ; (3) measuring and providing feedback to providers regarding rates of compliance with process measures , ; and (4) using an Environmental Protection Agency-approved hospital disinfectant to clean contaminated surfaces, following the American Institute of Architects' recommendations for proper air handling, and minimizing operating room traffic. , Perioperative Antibacterial Prophylaxis Patients who undergo surgery should receive systemic perioperative antimicrobials according to evidence-based standards and guidelines. , As recommended by the Surgical Care Improvement Project, the use of prophylactic antimicrobials should be based on the surgical procedure and most common pathogens encountered at the surgical site and responsible for causing postoperative SSIs. Every effort should be made to confirm a patient's reported penicillin allergy as part of routine perioperative care, specifically because the odds of developing an SSI increase by 50% when a patient receives a second-line perioperative antibiotic. Also, if a patient is known to be colonized with methicillin-resistant Staphylococcus aureus (MRSA), administering a single dose of vancomycin is reasonable. However, vancomycin is less effective than cefazolin at preventing infections caused by methicillin-susceptible S. aureus or streptococci. For this reason, vancomycin is used in combination with cefazolin at some institutions when the risk of infections with these organisms is high.
Furthermore, patients who have cancer usually are already receiving prophylactic antimicrobials because of their underlying immunosuppression and are known to be colonized with or have had prior infections with MRSA, vancomycin-resistant enterococci, and multidrug-resistant gram-negative rods; the decision to use perioperative antimicrobials in such cases should be individualized for each patient. , – Of note, although patients with one of the implanted devices described above have a theoretical risk of becoming secondarily infected during an invasive clean or clean–contaminated procedure, especially if the device was recently placed, evidence that antimicrobial prophylaxis prevents infections of these nonvalvular intravascular medical devices is lacking. , , Prophylactic antimicrobials should be infused within 60 minutes of the incision, whereas vancomycin, aminoglycosides, and quinolones should be infused within 120 minutes. The dosing of these prophylactic antimicrobials should be adjusted on the basis of the patient's weight and re-dosed at intervals of every two half-lives or when excessive blood loss occurs during the procedure. For surgeries defined as clean or clean–contaminated, the use of all perioperative antimicrobials should be discontinued within 24 hours after the procedure. , In addition to the evidence-based recommendations described above, patients undergoing interventions that involve the placement of an implantable device usually have the device submersed in or the surgical pocket irrigated with an antimicrobial and/or antiseptic solution with the intention to decrease the probability of contaminating the newly placed foreign medical device. Also, surgeons commonly provide postoperative oral antimicrobials beyond 24 hours of surgery to patients who have surgical drains placed near an implantable device, with the hope of further decreasing the rate of infection. Prolonging postoperative antimicrobials in this scenario is performed because these drains are known to allow microbial translocation from the skin to the deeper surgical site where the implant is located. These interventions, with a low degree of evidence, have produced mixed results. Furthermore, extending perioperative use of antimicrobials beyond 24 hours can lead to several unintended side effects, including hypersensitivity reactions, renal failure, antimicrobial resistance, and Clostridium difficile-associated diarrhea.
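As a worked illustration of the timing rules above, the sketch below encodes them as data; the agent names and half-life values are rough placeholders, and nothing here is clinical dosing guidance.

```python
from dataclasses import dataclass

# Illustrative encoding of the timing rules quoted above: infusion window
# before incision and intraoperative re-dosing at roughly two half-lives.
# Half-life values are placeholders, not clinical guidance.
@dataclass
class ProphylacticAgent:
    name: str
    half_life_hours: float
    infusion_window_min: int   # 60 for most agents; 120 for vancomycin,
                               # aminoglycosides, and quinolones

    def redose_interval_hours(self) -> float:
        """Re-dose at intervals of every two half-lives."""
        return 2 * self.half_life_hours

cefazolin = ProphylacticAgent("cefazolin", half_life_hours=2.0,
                              infusion_window_min=60)
vancomycin = ProphylacticAgent("vancomycin", half_life_hours=6.0,
                               infusion_window_min=120)
for agent in (cefazolin, vancomycin):
    print(f"{agent.name}: infuse within {agent.infusion_window_min} min of "
          f"incision; re-dose about every {agent.redose_interval_hours():.0f} h")
```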
MRSA Screening and Decolonization The likelihood of MRSA colonization increases with: (1) prior history of MRSA infection, (2) hospitalization and exposure to health care facilities within the preceding year, (3) receipt of antibiotics within 3 months before admission, or (4) the presence of select comorbid conditions, such as immunosuppression, diabetes, chronic obstructive pulmonary disease, congestive heart failure, and use of hemodialysis, all of which are commonly encountered in the cancer population. , Surgical patients identified as colonized with MRSA by a positive nasal polymerase chain reaction screen have been found to have 2-fold to 14-fold greater odds of a subsequent MRSA SSI than patients with negative nasal MRSA polymerase chain reaction screens. , Several studies have shown that a bundled approach that includes decolonization protocols plus intravenous vancomycin prophylaxis can decrease the rate of postoperative gram-positive infections, especially in the orthopedic and cardiac surgical population, whereas other studies have not shown this benefit. Decolonization protocols that include topical mupirocin and chlorhexidine gluconate (CHG) versus a placebo have been effective in reducing the rate of postoperative infections, with a relative risk of infection of 0.42 (95% CI, 0.23–0.75). Development of resistance to mupirocin is unlikely in the perioperative setting, especially when it is not used as an ointment for prolonged periods. CHG resistance is also uncommon, mainly because topical concentrations of CHG used for decolonization are 200-fold higher than the highest recorded minimum inhibitory and bactericidal concentrations of it used for staphylococci. , Therefore, most researchers concluded that the use of preoperative intranasal mupirocin and/or topical CHG in MRSA-colonized patients is safe and potentially beneficial as an adjuvant to intravenous antimicrobial prophylaxis to decrease the occurrence of SSIs. Screening and targeted decolonization should specifically be considered for all patients at high risk for negative outcomes, including the immunocompromised cancer population with device implantation. Infection Control and Prevention Programs To further decrease the risk of cross-contamination, nosocomial transmission, and SSIs, mainly caused by MRSA, in the acute health care setting, a robust infection control department should be established, ensuring the following : (1) implementation of a MRSA monitoring program along with a laboratory-based alert system that notifies health care workers of new MRSA-colonized or MRSA-infected patients in a timely manner , ; (2) use of contact precautions for MRSA-colonized and MRSA-infected patients , ; (3) cleaning and disinfection of equipment and the environment ; (4) provision of MRSA data and outcome measures to senior leadership, physicians, and nursing staff; and (5) education of health care workers as well as patients and their families about MRSA.
The likelihood of MRSA colonization increases with: (1) a prior history of MRSA infection, (2) hospitalization and exposure to health care facilities within the preceding year, (3) receipt of antibiotics within 3 months before admission, or (4) the presence of select comorbid conditions, such as immunosuppression, diabetes, chronic obstructive pulmonary disease, congestive heart failure, and use of hemodialysis, all of which are commonly encountered in the cancer population. Surgical patients identified as colonized with MRSA by a positive nasal polymerase chain reaction screen have been found to have 2-fold to 14-fold greater odds of a subsequent MRSA SSI than patients with negative nasal MRSA polymerase chain reaction screens. Several studies have shown that a bundled approach that includes decolonization protocols plus intravenous vancomycin prophylaxis can decrease the rate of postoperative gram-positive infections, especially in the orthopedic and cardiac surgical population, whereas other studies have not shown this benefit. Decolonization protocols that include topical mupirocin and chlorhexidine gluconate (CHG) versus a placebo have been effective in reducing the rate of postoperative infections, with a relative risk of infection of 0.42 (95% CI, 0.23–0.75). Development of resistance to mupirocin is unlikely in the perioperative setting, especially when it is not used as an ointment for prolonged periods. CHG resistance is also uncommon, mainly because the topical concentrations of CHG used for decolonization are 200-fold higher than the highest minimum inhibitory and bactericidal concentrations recorded for staphylococci. Therefore, most researchers have concluded that the use of preoperative intranasal mupirocin and/or topical CHG in MRSA-colonized patients is safe and potentially beneficial as an adjuvant to intravenous antimicrobial prophylaxis to decrease the occurrence of SSIs. Screening and targeted decolonization should be considered specifically for all patients at high risk for negative outcomes, including the immunocompromised cancer population with device implantation. To further decrease the risk of cross-contamination, nosocomial transmission, and SSIs, mainly caused by MRSA, in the acute health care setting, a robust infection control department should be established, ensuring the following: (1) implementation of a MRSA monitoring program along with a laboratory-based alert system that notifies health care workers of new MRSA-colonized or MRSA-infected patients in a timely manner; (2) use of contact precautions for MRSA-colonized and MRSA-infected patients; (3) cleaning and disinfection of equipment and the environment; (4) provision of MRSA data and outcome measures to senior leadership, physicians, and nursing staff; and (5) education of health care workers as well as patients and their families about MRSA. Many patients will likely have implantation of one or more devices at any given time during their cancer journey. These devices are placed either during active oncological therapy or after the patient has been cured, to mitigate the unintended side effects of cancer therapy or of the cancer itself. These devices may become infected, increasing patient morbidity and mortality and further increasing the complexity of oncological care. Therefore, key stakeholders and health care providers should be knowledgeable and serve as advocates for patients in providing specific interventions for the prevention of device-related infections like those described below.
Central Venous Access Devices
These devices include nontunneled and tunneled centrally inserted central catheters, peripherally inserted central catheters, and totally implantable venous access devices. These central venous devices, which are used in at least 4 million patients in the United States and are left in place for several months, are essential lifelines for patients living with cancer. However, CVADs are associated with a wide array of infectious complications, including localized exit-site infections, tunnel-related or pocket-related infections, and life-threatening catheter-related bloodstream infections (CRBSIs).
The infection rates of the latter vary significantly among different clinical settings, but it has been estimated that, in the oncological population, the rate is approximately 2.5 per 1000 catheter-days. Femorally inserted central catheters have the highest risk of infection, followed by centrally inserted central catheters, peripherally inserted central catheters, and totally implantable venous access devices. In addition, patients receiving chemotherapy or total parenteral nutrition, or who are neutropenic for a prolonged period of time, are at increased risk for infection. The pathogens most frequently responsible for CRBSIs are gram-positive bacteria, in particular coagulase-negative staphylococci, S. aureus, and Enterococcus species, whereas gram-negative microorganisms account for approximately 20%. The average cost per episode of CRBSI is $45,814 (95% CI, $30,919–$65,245), making CRBSI one of the costliest health care-associated infections.
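The per-1,000-catheter-day figure quoted above follows the standard device-day incidence calculation; here is a minimal sketch with hypothetical counts chosen only to reproduce that estimate.

```python
def rate_per_1000_device_days(infections: int, device_days: int) -> float:
    """Device-associated infection rate expressed per 1,000 device-days."""
    return 1000 * infections / device_days

# Hypothetical example: 12 CRBSIs over 4,800 catheter-days reproduces the
# oncological estimate of roughly 2.5 per 1,000 catheter-days quoted above.
print(rate_per_1000_device_days(12, 4800))  # 2.5
```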
CVADs have four main routes of contamination that are the targets of infection-preventive measures: (1) migration of skin organisms at the insertion site, resulting in bacterial adhesion to the external or intraluminal surface of the device; (2) direct contamination by contact with hands or contaminated fluids or devices; (3) less commonly, hematogenous seeding of the catheter from another focus of infection; and (4) rarely, infusate contamination. Therefore, a well-established, evidence-based bundle of recommendations has been designed to mitigate the risk for infection. This bundle comprises specific steps during both the insertion and the maintenance of central lines: (1) educating and designating only trained health care personnel; (2) hand hygiene and the use of sterile gloves before catheter insertion; (3) the use of alcohol-containing CHG for skin antisepsis before insertion and during dressing changes; (4) maximal sterile barrier precautions, including the use of a cap, mask, gown, and sterile full-body drape; (5) avoiding the use of systemic antimicrobial prophylaxis; (6) preferring an infraclavicular rather than a supraclavicular or groin exit site; (7) selecting a CVAD with the minimum number of lumens, to be used for the fewest days necessary for management of the patient; (8) implementation of ultrasound guidance to reduce the number of catheter placement attempts; (9) choosing a suture-less securement device with needle-less connectors; (10) placing a sterile, transparent dressing over the insertion site and replacing it no more than once a week (unless the dressing is soiled or loose); (11) avoiding submerging the catheter in water or using topical antimicrobial ointments at insertion sites, and not replacing the CVAD itself to prevent CRBSI, but replacing the administration set and needle-less connectors at least every 7 days, assuming the patient has not received blood, blood products, or fat emulsions, in which case they must be replaced within 24 hours after the infusion; and (12) most importantly, engaging in collaborative performance-improvement initiatives. These interventions require a designated physician and nursing team leader, along with a checklist to assess compliance with the elements of the bundle and the empowerment to stop the procedure if protocols are not followed. When compliance with all components is high, the bundle approach has been reported to produce a statistically significant 66% decrease in the rate of CRBSI (p < .002). The American Society of Clinical Oncology has highlighted the importance of CRBSIs and emphasized the need for more research targeting patients with cancer, mainly because the majority of studies have focused on patients who have indwelling CVADs for a short term, such as in intensive care units. However, based on the available literature, several additional CRBSI-preventive measures can be instituted. Simple and inexpensive interventions (<$10 per unit) for settings in which the CRBSI rate remains elevated despite maximum compliance with the aforementioned measures are the use of 70% isopropyl alcohol caps for needle-less connectors and the placement of a chlorhexidine-impregnated dressing around the catheter insertion site, exchanged every 7 days. These two interventions have been effective in reducing the incidence of intraluminal and extraluminal infections, respectively. Furthermore, the introduction of US Food and Drug Administration (FDA)-approved antimicrobial-impregnated catheters (AICs) has added an extra layer of CRBSI prevention. The use of these AICs is associated with a markedly lower rate of catheter colonization and CRBSI compared with non-AICs. Cost-effectiveness assessments of these relatively inexpensive devices have justified their integration into clinical practice. Of the most commonly used AICs, minocycline/rifampin-impregnated catheters have been associated with lower rates of CRBSI than chlorhexidine/silver sulfadiazine-impregnated catheters (0.3% vs. 3.4%; p < .002) without an increased incidence of antibacterial resistance among Staphylococcus species. Moreover, AICs ensure protection for a limited time, ranging from 28 to 50 days for a minocycline/rifampin-impregnated catheter, in contrast with an average of 7 days for a chlorhexidine/silver sulfadiazine-impregnated catheter. Therefore, the use of antimicrobial lock solutions has been proposed as a method of preventing intraluminal CRBSI in CVADs that are projected to remain in place for an extended duration, especially in patients with a history of multiple CRBSIs. A meta-analysis of randomized controlled trials comparing antimicrobial lock solutions with heparin revealed a 69% reduction in the incidence of CRBSIs. These antimicrobial lock solutions can be created with numerous drugs and drug combinations. The simplest lock solutions are those formulated with ethanol, which another meta-analysis of randomized controlled trials showed to significantly decrease CRBSI compared with heparin alone (odds ratio, 0.53; p = .004). However, ethanol concentrations and antimicrobial lock solution dwell times are not standardized. Also, ethanol concentrations >28% should be avoided because they lead to plasma protein precipitation and structural changes in CVADs, mainly polyurethane catheters. Other antimicrobial lock solutions, such as the chelators citrate and EDTA, have gained attention because they have excellent anticoagulant activity, prevent biofilm formation, have antimicrobial characteristics, and inhibit bacterial proliferation, whereas heparin may anecdotally enhance biofilm growth. The use of a combined antimicrobial–chelator lock solution, such as minocycline–EDTA or taurolidine–citrate, has led to remarkable progress in preventing CRBSIs in patients who have cancer.
Another promising antimicrobial lock solution is nitroglycerin–citrate–ethanol, a nonantibiotic chelator combination. This lock solution is safe and combines the unique features of active anticoagulation, no risk of triggering bacterial resistance, and the ability to disrupt biofilm. These findings were validated in a clinical study that evaluated patients with hematological malignancies and showed a considerable reduction in the incidence of CRBSIs. Although these lock solutions are well studied, no FDA-approved lock formulations are currently available commercially, so they are prepared locally in hospital pharmacies. The components of the antimicrobial lock solutions are usually generic, economical, and effective in preventing thrombosis and CRBSIs. However, their beneficial use in preventing infections must be balanced against potential breaches in catheter integrity, bacterial resistance, systemic toxicity, frequent antimicrobial lock solution exchanges (depending on the stability of each component of the solution), and the inability to use the CVAD while the lock solution is dwelling.
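One way to read the protection windows and lock-solution rationale above is as a dwell-time triage. The sketch below is our illustrative simplification, not a validated algorithm; the thresholds (roughly 7 days of protection for chlorhexidine/silver sulfadiazine AICs, 28–50 days for minocycline/rifampin AICs, and lock solutions for longer dwells or repeated CRBSIs) are the figures quoted above.

```python
def intraluminal_prevention_strategy(expected_dwell_days: int,
                                     prior_crbsis: int) -> str:
    """Toy triage of CRBSI-prevention options using the windows quoted above."""
    if prior_crbsis >= 2 or expected_dwell_days > 50:
        return "long-dwell CVAD: consider an antimicrobial lock solution"
    if expected_dwell_days > 7:
        return "minocycline/rifampin-impregnated catheter"
    return "chlorhexidine/silver sulfadiazine-impregnated catheter"

print(intraluminal_prevention_strategy(expected_dwell_days=90, prior_crbsis=0))
```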
Cardiac-Implantable Electronic Devices
The indications for permanent pacemakers, implantable cardiac defibrillators, and cardiac resynchronization therapy devices, collectively known as CIEDs, are extensive. The cardiotoxicity of some cancer therapies and the rising average age of the oncological population have increased the need for these devices. In the United States, more than 100,000 implantable cardiac defibrillators and 300,000 permanent pacemakers are inserted every year. Unfortunately, the rate of CIED infections has been reported to be approximately 4%, and these rates have increased disproportionately compared with the increase in CIED implantation. The most common microorganisms causing CIED infections are expected skin flora, such as coagulase-negative staphylococci (38%) and S. aureus (31%), as well as other pathogens, including gram-negative bacteria (9%). Infections of these devices necessitate the extraction of all CIED components (generator and leads), increasing the mean hospitalization charges in the United States to $173,211, with overall in-hospital mortality rates ranging from 3.7% to 11.3%. Several modifiable and nonmodifiable patient-related, procedure-related, and device-related risk factors for CIED infections have been identified. These risk factors are common in the oncological population and have been compiled in various stratification scores. On the basis of these scoring systems, patients who have cancer are usually at intermediate to high risk for developing a CIED infection. The Prevention of Arrhythmia Device Infection Trial (ClinicalTrials.gov identifier NCT01628666) score is one of the most commonly used scoring systems because it is simple and has been independently validated to identify high-risk patients who may benefit from tailored strategies to reduce the risk of CIED infection. For patients with several nonmodifiable risks, alternative approaches may be used to lower the overall risk of infection, including confirming the indication for CIED use and consideration of a leadless CIED. In addition to the general surgical recommendations described above, the identification of modifiable risk factors is important because it may allow for further preventive measures to reduce the risk of CIED infection. These include preventive preprocedural measures supported by scientific consensus, such as: (1) provision of perioperative systemic antimicrobials; (2) use of a preoperative checklist; (3) delay of CIED implantation in patients with infection or fever for at least 24 hours; (4) avoidance of CVADs when introducing a CIED, when feasible; and (5) measures to decrease the risk of pocket hematoma (increasing the platelet count to >50,000/μl, discontinuation of antiplatelet medications 5–10 days before the procedure, avoidance of therapeutic low-molecular-weight heparin and of a bridging approach with heparin, and holding of anticoagulation therapy until the risk of bleeding has diminished in patients with a history of deep venous thrombosis or a CHA2DS2-VASc score <4). The latter three measures are commonly encountered in the cancer population and should be closely addressed. Perioperative recommendations for the prevention of CIED infections include: (1) consideration of adding an acellular dermal matrix within the surgical pocket to reinforce the incision site, (2) avoidance of antimicrobial irrigation within the pocket, and (3) use of an antimicrobial envelope (such as TYRX; Medtronic) that locally releases a high concentration of minocycline and rifampin within the surgical pocket for a minimum of 7 days in patients at high risk for developing CIED infection. The Worldwide Randomized Antibiotic Envelope Infection Prevention Trial (ClinicalTrials.gov identifier NCT02277990) demonstrated that the use of these envelopes significantly reduced the primary end point (infection resulting in CIED extraction or revision, long-term antibiotic therapy, or death within 12 months of device placement) from 1.2% (control) to 0.7% (envelope; hazard ratio, 0.6; p = .04). The number needed to treat was 100 for high-risk patients undergoing implantable cardiac defibrillator/cardiac resynchronization therapy defibrillator replacement or upgrade. However, this trial excluded patients at increased risk for infection, such as those with prior CIED infection, those receiving immunosuppressive therapy, those with long-term vascular access, and patients undergoing hemodialysis. Therefore, selecting a population at high risk for infection, such as an oncological population with several risk factors, would likely decrease the number needed to treat and improve the cost effectiveness of the envelope, which is priced slightly below $1000. At our institution, all patients who have cancer receive the TYRX envelope as part of a comprehensive prophylactic bundle, which has been demonstrated to be both safe and effective in maintaining a low rate of CIED infection (1.3%), well within published averages in the broader population of all CIED recipients.
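As an arithmetic check on the envelope trial figures above: the number needed to treat is the reciprocal of the absolute risk reduction, so the overall event rates (1.2% vs. 0.7%) imply an NNT of 200 across the whole cohort, while the NNT of 100 cited for the high-risk replacement/upgrade subgroup corresponds to an absolute risk reduction of about 1% in that subgroup.

```python
def number_needed_to_treat(control_rate: float, treated_rate: float) -> float:
    """NNT = 1 / absolute risk reduction."""
    return 1 / (control_rate - treated_rate)

print(round(number_needed_to_treat(0.012, 0.007)))  # 200 across the whole cohort
# An NNT of 100, as reported for the high-risk replacement/upgrade subgroup,
# corresponds to an absolute risk reduction of 1 / 100 = 1% in that subgroup.
```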
Of note, a few studies have evaluated novel techniques for decreasing microbial adherence to CIEDs. Polyurethane has been shown in vitro to have a higher affinity for biofilm-producing pathogens than titanium. Therefore, increasing the titanium:polyurethane surface ratio of these cardiac devices may decrease the rate of CIED infection. Furthermore, the use of silver ion-based antimicrobial surface technology for the reduction of bacterial growth on CIEDs was shown to be safe in an ovine model. However, CIED surface modification techniques are unlikely to progress because of the complexity of the regulatory approval pathways, the diversity of CIED models and manufacturing companies worldwide, and the availability of more cost-effective preventive measures already approved by the FDA, such as antimicrobial envelopes. Postprocedural prophylactic measures in CIED recipients include: (1) the use of pressure dressings to decrease hematoma occurrence, and of hemostatic gelatin sponges in patients receiving anticoagulation or dual antiplatelet therapy; (2) refraining from early reintervention, which dramatically increases the risk of CIED infection; and (3) avoidance of postoperative antimicrobials. The last measure was confirmed in the Prevention of Arrhythmia Device Infection Trial, which included 19,603 patients and revealed no benefit from an incremental approach (preoperative intravenous vancomycin or cefazolin plus an intraoperative bacitracin wash and a postoperative oral cephalosporin) over the conventional approach (a single dose of preoperative cefazolin or vancomycin; odds ratio, 0.77; p = .1).
Ommaya Reservoirs and External Ventricular Drains
An Ommaya reservoir, a small, dome-shaped, subgaleal reservoir connected to an intraventricular catheter, is the preferred device for intrathecal infusion of chemotherapy in patients with leptomeningeal cancer, whereas EVDs are used for temporary diversion of cerebrospinal fluid (CSF) from an obstructed ventricular system in cases of acute hydrocephalus, for monitoring of intracranial pressure, and as part of the treatment approach for infected CSF shunts. These devices can become infected, manifesting as a local skin and soft tissue inflammatory process or as meningitis and ventriculitis, at a rate of 6% for Ommaya reservoirs and 8% for EVDs. Concomitant bloodstream infections have been identified in 7.5%–12% of Ommaya reservoir infections. The overall incidence of infection in previous studies was 0.74 per 10,000 Ommaya reservoir-days and 11.4 per 10,000 EVD-days. These infections usually occur soon after the time of placement or later, through retrograde spread by exit-site colonization or direct inoculation during device manipulation. The main risk factor for Ommaya reservoir infections is the frequency of CSF sampling, whereas the main risk factors for EVD infections include prolonged catheterization, subarachnoid hemorrhage, drain blockage, and CSF leakage at the EVD entry site. The most common organisms causing Ommaya reservoir infections are predominantly normal skin flora, including Staphylococcus spp. and Cutibacterium acnes, whereas EVD infections are increasingly caused by gram-negative rods, such as Escherichia coli, Pseudomonas aeruginosa, and Enterobacter, Acinetobacter, and Klebsiella species. Preprocedural use of antimicrobials such as cefazolin is necessary to reduce the rate of SSIs and central nervous system infections in patients with Ommaya reservoirs and EVDs. Perioperative chlorhexidine shampoo and hair clipping, with special care to avoid causing skin abrasions, also should be implemented. In addition, an Ommaya reservoir should be placed under a skin flap that allows for implantation at a safe distance from the incision site. Furthermore, although the few available studies have had mixed results, the use of subcutaneous long-tunneled EVDs to the chest wall can be considered at institutions with high infection rates.
Moreover, silver-coated and, more recently, minocycline- and rifampin-impregnated catheters have proven to be cost-effective in significantly reducing the rate of infection in EVDs (risk ratio, 0.31; 95% CI, 0.15–0.64; p = .0002). However, another study did not show an additional benefit of using AICs, likely because of a small sample size. Similar to other devices, studies have shown an advantage with the prolonged use of postprocedural antibiotics as long as an EVD remains in place compared with no postoperative antimicrobial use (3% vs. 11%; p = .01). Other preventive interventions, including the use of a daily prophylactic bundle plus intraventricular amikacin, have also had encouraging results. However, because these were relatively small studies with the potential for drug-related toxic effects and development of multidrug-resistant pathogens, these findings should be verified in large, multicenter, randomized controlled studies. Other interventions, such as routine EVD exchange, should not be performed because they have not been shown to reduce the rate of infection. Also, frequent CSF analysis with cultures at each use may detect preclinical infections with C. acnes or staphylococci. However, these results must be interpreted with caution because these pathogens may also be contaminants. Once an Ommaya reservoir or an EVD has been placed, the risk of infection can be minimized through the use of institutional protocols established to ensure safe, sterile access of the device by only highly qualified personnel. Minimal manipulation of the device, minimizing the number of days the device remains in situ, and implementing an infection control protocol have all been shown to decrease the incidence of these infections. The introduction of an EVD care bundle that includes a standardized technique of hand washing for aseptic CSF sampling, the use of surgical theater-standard scrubs and preparations, and cleaning of the EVD access ports while wearing a mask and gloves significantly decreased the rate of infection from 21 to 9 cases per 1000 EVD-days (p = .003). In a meta-analysis, the addition of a chlorhexidine-impregnated dressing to the catheter exit site significantly reduced the incidence of EVD infections (7.9% vs. 1.7%; risk difference, 0.07; 95% CI, 0.0–0.13; p = .04). Similar bundled approaches for the prevention of Ommaya reservoir infections have been successful. Hence, given the difficulty of assessing the effectiveness of each individual component and the relatively low cost of these preventive bundles, we recommend their continued use to further reduce the rate of these infections.
Breast Tissue Expanders and Permanent Implants
Breast cancer is the most common cancer worldwide, with a 5-year survival rate >90%. In 2021, the American Society of Plastic Surgeons reported that 103,485 postmastectomy implant-based reconstructive procedures were performed in the United States. Some of the patients who underwent these procedures had direct-to-implant reconstruction (a one-step approach), whereas >80% had implantation of a temporary tissue expander (TE); once a sufficiently large soft tissue envelope was created, the TE was replaced by a permanent breast implant (a two-step approach). Unfortunately, the average TE infection rate is high, at 13%. These infections occur mostly in the early postoperative period, with one third occurring within the first 30 days after surgery (median, 48 days).
The most common bacteria causing TE infections are methicillin-resistant staphylococci (44%) and gram-negative pathogens (26%), including Pseudomonas (13%) and Klebsiella (5%) spp. In addition to the traditional risk factors for infection, patients with breast TEs have several unique risk factors, including a body mass index >25 kg/m², breast cup size >C, prior breast implant infection, bilateral or immediate breast reconstruction, axillary lymph node resection, use of an acellular dermal matrix, extended duration of surgical drains, mastectomy skin flap necrosis, breaks in the sterility process of TE implant infusions, and use of adjuvant chemotherapy and radiation therapy. Patients at high risk for infection should consider proceeding with an autologous flap reconstruction instead of an implant-based reconstruction because of the lower rate of infection (approximately 7%) with the former procedure. As with other methods of prevention, the use of preprocedural systemic antimicrobials has been proven to significantly reduce the rate of infection. In addition, following a detailed best-practice standardized protocol has helped reduce the incidence of these complications. Furthermore, periprocedural measures, including antimicrobial irrigation of the pocket and implant immersion, were shown in a meta-analysis to decrease infection rates (risk ratio, 0.52; 95% CI, 0.38–0.81; p = .004), although with a relatively low degree of evidence. These antimicrobial solutions are promptly absorbed, rapidly decreasing their effectiveness. Therefore, similar to the antibiotic beads used in orthopedics, we developed a completely bioabsorbable film that allows for full expansion of the temporary breast implant and elutes a high concentration of antibiotic locally for an extended period. This promising film has been shown in vitro to prevent biofilm formation by diverse microorganisms on silicone surfaces with minimal cytotoxicity. Of note, acellular dermal matrices have been increasingly used for surgical reconstruction to allow for lower pole support of the breast implant, enhancing aesthetic outcomes while decreasing operative time. These biologic meshes are available in aseptic or sterile form, with no significant difference in the rate of infection between the two forms. However, they have been associated with an increased incidence of seroma and hematoma and with extended durations of surgical drains. These drains likely serve as microbial conduits for pathogens to migrate from the skin to the implant, with an overall risk ratio for infection of 2.47 (95% CI, 1.71–3.57; p = .01). Also, a seroma located between an acellular dermal matrix and an implant is relatively isolated from the host's immune system, likely further increasing the probability of infection. Therefore, the goal is to place these drains through a subcutaneous tunnel and then remove them as soon as daily output falls below 30 ml, or even earlier, and in any case within 7–14 days of use.
Further infection-preventive measures during the early postoperative period include: (1) avoidance of extending postoperative antimicrobial use beyond 24 hours; although extended use is common practice, it does not reduce the rate of infection and leads to the development of multidrug-resistant pathogens; (2) allowing adequate incisional healing before initiating adjuvant bevacizumab use or radiation therapy; (3) proceeding with early expansion of the TE to decrease the size of the seroma pocket, but without significantly increasing the surface tension and causing skin flap necrosis; (4) keeping the surgical bulb at gravity at all times to keep the drained fluid from re-entering the surgical pocket; and (5) consideration of additional techniques, such as using a chlorhexidine-impregnated dressing at the drain exit site and exchanging it weekly, along with a daily antiseptic solution within the surgical bulb, to further decrease bacterial colonization (p = .03) and the likelihood of a secondary infection within 30 days (p = .13) and 1 year (p = .45).
Percutaneous Nephrostomy Tubes and Ureteral Stents
These devices are mainly indicated for temporary or permanent decompression of the urinary tract because of intrinsic or extrinsic malignant obstructions, mainly cervical or colorectal cancers. Ureteral stents are also used temporarily after urinary diversion or ureteral reimplantation surgeries to prevent strictures at the anastomotic site. The definition of these infections is not standardized, but reported rates are 1%–19% for PCNTs and 11% for ureteral stents. Using a stringent clinical and microbiologic definition, we found that, at our institution, the infection rate among patients with newly placed PCNTs was 14%, with an infection incidence of 2.65 per 1000 patient-days. These infections occur early, with a median time from PCNT placement to infection of 44 days (interquartile range, 25–61 days). These devices can be readily colonized and infected by lower urinary tract pathogens acquired during or after their placement, including Pseudomonas, Escherichia, Stenotrophomonas, Klebsiella, and Enterococcus spp., with up to 50% of infections being polymicrobial or caused by normal skin flora at the PCNT exit site. Similar to Foley catheter-related infections, the main risk factor for these infections is the length of time the device remains in place. Therefore, periodically reassessing the need for these devices to determine whether their removal is possible is the best approach to preventing these infections. The use of preprocedural antimicrobials with these clean–contaminated procedures is indicated for elective PCNT and ureteral stent placement and exchange. Prophylaxis with cefazolin, focused mainly on skin flora, was not beneficial for patients receiving PCNTs. However, when ceftriaxone or ampicillin/sulbactam was used to cover expected uropathogens, the rate of serious postprocedural sepsis-related complications in high-risk patients decreased from 50% to 9%. For patients receiving ureteral stents who are considered to be at high risk for infection (those who are immunocompromised, have had recurrent urinary tract infections, have uncontrolled diabetes, or have a history of infected renal stones), we usually administer ciprofloxacin or trimethoprim-sulfamethoxazole prophylaxis, or intravenous antimicrobials for patients undergoing complex surgery that requires a high level of instrumentation under general anesthesia.
A targeted prophylactic approach based on the colonizing organisms grown in a urine culture obtained a few days before a scheduled exchange appeared to have a more protective effect than providing standard-of-care prophylactic antimicrobials, but larger supporting studies are needed. Several approaches to coating these urinary devices to inhibit bacterial adhesion and growth have been evolving. For example, they have been coated with diverse antibiotics as well as chitosan, gendine, hyaluronic acid, hydrogel, silver, triclosan, and many other substances. One of the main concerns associated with antibiotic-based coatings, as mentioned above, is the lack of long-term effectiveness and the development of resistance. Therefore, combination regimens that reduce the probability of resistance, including minocycline-, rifampin-, and chlorhexidine-impregnated catheters, have been developed. Unfortunately, because of their high cost of production, potential toxicity, and a lack of adequate clinical studies, these catheters have yet to be introduced into practice. Postprocedural preventive strategies, including maintaining a clean exit site with antiseptic use, regular dressing exchange, and placement of a closed urinary drainage collection bag below the PCNT insertion site to keep urine from recirculating back into the urinary collection system, may help decrease the rate of infection. Also, concomitant use of Foley catheters with PCNTs and ureteral stents should be avoided when feasible. Furthermore, in patients with frequent exit-site infections, using a chlorhexidine-impregnated dressing and exchanging it weekly should be considered. Moreover, to avoid the development of infections with multidrug-resistant organisms and the inappropriate use of antimicrobials, surveillance urinary cultures and the treatment of asymptomatic patients should be discouraged. Finally, bacterial colonization occurs soon after placement of these urinary devices, with subsequent encrustation of debris and solutes and formation of a complex intraluminal biofilm over time. This eventually leads to obstruction of the device, resulting in progressive hydronephrosis, renal failure, and an increased likelihood of pyelonephritis, renal abscess, or even bacteremia. Therefore, the device should be replaced routinely every 3 months (or even more frequently in patients at high risk for intraluminal obstruction), and definitive removal should be attempted when clinically possible. The average cost of $3000 per procedure is considerably lower than the approximately $40,000 cost of treating each episode of these almost inexorable infectious events.
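A rough break-even sketch for the routine-exchange economics above, using the quoted $3,000-per-procedure and roughly $40,000-per-infection figures; this is a deliberate simplification that ignores morbidity, mortality, and discounting.

```python
# Break-even arithmetic for routine urinary-device exchange, using the cost
# figures quoted above; a deliberate simplification for illustration.
EXCHANGE_COST_USD = 3_000
INFECTION_EPISODE_COST_USD = 40_000

def infections_averted_to_break_even(exchanges_per_year: int = 4) -> float:
    """Infection episodes per patient-year that exchanges must prevent."""
    return exchanges_per_year * EXCHANGE_COST_USD / INFECTION_EPISODE_COST_USD

# Quarterly exchanges ($12,000/year) pay for themselves if they prevent
# about 0.3 infection episodes per patient-year.
print(infections_averted_to_break_even())  # 0.3
```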
Other Relevant Devices
Many additional implantable devices have been used to support and improve the quality of life of patients living with advanced cancers, including pleural and peritoneal drains, esophageal and biliary stents, and percutaneous endoscopic gastrostomy (PEG) and percutaneous cholecystostomy tubes. Unfortunately, data on preventing infections of these devices are limited, mainly because of the relatively low infection rates and the short life spans of patients receiving these implants, usually for palliative purposes. However, below we describe several agreed-upon recommendations for preventing infections of these devices. Preprocedural prophylactic antimicrobials are not needed for routine procedures classified as clean, such as esophageal stent and pleural or peritoneal drain placement, or for biliary stent insertion with resolution of an obstruction. However, PEG tube placement, which is considered a clean–contaminated procedure, has been associated with a significant reduction in the incidence of peristomal infection when prophylactic cefazolin was administered (odds ratio, 0.36; 95% CI, 0.26–0.50). Also, percutaneous cholecystostomy tubes are usually placed in patients with cholecystitis; hence, placement of these tubes is considered a contaminated procedure, for which antimicrobials with enteric coverage, such as ampicillin/sulbactam, are warranted if the patient is not already receiving another antibiotic. Similar to other procedures, consensus statements by experts agree that physicians should use a dedicated operating or procedural room during insertion of these devices, an adequate local antiseptic, and fully sterile body draping and sterile gloves, as well as continuously educate health care personnel and follow standardized institutional protocols. In addition, authors have described three noteworthy measures for the prevention of biliary stent-related infections. (1) Use of a disposable single-use duodenoscope for placement of biliary stents: Because of the complicated design of the reusable duodenoscopes used for biliary stent placement, cleaning them under standard sterilization protocols is challenging, which has led to several outbreaks of multidrug-resistant bacterial infections. Until better processes for duodenoscope cleaning are developed, the clinician must rely on personal judgment and infection control reports to detect outbreaks. Therefore, for patients at high risk for infectious complications, or during ongoing outbreaks, the use of disposable single-use duodenoscopes should be considered. (2) Plastic stents versus covered and uncovered biliary self-expandable metal stents (SEMSs): The use of these stents should be individualized for each patient. Plastic stents are less expensive than SEMSs, but they have a smaller diameter (about one third that of SEMSs). This can result in more rapid biliary sludge accumulation and bacterial biofilm proliferation, leading to occlusion and eventually increasing the rate of recurrent infections. Hence, plastic stents require routine exchange every 3 months and are therefore indicated for patients with a life expectancy of ≤3 months. SEMSs, conversely, integrate into the biliary tract and become very difficult to remove. To circumvent this complication, fully and partially silicone-covered and polytetrafluoroethylene-covered SEMSs have been developed; these maintain a large luminal patency, decrease tissue embedding, and can be removed easily, which is particularly valuable if the patient develops an infection, because stent removal has been shown to significantly decrease the rate of recurrent cholangitis. Nonetheless, the main limitation of covered SEMSs remains their migration, which occurs in about 10% of cases. Taking all this into account, no differences in the rate of infection have been shown between covered and uncovered SEMSs, whereas a series of meta-analyses demonstrated substantially lower sepsis and cholangitis rates with SEMSs than with plastic stents (odds ratio, 0.53; 95% CI, 0.37–0.77). (3) Surface modification techniques for biliary stents with silver ions: This promising technology has been shown both in vitro and in animal models to significantly decrease biofilm formation and increase stent patency.
Hopefully, the use of these antimicrobial surface modification technologies, which have been successfully used with intravenous catheters, will continue to grow and expand to other devices and eventually will be introduced into clinical practice in the near future. Postprocedural infection preventive recommendations mainly consist of maintaining a clean external drain with the use of soap and water or hydrogen peroxide and covering the drain exit site with sterile dressing. Also, the use of a PEG tube requires daily rotation of 360 degrees both clockwise and counterclockwise to prevent pressure ulcers from forming between the abdominal and gastric walls, leading to tissue necrosis and infection. Furthermore, patients receiving biliary stents should avoid using long-term postprocedural ciprofloxacin for the prevention of biliary stent blockage because this intervention has not been proven to improve stent patency or infection rates. Most importantly, all patients should have an instruction booklet, access to an institutional hotline, as well as regular clinical follow-up according to institutional guidelines with a provider experienced in the long-term use and management of infectious complications of these devices. These devices include nontunneled and tunneled centrally inserted central catheters, peripherally inserted central catheters, as well as totally implantable venous access devices. , These central venous devices, which are used in at least 4 million patients in the United States and are left in place for several months, are essential lifelines for patients living with cancer. However, CVADs are associated with a wide array of infectious complications, including localized exit-site infections, tunnel-related or pocket-related infections, and life-threating catheter-related bloodstream infections (CRBSIs). The infection rates of the latter vary significantly among different clinical settings, but it has been estimated that, in the oncological population, it is approximately 2.5 per 1000 catheter-days. Femorally inserted central catheters have the highest risk of infection, followed by centrally inserted central catheters, peripherally inserted central catheters, and totally implantable venous access devices. In addition, patients receiving chemotherapy, total parenteral nutrition, or who are neutropenic for a prolonged period of time will be at increased risk for infection. The pathogens that most frequently are responsible for CRBSIs are gram-positive bacteria, in particular, coagulase-negative staphylococci, S. aureus , and Enterococcus species, whereas gram-negative microorganisms account for approximately 20%. , The average cost per episode of CRBSI is $45,814 (95% CI, $30,919–$65,245), making CRBSI one of the costliest health care-associated infections. CVADs have four main routes of contamination that are the targets of infection-preventive measures: (1) migration of skin organisms at the insertion site, resulting in bacterial adhesion to the external or intraluminal surface of the device; (2) direct contamination by contact with hands or contaminated fluids or devices; (3) less commonly, catheters may become hematogenously seeded from another focus of infection; and (4) rarely, infusate contamination may lead to a CRBSI. Therefore, several well established, evidence-based recommendations of a bundle approach have been designed to mitigate the risk for infection. 
This bundle intervention includes the implementation of specific steps during both the insertion and the maintenance of central lines : (1) educating and designating only trained health care personnel; (2) hand hygiene and the use of sterile gloves before catheter insertion; (3) the use of alcohol-containing CHG for skin antisepsis before insertion and during dressing change; (4) maximal sterile barrier precautions, including the use of a cap, mask, gown, and sterile full-body drape; (5) avoiding the use of systemic antimicrobial prophylaxis; (6) preferring an infraclavicular rather than a supraclavicular or groin exit site; (7) selecting a CVAD with the minimum number of lumens and to be used for the fewest days necessary for management of the patient; (8) implementation of ultrasound guidance to reduce the number of catheter placement attempts; (9) choosing a suture-less securement device with needle-less connectors; (10) placing a sterile, transparent dressing over the insertion site and replacing it no more than once a week (unless the dressing is soiled or loose); (11) avoiding submerging the catheter in water or using topical antimicrobial ointments at insertion sites as well as not replacing the CVAD to prevent CRBSI, but replacing the administration set and needle-less connectors at least every 7 days assuming the patient has not received blood, blood products, or fat emulsions, in which case they must be replaced within 24 hours after the infusion; and (12) most importantly, it is encouraged to have collaborative-based performance-improvement initiatives. These interventions require a designated physician and nursing team leader along with a checklist to assess compliance with the elements of the bundle and empowerment to stop the procedure if protocols are not followed. If compliance with all components is high, the bundle approach has reported a statistically significant decrease in the rate of CRBSI of 66% ( p < .002). The American Society of Clinical Oncology has high-lighted the importance of CRBSIs and emphasized the need for more research targeting patients with cancer, mainly because the majority of studies have focused on patients who have indwelling CVADs for a short term, such as in intensive care units. However, based on the available literature, several additional CRBSI-preventive measures can be instituted. Simple and inexpensive interventions (<$10 per unit) in which CRBSI remains elevated despite maximum compliance with the aforementioned measures are the use of 70% isopropyl alcohol caps for needle-less connectors and the placement of a chlorhexidine-impregnated dressing , around the catheter insertion site and exchanging it every 7 days. These two interventions have been effective in reducing the incidence of intraluminal and extraluminal infections, respectively. Furthermore, the introduction of US Food and Drug Administration (FDA)-approved antimicrobial-impregnated catheters (AICs) has added an extra layer of CRBSI prevention. The use of these AICs is associated with a markedly lower rate of catheter colonization and CRBSI compared with non-AICs. , Cost-effectiveness assessments of these relatively inexpensive devices have justified their integration into clinical practice. Of the most commonly used AICs, minocycline/rifampin-impregnated catheters have been associated with lower rates of CRBSI than chlorhexidine/silver sulfadiazine-impregnated catheters (0.3% vs. 
3.4%; p < .002) , without an increased incidence of antibacterial resistance of Staphylococcus species. Moreover, AICs ensure protection for a limited time, ranging from 28 to 50 days in the setting of a minocycline/rifampin-impregnated catheter, which contrasts with an average of 7 days in the setting of a chlorhexidine/silver sulfadiazine-impregnated catheter. – Therefore, the use of antimicrobial lock solutions has been proposed as a method of preventing intraluminal CRBSI of CVADs that are projected to remain in place for an extended duration, especially in patients with a history of multiple CRBSIs. A meta-analysis of randomized controlled trials comparing antimicrobial lock solutions with heparin revealed a 69% reduction in the incidence of CRBSIs. These antimicrobial lock solutions can be created with numerous drugs and drug combinations. The simplest lock solutions are those formulated with ethanol, which was revealed in another meta-analysis of randomized controlled trials to significantly decrease CRBSI compared with heparin alone (odds ratio, 0.53; p = .004). However, ethanol concentrations and antimicrobial lock solution dwell times are not standardized. Also, ethanol concentrations >28% should be avoided because they lead to plasma protein precipitation and structural changes in CVADs, mainly polyurethane catheters. Other antimicrobial lock solutions, such as the chelators citrate and EDTA, have gained attention because they have excellent anticoagulant activity, prevent biofilm formation, have antimicrobial characteristics, and inhibit bacterial proliferation, whereas heparin may anecdotally enhance biofilm growth. The use of a combined antimicrobial chelator lock solution, such as minocycline–EDTA and taurolidine–citrate, has led to remarkable progress in preventing CRBSIs in patients who have cancer. , Another promising antimicrobial lock solution is nitroglycerin–citrate–ethanol, a nonantibiotic chelator combination. This lock solution is safe and has unique features of an active anticoagulant, no risk of triggering bacterial resistance, and the ability to disrupt biofilm. These findings were validated in a clinical study that evaluated patients with hematological malignancies and showed a considerable reduction in the incidence of CRBSIs. Although these lock solutions are well studied, currently, there are no FDA-approved lock formulations commercially available for which they are prepared locally in hospital pharmacies. The components of the antimicrobial lock solutions are usually generic, economical, and effective in preventing thrombosis and CRBSIs. However, their beneficial use in preventing infections must be balanced with potential breaches in catheter integrity, bacterial resistance, systemic toxicity, frequent antimicrobial lock solution exchanges (depending on the stability of each component of the solution), and inability to use the CVAD while the lock solution is dwelling. The indications for permanent pacemakers, implantable cardiac defibrillators, and cardiac resynchronization therapy, collectively known as CIEDs, are extensive. The cardiotoxicity of some cancer therapies and the rising average age of the oncological population have increased the need for these devices. In the United States, more than 100,000 implantable cardiac defibrillators and 300,000 permanent pacemakers are inserted every year. 
Unfortunately, the rates of CIED infections have been reported to be approximately 4%, with a disproportionate increase in these rates compared with the increase in CIED implantation. The most common microorganisms causing CIED infections are expected skin flora, such as coagulase-negative staphylococci (38%), S. aureus (31%), and other pathogens, including gram-negative bacteria (9%). , Infections of these devices necessitate the extraction of all CIED components (generator and leads), increasing the mean hospitalization charges in the United States to $173,211, with overall in-hospital mortality rates ranging from 3.7% to 11.3%. Several modifiable and nonmodifiable patient-related, procedure-related, and device-related risk factors for CIED infections have been identified. These risk factors are common in the oncological population and have been compiled in various stratification scores. On the basis of these scoring systems, patients who have cancer are usually at intermediate to high risk for developing a CIED infection. The Prevention of Arrhythmia Device Infection Trial ( ClinicalTrials.gov identifier NCT01628666 ) score , is one of the most commonly used scoring systems because it is simple and has been independently validated to identify high-risk patients who may benefit from tailored strategies to reduce the risk of CIED infection. For patients with several nonmodifiable risks, alternative approaches may be used to lower the overall risk of infection, including confirming the indication for CIED use and consideration of a leadless CIED. , In addition to the general surgical recommendations described above, the identification of modifiable risk factors is important because it may allow for further preventive measures to reduce the risk of CIED infection. These include preventive preprocedural measures supported by scientific consensus, such as: (1) provision of perioperative systemic antimicrobials ; (2) use of a preoperative checklist , ; (3) delay of CIED implantation in patients with infection or fever for at least 24 hours; (4) avoidance of CVADs when introducing a CIED, when feasible ; and (5) measures to decrease the risk of pocket hematoma (increasing platelet count to >50,000/μl, discontinuation of antiplatelet medications within 5–10 days before the procedure, avoidance of therapeutic low-molecular-weight heparin and a bridging approach with heparin, and holding of anticoagulation therapy until the risk of bleeding has diminished in patients with a history of deep venous thrombosis or CHA 2 DS 2 -VASc score <4). The latter three measures are commonly encountered in the cancer population and should be closely addressed. Perioperative recommendations for the prevention of CIED infections include: (1) consideration of adding an acellular dermal matrix within the surgical pocket to reinforce the incision site, (2) avoidance of antimicrobial irrigation within the pocket, and (3) use of an antimicrobial envelope (such as TYRX; Medtronic) that locally releases a high concentration of minocycline and rifampin within the surgical pocket for a minimum of 7 days in patients at high-risk for developing CIED infection. 
The World-wide Randomized Antibiotic Envelope Infection Prevention Trial ( ClinicalTrials.gov identifier NCT02277990 ) demonstrated that the use of these envelopes significantly reduced the primary end point (infection resulting in CIED extraction or revision, long-term antibiotic therapy, or death within 12 months of device placement) from 1.2% (control) to 0.7% (envelope; hazard ratio, 0.6; p = .04). The number needed to treat was 100 for high-risk patients undergoing implantable cardiac defibrillator/cardiac resynchronization therapy defibrillator replacement or upgrade. However, this trial excluded patients at increased risk for infection, such as those with prior CIED infection, those receiving immunosuppressive therapy, those with long-term vascular access, or patients undergoing hemodialysis. Therefore, selecting a high-risk population for infection, such as an oncological population with several risk factors, would likely decrease the number needed to treat and improve the cost effectiveness of the envelope, which is priced slightly below $1000. , At our institution, all patients who have cancer receive the TYRX envelope as part of a comprehensive prophylactic bundle, which has been demonstrated to be both safe and effective in maintaining a low rate of CIED infection (1.3%) and is well within published averages in the broader population of all CIED recipients. Of note, few studies have evaluated novel techniques for decreasing microbial adherence to CIEDs. Polyurethane has been shown to have a higher affinity for biofilm-producing pathogens than titanium in vitro. Therefore, increasing the titanium:polyurethane surface ratio of these cardiac devices may decrease the rate of CIED infection. Furthermore, the use of silver ion-based antimicrobial surface technology for the reduction of bacterial growth on CIEDs was shown to be safe in an ovine model. However, CIED surface modification techniques are unlikely to progress because of the complexity of the regulatory approval pathways, the diversity of CIED models and manufacturing companies worldwide, and the availability of more cost-effective preventive measures already approved by the FDA, such as antimicrobial envelopes. Furthermore, postprocedural prophylactic measures in CIED recipients include: (1) the use of pressure dressings to decrease hematoma occurrence and hemostatic gelatin sponges in patients receiving anticoagulation or dual antiplatelet therapy ; (2) refraining from early reintervention, which dramatically increases the risk of CIED infection ; and (3) avoidance of postoperative antimicrobials. The last measure was confirmed in the Prevention of Arrhythmia Device Infection Trial, which included 19,603 patients and revealed no benefit from an incremental approach (preoperative intravenous vancomycin or cefazolin plus intraoperative bacitracin wash and postoperative oral cephalosporin) over the conventional approach (single dose of preoperative cefazolin or vancomycin; odds ratio, 0.77; p = .1). An Ommaya reservoir, a small, dome-shaped, subgaleal reservoir connected to an intraventricular catheter, is the preferred device for intrathecal infusion of chemotherapy in patients with leptomeningeal cancer ; whereas EVDs are used for temporary diversion of cerebrospinal fluid (CSF) from an obstructed ventricular system in cases of acute hydrocephalus, monitoring of intracranial pressure, and as part of the treatment approach for infected CSF shunts. 
These devices can become infected, manifesting as a local skin soft tissue inflammatory infectious process or with meningitis and ventriculitis at a rate of 6% for Ommaya reservoirs and 8% for EVDs. , Concomitant bloodstream infections have been identified in 7.5%–12% of Ommaya reservoir infections. , The overall incidence of infection in previous studies was 0.74 per 10,000 Ommaya reservoir-days and 11.4 per 10,000 EVD-days. These infections usually occur soon after the time of placement or later through retrograde spread by exit-site colonization or direct inoculation through device manipulation. , The main risk factor for Ommaya reservoir infections is the frequency of CSF sampling, whereas the main risk factors for EVD infections include prolonged catheterization, subarachnoid hemorrhage, drain blockage, and CSF leakage at the EVD entry site. , – The most common organisms causing Ommaya reservoir infections are predominantly normal skin flora, including Staphylococcus spp. and Cutibacterium acnes ; whereas EVD infections are increasingly caused by gram-negative rods, such as Escherichia coli , Pseudomonas aeruginosa , and Enterobacter , Acinetobacter , and Klebsiella species. , , Preprocedural use of antimicrobials such as cefazolin is necessary to reduce the rate of SSIs and central nervous system infections in patients with Ommaya reservoirs and EVDs. Perioperative chlorhexidine shampoo and hair clipping, with special care to avoid causing skin abrasions, also should be implemented. In addition, an Ommaya reservoir should be placed under a skin flap that allows for implantation at a safe distance from the incision site. Furthermore, despite few studies with mixed results, at institutions with high rates of infections, the use of subcutaneous long-tunneling EVDs to the chest wall can be considered. Moreover, silver-coated and, more recently, minocycline- and rifampin-impregnated catheters have proven to be cost-effective in significantly reducing the rate of infection in EVDs (risk ratio, 0.31; 95% CI, 0.15–0.64; p = .0002). However, another study did not show an additional benefit of using AICs, likely because of a small sample size. Similar to other devices, studies have shown an advantage with the prolonged use of postprocedural antibiotics as long as an EVD remains in place compared with no postoperative antimicrobial use (3% vs. 11%; p = .01). Other preventative interventions, including the use of a daily prophylactic bundle plus intraventricular amikacin, also had encouraging results. However, because these were relatively small studies with the potential for drug-related toxic effects and development of multidrug-resistant pathogens, these findings should be verified in large, multicenter, randomized controlled studies. Other interventions, such as routine EVD exchange, should not be performed because they have not been shown to reduce the rate of infection. , Also, frequent CSF analysis with cultures at each use may detect preclinical infections with C. acnes or staphylococci. However, these results must be interpreted with caution because these pathogens may also be contaminants. Once an Ommaya reservoir or an EVD has been placed, the risk of infection can be minimized through the use of institutional protocols established for ensuring safe, sterile access of the device by only highly qualified personnel. 
Minimal manipulation of the device, minimizing the number of days the device remains in situ, and implementing an infection control protocol have all been shown to decrease the incidence of these infections. The introduction of an EVD care bundle that includes a standardized hand-washing technique for aseptic CSF sampling, the use of surgical theater-standard scrubs and preparations, and cleaning of the EVD access ports while wearing a mask and gloves significantly decreased the rate of infection from 21 to 9 cases per 1000 EVD-days (p = .003). In a meta-analysis, the addition of a chlorhexidine-impregnated dressing to the catheter exit site significantly reduced the incidence of EVD infections (7.9% vs. 1.7%; risk difference, 0.07; 95% CI, 0.0–0.13; p = .04). Similar bundled approaches for the prevention of Ommaya reservoir infections have been successful. Hence, given the difficulty of assessing the effectiveness of each individual component and the relatively low cost of these measures, we recommend the continued use of these preventive bundles to further reduce the rate of these infections.

Breast cancer is the most common cancer worldwide, with a 5-year survival rate >90%. In 2021, the American Society of Plastic Surgeons reported that 103,485 postmastectomy implant-based reconstructive procedures were performed in the United States. Some of the patients who underwent these procedures had direct-to-implant reconstruction (one-step approach), whereas >80% had implantation of a temporary tissue expander (TE); once a sufficiently large soft tissue envelope was created, the TE was replaced by a permanent breast implant (two-step approach). Unfortunately, the average TE infection rate is high at 13%. These infections occur mostly in the early postoperative period, with one third occurring within the first 30 days after surgery (median, 48 days). The most common bacteria causing TE infections are methicillin-resistant staphylococci (44%) and gram-negative pathogens (26%), including Pseudomonas (13%) and Klebsiella (5%) spp. In addition to the traditional risk factors for infection, patients with breast TEs have several unique risk factors, including a body mass index >25 kg/m², breast cup size >C, prior breast implant infection, bilateral or immediate breast reconstruction, axillary lymph node resection, use of an acellular dermal matrix, extended duration of surgical drains, mastectomy skin flap necrosis, breaks in the sterility process of TE implant infusions, and use of adjuvant chemotherapy and radiation therapy. Patients at high risk for infection should consider proceeding with an autologous flap reconstruction instead of an implant-based reconstruction because of the lower rate of infection (approximately 7%) with the former procedure.

Similar to other methods of prevention, the use of preprocedural systemic antimicrobials has proven to significantly reduce the rate of infection. In addition, following a detailed best-practice standardized protocol has helped reduce the incidence of these complications. Furthermore, periprocedural measures, including antimicrobial irrigation of the pocket and implant immersion, were shown in a meta-analysis to decrease infection rates (risk ratio, 0.52; 95% CI, 0.38–0.81; p = .004), although with a relatively low degree of evidence. These antimicrobial solutions are promptly absorbed, rapidly decreasing their effectiveness.
Therefore, similar to antibiotic beads used in orthopedics, we developed a completely bioabsorbable film that allows for full expansion of the temporary breast implant and elutes a high concentration of antibiotic locally for an extended period. This promising film has been shown in vitro to prevent biofilm formation by diverse microorganisms on silicone surfaces with minimal cytotoxicity. Of note, acellular dermal matrices have been increasingly used for surgical reconstruction to provide lower pole support of the breast implant, enhancing aesthetic outcomes while decreasing operative time. These biologic meshes are available in aseptic or sterile form, with no significant difference in the rate of infection between the two forms. However, they have been associated with an increased incidence of seroma and hematoma and extended durations of surgical drains. These drains likely serve as microbial conduits for pathogens to migrate from the skin to the implant, with an overall risk ratio for infection of 2.47 (95% CI, 1.71–3.57; p = .01). Also, a seroma located between an acellular dermal matrix and an implant is relatively isolated from the host's immune system, likely further increasing the probability of infection. Therefore, the goal is to place these drains through a subcutaneous tunnel and then remove them as soon as possible once daily output is <30 ml, or even earlier, not surpassing 7–14 days of use.

Further infection preventive measures during the early postoperative period include (1) avoidance of extending postoperative antimicrobial use beyond 24 hours (although this is common practice, extended use does not reduce the rate of infection and leads to the development of multidrug-resistant pathogens); (2) allowing adequate incisional healing before initiating adjuvant bevacizumab use or radiation therapy; (3) proceeding with early expansion of the TE to decrease the size of the seroma pocket, but without significantly increasing the surface tension and causing skin flap necrosis; (4) keeping the surgical bulb at gravity at all times to keep the drained fluid from re-entering the surgical pocket; and (5) consideration of additional techniques, such as using a chlorhexidine-impregnated dressing at the drain exit site and exchanging it weekly, along with a daily antiseptic solution within the surgical bulb, to further decrease bacterial colonization (p = .03) and the likelihood of a secondary infection within 30 days (p = .13) and 1 year (p = .45).

Percutaneous nephrostomy tubes (PCNTs) and ureteral stents are mainly indicated for temporary or permanent decompression of the urinary tract in the setting of intrinsic or extrinsic malignant obstruction, mainly from cervical or colorectal cancers. Ureteral stents are also used temporarily after urinary diversion or ureteral reimplantation surgeries to prevent strictures at the anastomotic site. The definition of these infections is not standardized, but reported rates are 1%–19% for PCNTs and 11% for ureteral stents. Using a stringent clinical and microbiologic definition in patients with newly placed PCNTs at our institution, we found an infection rate of 14%, with an infection incidence of 2.65 per 1000 patient-days. These infections occur early, with a median time from PCNT placement to infection of 44 days (interquartile range, 25–61 days).
These devices can be readily colonized and infected by lower urinary tract pathogens acquired during or after their placement, including Pseudomonas, Escherichia, Stenotrophomonas, Klebsiella, and Enterococcus spp., with up to 50% of infections being polymicrobial or by normal skin flora at the PCNT exit site. Similar to Foley catheter-related infections, the main risk factor for these infections is the length of time the device remains in place. Therefore, periodically reassessing the need for these devices to determine whether their removal is possible is the best approach to prevent these infections.

The use of preprocedural antimicrobials with these clean–contaminated procedures is indicated for elective PCNT and ureteral stent placement and exchange. Prophylaxis with cefazolin that focused mainly on skin flora was not beneficial for patients receiving PCNTs. However, when ceftriaxone or ampicillin/sulbactam was used to cover expected uropathogens, the rate of serious postprocedural sepsis-related complications decreased in high-risk patients from 50% to 9%. For patients receiving ureteral stents considered to be at high risk for infection (those who are immunocompromised, have had recurrent urinary tract infections, have uncontrolled diabetes, or have a history of infected renal stones), we usually administer ciprofloxacin or trimethoprim-sulfamethoxazole prophylaxis, or intravenous antimicrobials to patients undergoing complex surgery that requires a high level of instrumentation under general anesthesia. A targeted prophylactic approach based on colonizing organisms' growth in urine culture obtained a few days before a scheduled exchange appeared to have a more protective effect than providing standard-of-care prophylactic antimicrobials, but larger studies with supporting evidence are needed.

Several approaches to coating these urinary devices to inhibit bacterial adhesion and growth have been evolving. For example, they have been coated with diverse antibiotics as well as chitosan, gendine, hyaluronic acid, hydrogel, silver, triclosan, and many other substances. One of the main concerns associated with antibiotic-based coatings, as mentioned above, is a lack of long-term effectiveness and development of resistance. Therefore, combination regimens that reduce the probability of resistance, including minocycline-, rifampin-, and chlorhexidine-impregnated catheters, have been developed. Unfortunately, because of their high cost of production and potential toxicity and a lack of adequate clinical studies, these catheters have yet to be introduced into practice.

Postprocedural preventive strategies, including maintaining a clean exit site area with antiseptic use, regular dressing exchange, and placement of a closed urinary drainage collection bag under the PCNT insertion site to keep urine from recirculating back into the urinary collection system, may help decrease the rate of infection. Also, concomitant use of Foley catheters with PCNTs and ureteral stents should be avoided when feasible. Furthermore, in patients with frequent exit site infections, using a chlorhexidine-impregnated dressing and exchanging it weekly should be considered. Moreover, to avoid development of infections with multidrug-resistant organisms and inappropriate use of antimicrobials, surveillance urinary cultures and giving treatment to asymptomatic patients should be discouraged.
Finally, bacterial colonization occurs soon after placement of these urinary devices, with subsequent encrustation of debris and solutes and formation of a complex intraluminal biofilm over time. This eventually leads to obstruction of the device, resulting in progressive hydronephrosis, renal failure, and an increased likelihood of pyelonephritis, renal abscess, or even bacteremia. Therefore, routine replacement of the device every 3 months (or even more frequently in patients at high risk for intraluminal obstruction) should be performed, and definitive removal should be attempted when clinically possible. The average cost of $3000 per procedure is considerably lower than the approximately $40,000 cost of treating each episode of these almost inexorable infectious events.
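A simple cost comparison using the figures above (illustrative arithmetic only):

$$4\ \text{exchanges/year} \times \$3{,}000 \approx \$12{,}000\ \text{per patient-year} \ll \$40{,}000\ \text{per treated infection episode}$$

On these numbers, quarterly routine exchange pays for itself if it prevents roughly one infectious episode per three patient-years.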
Many additional implantable devices have been used to support and improve the quality of life of patients living with advanced cancers, including pleural and peritoneal drains, esophageal and biliary stents, and PEG and percutaneous cholecystostomy tubes. Unfortunately, data on preventing infections of these devices are limited, mainly because of the relatively low infection rates and the short life spans of patients receiving these implants, usually for palliative purposes. However, below we describe several agreed-upon recommendations for preventing infections of these devices.

Preprocedural prophylactic antimicrobials are not needed for routine procedures classified as clean, such as esophageal stent and pleural or peritoneal drain placement, or for biliary stent insertion with resolution of an obstruction. However, PEG tube placement, which is considered a clean–contaminated procedure, has been associated with a significant reduction in the incidence of peristomal infection when prophylactic cefazolin was administered (odds ratio, 0.36; 95% CI, 0.26–0.50). Also, percutaneous cholecystostomy tubes are usually placed in patients with cholecystitis; hence, placement of these tubes is considered a contaminated procedure, for which antimicrobials with enteric coverage, such as ampicillin/sulbactam, are warranted if the patient is not already receiving another antibiotic. Similar to other procedures, consensus statements by experts agree that physicians should use an exclusive operating or procedural room during insertion of these devices, an adequate local antiseptic, and fully sterile body draping and sterile gloves, as well as continuously educate health care personnel and follow standardized institutional protocols.

In addition, authors have described three noteworthy measures for the prevention of biliary stent-related infections. (1) Use of a disposable single-use duodenoscope for placement of biliary stents: Because of the complicated design of the reusable duodenoscopes used for biliary stent placement, cleaning them under standard sterilization protocols is challenging, which has led to several outbreaks of multidrug-resistant bacterial infections. Until better processes for duodenoscope cleaning are developed, the clinician must rely on personal judgment and infection control reports to detect outbreaks. Therefore, for patients at high risk for infectious complications or during ongoing outbreaks, the use of disposable single-use duodenoscopes should be considered. (2) Plastic stents versus covered and uncovered biliary self-expandable metal stents (SEMSs): The use of these stents should be individualized for each patient. Plastic stents are less expensive than SEMSs, but they have a smaller diameter (about one third that of SEMSs). This can result in more rapid biliary sludge accumulation and bacterial biofilm proliferation, leading to occlusion and eventually an increased rate of recurrent infections. Hence, plastic stents require routine exchange every 3 months and are therefore indicated for patients with a life expectancy of ≤3 months. SEMSs, conversely, integrate into the biliary tract and become very difficult to remove. To circumvent this complication, fully and partially silicone-covered and polytetrafluoroethylene-covered SEMSs have been developed; these maintain a large luminal patency, decrease tissue embedding, and can be easily removed, which is particularly valuable if the patient develops an infection because stent removal has been shown to significantly decrease the rate of recurrent cholangitis. Nonetheless, the main limitation of covered SEMSs remains their migration, which occurs in about 10% of cases. Taking all this into account, no differences in the rate of infection have been found between covered and uncovered SEMSs, whereas a series of meta-analyses demonstrated substantially lower sepsis and cholangitis rates with SEMSs than with plastic stents (odds ratio, 0.53; 95% CI, 0.37–0.77). (3) Surface modification of biliary stents with silver ions: This promising technology has been shown both in vitro and in animal models to significantly decrease biofilm formation and increase stent patency. Hopefully, the use of these antimicrobial surface modification technologies, which have been used successfully with intravenous catheters, will continue to grow and expand to other devices and eventually be introduced into clinical practice in the near future.

Postprocedural infection preventive recommendations mainly consist of maintaining a clean external drain with the use of soap and water or hydrogen peroxide and covering the drain exit site with a sterile dressing. Also, a PEG tube requires daily rotation of 360 degrees, both clockwise and counterclockwise, to prevent pressure ulcers from forming between the abdominal and gastric walls, which can lead to tissue necrosis and infection. Furthermore, patients receiving biliary stents should avoid long-term postprocedural ciprofloxacin for the prevention of biliary stent blockage because this intervention has not been proven to improve stent patency or infection rates. Most importantly, all patients should have an instruction booklet and access to an institutional hotline, as well as regular clinical follow-up, according to institutional guidelines, with a provider experienced in the long-term use and management of infectious complications of these devices.

Continued progress in implementation science research has led to several improvements in effective health care-associated infection prevention strategies. However, persistent gaps between recommendations and practices remain. The involvement of several key stakeholders, including governmental policy makers, the research and development industry, specialty medical societies, hospital and infection control programs, surgeons, oncologists, and consulting health care providers, is paramount for continued reduction in the incidence of preventable foreign medical device-related infections.
Advancement in this intricate preventive arena will lead to further progress in cancer outcomes and physicians’ fulfillment as well as a significant decrease in the economic burden to the health care system.
A global perspective on bacterial diversity in the terrestrial deep subsurface
43e422e6-f53c-42ae-9ed9-6bc5377cde10
9993121
Microbiology[mh]
The 16S rRNA gene sequencing data utilized in the present study are available on NCBI under the following project accessions: PRJNA262938, PRJNA268940, PRJNA248749, PRJNA251746, PRJNA375701, PRJEB1468 and PRJEB10822. The code used for the processing and data analysis of the datasets is available at: https://github.com/GeoMicroSoares/mads_scripts .

Understanding the distribution of microbial diversity is pivotal for advancing our knowledge of deep subsurface global biogeochemical cycles. Subsurface biomass is suggested to have exceeded that of the Earth's surface by an order of magnitude (~45 % of Earth's total biomass) before land plants evolved, ca. 0.5 billion years ago. Integrative modelling of cell count and quantitative PCR (qPCR) data and geophysical factors indicated in late 2018 that the bacterial and archaeal biomass found in the global deep subsurface may range from 23 to 31 petagrams of carbon (PgC). These values halved estimates from efforts earlier that year but maintained the notion that the terrestrial deep subsurface holds ca. 5-fold more bacterial and archaeal biomass than the deep marine subsurface. Further, it is expected that 20–80 % of the possible 2–6×10^29 prokaryotic cells present in the terrestrial subterranean biome exist as biofilms and play crucial roles in global biogeochemical cycles.

Cataloguing microbial diversity and functionality in the terrestrial deep subsurface has mostly been achieved by means of marker gene and metagenome sequencing from aquifers associated with coals, sandstones, carbonates and clays, as well as deep igneous and metamorphic rocks. Only recently has the first comprehensive database of 16S rRNA gene-based studies targeting terrestrial subsurface environments been compiled. This work focused on updating estimates for bacterial and archaeal biomass and cell numbers across the terrestrial deep subsurface, but also linked the identified bacterial and archaeal phylum-level compositions to host-rock type, and to 16S rRNA gene region primer targets. While highlighting Firmicutes and Proteobacterial dominance in the bacterial component of the terrestrial deep subsurface, no further taxonomic insights emerged. Genus-level identification remains an important niche necessary for understanding community composition, inferred metabolism and hence microbial contributions of distinct community members to biogeochemical cycling in the deep subsurface. Indeed, such genus-specific traits have been demonstrated to be critical for understanding crucial biological functions in other microbiomes, and genus-specific functions of relevance for deep subsurface biogeochemistry are clear.

So far, the potential biogeochemical impacts of microbial activity in the deep subsurface have been inferred through shotgun metagenomics, as well as from incubation experiments of primary geological samples amended with molecules or minerals of interest. Recent studies of deep terrestrial subsurface microbial communities further suggest that these are metabolically active, often associated with novel uncultured phyla, and potentially directly involved in carbon and sulphur cycling. Concomitant advancements in subsurface drilling, molecular methods and computational techniques have aided exploration of the subsurface biosphere, but serious challenges remain, mostly related to deciphering sample contamination by drilling methods, community interactions with reactive casing materials and sample transportation to laboratories for processing.
The logistical challenges inherent in accessing and recovering in situ samples from hundreds to thousands of metres below the surface complicate our view of terrestrial subsurface microbial ecology. In this study, we capitalize on the increased availability of 16S rRNA gene amplicon data from multiple studies of the terrestrial deep subsurface conducted over the last decade. We apply bespoke bioinformatics scripts to generate insights into the microbial community structure and controls upon bacterial microbiomes of the terrestrial deep subsurface across a large distribution of habitat types on multiple continents. The deep biosphere is as yet undefined as a biome – elevated temperature, anoxic conditions, varying levels of organic carbon, and measures of isolation from the surface photosphere are some of the criteria used, albeit without a consensus. For this work, a more general approach has been taken, defining the terrestrial deep subsurface for the purposes of this initial examination as the zone at least 100 m below the surface.

Data acquisition

The Sequence Read Archive database of the National Center for Biotechnology Information (SRA-NCBI) was queried for 16S rRNA-based deep subsurface datasets (excluding marine and ice samples, as well as any human-impacted samples); available studies were downloaded using the SRA Run Selector. Studies were selected considering their metadata and information on the sequencing platform used, i.e. only samples derived from 454 pyrosequencing and Illumina sequencing were considered. Due to a lack of publicly available Illumina datasets targeting the environments of interest, only 454 pyrosequencing datasets were retained. Analysis of related literature resulted in the detection of other deposited studies that previous search efforts in NCBI-SRA had failed to detect, and further private contacts allowed access to unpublished data included in this study. The final list of NCBI accession numbers, totalling 222 samples, was downloaded using fastq-dump from the SRA toolkit (https://hpc.nih.gov/apps/sratoolkit.html). Required metadata included host-rock lithology, general and specific geographical locations, depth of sampling, DNA extraction method, sequenced 16S rRNA gene region and sequencing method. Any samples for which the above-mentioned metadata could not be found were discarded and not considered for downstream analyses.

Pre-processing of 16S rRNA gene datasets

A customized pipeline was created in bash, making use of python scripts developed for QIIME v1.9.1, to facilitate the bioinformatic analyses in this study (see https://github.com/GeoMicroSoares/mads_scripts for scripts). Briefly, demultiplexed FASTQ files were processed to create an operational taxonomic unit (OTU) table. Quality control steps involved trimming, quality-filtering and chimera checking by means of USEARCH 6.1. Sequence data that passed quality control were then subjected to closed-reference (CR) OTU-picking on a per-study basis using UCLUST and reverse strand matching against the silva v123 taxonomic references (https://www.arb-silva.de/documentation/release-123/). CR OTU picking excludes OTUs whose taxonomy is not found in the 16S rRNA gene database used. Although this limits the recovery of prokaryotic diversity to that recorded in the database, it makes possible cross-study comparisons of bacterial communities generated by different 16S rRNA gene primers.
This conservative approach classified OTUs in each study individually to the common 16S rRNA gene reference database from the merging of all classification outputs. A single BIOM (Biological Observation Matrix) file was generated using QIIME's merge_otu_tables.py script. The BIOM file was then filtered to exclude samples represented by fewer than two OTUs using filter_samples_from_otu_table.py, as well as OTUs represented by one sequence (singleton OTUs) by using filter_otus_from_otu_table.py. In an attempt to reduce the impacts of potential contaminant OTUs from the dataset, the post-singleton filtered dataset was further filtered to include only OTUs represented by at least 500 sequences and present in at least 10 samples overall, using filter_otus_from_otu_table.py.

Data analysis

All downstream analyses were conducted using the phyloseq (https://github.com/joey711/phyloseq) package within R, which allowed for simple handling of metadata and taxonomy and abundance data. Merged and filtered BIOM files were imported into R using internal phyloseq functions, which allowed further filtering, transformation and plotting of the dataset (see https://github.com/GeoMicroSoares/mads_scripts for scripts). Briefly, following a general assessment of the number of reads across samples and OTUs, tax_glom (phyloseq) allowed the agglomeration of the OTU table at the phylum level. For the metadata category-directed analyses, the function merge_samples (phyloseq) created averaged OTU tables, which permitted testing of hypotheses for whether geology or depth had significant impacts on bacterial community structure and composition. Computation of a Jensen–Shannon divergence PCoA (principal coordinate analysis) was achieved with ordinate (phyloseq), which makes use of metaMDS (vegan). All figures were plotted via the ggplot2 R package (https://github.com/tidyverse/ggplot2), except for the UpsetR plot in Fig. S4, which was plotted with the package UpsetR (https://github.com/hms-dbmi/UpSetR).
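For illustration, a minimal R sketch of the downstream phyloseq workflow just described; the file names, metadata column names and rank labels are placeholders assumed for this example, not the study's actual ones, and adonis2 is used as the current vegan interface for the PERMANOVA:

```r
# Minimal sketch of the described phyloseq workflow (assumed file/column names).
library(phyloseq)
library(vegan)
library(ggplot2)

ps <- import_biom("merged_otu_table.biom")       # merged QIIME BIOM file
meta <- read.csv("metadata.csv", row.names = 1)  # lithology, depth, etc.
sample_data(ps) <- sample_data(meta)

# Remove singleton OTUs, then apply the stricter contamination-aware filter:
# keep OTUs with >= 500 reads overall that occur in >= 10 samples.
ps <- filter_taxa(ps, function(x) sum(x) > 1, prune = TRUE)
ps <- filter_taxa(ps, function(x) sum(x) >= 500 && sum(x > 0) >= 10, prune = TRUE)

# Drop samples represented by fewer than two OTUs.
otu <- as(otu_table(ps), "matrix")
if (!taxa_are_rows(ps)) otu <- t(otu)           # orient as taxa x samples
ps <- prune_samples(colSums(otu > 0) >= 2, ps)

# Agglomerate at phylum level (silva BIOM imports typically label ranks
# Rank1..Rank7, with Rank2 corresponding to phylum).
ps_phylum <- tax_glom(ps, taxrank = "Rank2")

# Jensen-Shannon divergence ordination and PERMANOVA on lithology and depth.
ord <- ordinate(ps, method = "PCoA", distance = "jsd")
plot_ordination(ps, ord, color = "lithology")

jsd <- phyloseq::distance(ps, method = "jsd")
adonis2(jsd ~ lithology + depth, data = as(sample_data(ps), "data.frame"),
        permutations = 999)
```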
A total of 233 publicly available subsurface samples targeting multiple 16S rRNA gene hypervariable regions, originating in nine countries, were originally downloaded from the NCBI SRA database. These accounted for 24 632 035 chimera-checked sequences, which underwent silva 123-aided CR OTU-picking.
The discovery of 46 OTUs classified as Chloroplast (Cyanobacteria) and phototrophic members of the phyla Chloroflexi and Chlorobi, as well as the orders Rhodospirillales and Chromatiales (Alpha- and Gammaproteobacteria classes, respectively), justified the use of additional, stricter contamination-aware filtering (see Methodology and Table S1 for differences in numbers of reads between methods). The final dataset consisted of 70 samples and 2207 OTUs (513 929 sequences). Seventeen aquifers were included, associated with either sedimentary or crystalline host rocks, from depths spanning 94–2300 m below the land surface, targeting mostly groundwater across five countries (Table S2). Nine DNA extraction techniques were used in these studies, ranging from standard and modified kit protocols (e.g. MOBIO PowerSoil) to phenol–chloroform and CTAB/NaCl-based methods. Six different primer pairs, amplifying various regions of the 16S rRNA gene with 454 pyrosequencing technology, were used to generate the datasets (see Fig. S1). Metadata variables that were unavailable for all samples in the dataset were excluded from the statistical analyses. All studies followed aseptic sample handling protocols and included DNA extraction and PCR controls (for further information, see the Methods sections of the source papers), as per recommended guidelines for the subsurface microbiology community.

Among a total of 45 detected bacterial phyla, Proteobacteria were seen to dominate most deep subsurface community profiles in this dataset. The most abundant proteobacterial classes (Alpha-, Beta-, Delta-, Gammaproteobacteria) represented 57.2 % of the total number of reads. Betaproteobacteria, chiefly represented by the order Burkholderiales, accounted for 26.1 % of all reads in the dataset. The order Burkholderiales was the main component of some host-rocks, accounting for up to 59.5 and 92.7 % of host-rock-level relative abundance profiles for biotite-gneiss and chlorite-sericite-schist, respectively (see Fig. S2 for standard deviations), and co-dominated others. Gammaproteobacteria and Clostridia (Firmicutes) were key components of other profiles. Clostridia and other Firmicutes accounted for large fractions of sedimentary host-rocks (dolomite, siltstone and shale) and a haematite iron formation. Finally, Actinobacteria was the most abundant taxonomic group in rhyolite-tuff-breccia.

Analysis of prevalence across the dataset revealed that seven OTUs, all affiliated with the genus Pseudomonas, were present in more than 25 and up to 41 samples, accounting for 18 149 reads (3.5 % of the total reads; see Table S3). Other bacterial orders, namely Burkholderiales, Alteromonadales and Clostridiales (Betaproteobacteria, Gammaproteobacteria, Clostridia), were also highly prevalent throughout. Network analysis highlighted a Pseudomonas OTU highly connected to other OTUs in the dataset. Furthermore, BLAST results indicated that recovered sequences for OTUs affiliated with this genus were generally associated with marine and terrestrial soils and sediments (see Fig. S3, Table S4). Four OTUs affiliated to Burkholderiales (Betaproteobacteria), the second most prevalent order in the dataset, were also found to be connected to up to 34 other OTUs. The genus Thauera (Betaproteobacteria, Rhodocyclales), represented by a single OTU, was the second most central to the dataset.
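A short sketch of how the prevalence ranking described above can be computed, again assuming the filtered phyloseq object ps from the earlier sketch (the genus rank label is an assumption for silva-style BIOM imports):

```r
# Prevalence = number of samples in which each OTU is detected.
library(phyloseq)

otu <- as(otu_table(ps), "matrix")
if (!taxa_are_rows(ps)) otu <- t(otu)

prev <- data.frame(
  prevalence  = rowSums(otu > 0),                        # samples with the OTU
  total_reads = rowSums(otu),                            # reads across dataset
  genus       = as(tax_table(ps), "matrix")[, "Rank6"]   # assumed genus rank
)

# OTUs present in the most samples (e.g. the Pseudomonas OTUs reported above).
head(prev[order(-prev$prevalence), ], 10)
```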
While relative abundance patterns across the dataset ( ) indicate that lithology could influence microbial community composition and structure, sample sizes for each host-rock in the final dataset were insufficient to provide robust statistical support of that hypothesis. Despite this, host-rocks (10 out of 15) presented, on average, more unique OTUs than they shared with other host-rocks (Fig. S4). In particular, in sulphide-rich schists, 73 % of the OTUs were, on average, unique to the host-rock. Sub-bituminous and volatile bituminous coals shared a total of 143 OTUs; this was the strongest interaction between host-rocks in the dataset. No significant correlations were found for the presence of the most abundant clades in the dataset and depth, Actinobacteria being the only major taxonomic group to have a positive, albeit weak, correlation with depth (Pearson’s r =0.42, P <0.01, Fig. S5). Proportions of Beta - and Gammaproteobacteria generally decreased with depth (Pearson’s r =−0.29 and −0.093, respectively), but no other major clades were shown to correlate. Ordination of the final dataset further suggests 50.6 % of Jensen–Shannon distances were significantly explained by aquifer lithology (ADONIS/PERMANOVA, F-statistic=4.65, P <0.001, adjusted Bonferroni correction P <0.001). Other environmental features such as absolute depth and medium-scale location (i.e. state, region of the sampling site) explained only 3.08 and 2.78 % of the significant metadata-driven variance in bacterial community structure, respectively (ADONIS/PERMANOVA, F-statistic=3.95, 3.57, P <0.001, adjusted Bonferroni correction P <0.001). Finally, no evidence was found for DNA extraction or 16S rRNA gene region significantly affecting bacterial community structure in this meta-analysis (ADONIS/PERMANOVA, F-statistic=3.85, 3.23, P <0.01, adjusted Bonferroni correction P <0.001). The deep biosphere is an active, diverse biome still largely under-investigated in terms of the Earth’s biogeochemistry . In this study, publicly available 16S rRNA gene data revealed a prevalence of Betaproteobacteria and Gammaproteobacteria in the deep biosphere that may be explained by the diverse metabolic capabilities of taxa within these clades. The families Gallionellaceae, Pseudomonadaceae , Rhodocyclaceae and Hydrogeniphillaceae within Betaproteobacteria and Gammaproteobacteria are suggested to play critical roles in deep subsurface iron, nitrogen, sulphur and carbon cycling across the world . The relative abundance of the order Burkholderiales ( Betaproteobacteria ) in surficial soils has previously been correlated ( R 2 =0.92, ANOVA P <0.005) with mineral dissolution rates, while the genus Pseudomonas ( Gammaproteobacteria ) is widely known to play a key role in hydrocarbon degradation, denitrification and coal solubilization in different locations . The dominance of Betaproteobacteria and Gammaproteobacteria in coals builds on culture-based evidence of widespread degradation of coal-associated complex organic compounds by these classes . The metabolic plasticity of the orders Pseudomonadales and Burkholderiales has been demonstrated and may be a catalyst for their apparent centrality across the terrestrial deep subsurface microbiomes analysed in this study . These bacterial orders may represent important keystone taxa in microbial consortia responsible for providing critical substrates to other colonizers in deep subsurface environments . 
In particular, given the number of highly central Pseudomonas-affiliated OTUs and the prevalence of this genus in the dataset, we suggest that this genus may play a central role in establishing conditions for microbial colonization in many terrestrial subsurface environments. The genus Pseudomonas and possibly several members of Burkholderiales may therefore comprise an important component of the global core terrestrial deep subsurface bacterial community. Geographically comprehensive RNA-based approaches should in the future investigate the potential roles of the genus Pseudomonas and the order Burkholderiales in this biome.

The class Clostridia was found to be prevalent across the dataset and to dominate in sedimentary host-rocks (dolomite, siltstone and shale) in this study. This class includes anaerobic hydrogen-driven sulphate reducers also known to sporulate and metabolize a wide range of organic carbon compounds. Previously, members of Clostridia have also been identified as dominant components in extremely deep subsurface ecosystems beneath South Africa, Siberia and California (USA), from metabasaltic and metasedimentary lithologies. Adaptation to extreme environments in this class has been associated with diverse metabolic capabilities that include sporulation ability and the capacity for CO2- or sulphur-based autotrophic H2-dependent growth.

In this study, network analysis and prevalence values suggested roles of putative importance for the classes Betaproteobacteria, Gammaproteobacteria and Clostridia in the deep terrestrial subsurface (Table S3). Their maintained presence in this biome across strikingly dissimilar host-rocks and depths could be indicative of higher metabolic plasticity, providing physiological advantages over other members of microbial communities. Lithotrophic microbial metabolisms and mineralogy-driven microbial colonization of relatively inert lithologies have previously been demonstrated, with low-abundance but more reactive minerals within rock matrices often cited as key controls on community structure. Limiting factors for life in the terrestrial deep subsurface, such as pressure and temperature, are more closely correlated with depth. Growth of bacterial isolates from the deep subsurface has been documented at up to 48 MPa and 50 °C and has been associated with production of extracellular polymeric substances (EPS). However, robust conclusions on the effects of lithology or depth on the structure and composition of microbial communities across Earth's crust have remained a widespread challenge for science, as in this study, owing to the small and varied sample sizes resulting from the contamination-aware filtering process and the limited number of comparable lithology types.

Large-scale evidence for the roles of eukaryotes, bacteria, archaea and viruses in the deep terrestrial subsurface, and for the environmental controls over their occurrence in this biome, is still lacking. We recommend a field-level research strategy to gain insights into these aspects of life within Earth's crust. Larger scale collation of data from samples collected and processed using unified, reproducible workflows will be cognizant of the significant potential for contamination and ultimately allow robust insights into wide-ranging microbial metabolic processes in the terrestrial subsurface. Collecting contamination-free samples from the deep subsurface is difficult but important for cataloguing the authentic microbial diversity of the terrestrial subsurface.
This study follows recent recommendations for the downstream processing of contamination-prone samples originating in the deep subsurface (Census of Deep Life project – https://deepcarbon.net/tag/census-deep-life ), where physical, chemical and biological, but also in silico bioinformatics, strategies to prevent erroneous conclusions have been highlighted. This study also follows frequency-based OTU filtration techniques similar to those recommended previously and designed to remove possible contaminants introduced during sampling or during the various steps of sample processing. The pre-emptive quality control steps undertaken here support a non-contaminant origin for the taxa analysed in this dataset. As such, the predominance of typically contaminant-associated taxa affiliated, for example, to the genus Pseudomonas is accepted as representative of the microbial ecology of the terrestrial deep subsurface.

Standardizing sampling, DNA extraction, sequencing and bioinformatics methods and strategies across the subsurface research community would help further reduce methodology-based variation. This would more efficiently permit re-analyses after collection, where methodological variations would be controlled, and robust overarching conclusions would more easily be achieved. Despite this, host-rock matrices and local geochemical conditions often pose unique challenges that require particular protocol adjustments. In the near future, the advent of recently developed techniques for primer bias-free, long-read 16S rRNA and 16S rRNA–ITS gene amplicon sequencing may initiate a convergence of molecular methods from which the deep subsurface microbiology community would benefit greatly. The future of large-scale, collaborative deep subsurface microbial diversity studies should encompass not only an effort towards the standardization of several molecular biology techniques but also the long-term archival of samples. Finally, the ecology of the domains Eukarya and Archaea across the terrestrial deep subsurface remains generally under-characterized and requires future attention.

This study presents an important first step towards characterizing bacterial community structure and composition in the terrestrial deep biosphere. A global-scale meta-analysis addressing the available 16S rRNA gene-based studies of the deep terrestrial subsurface revealed a dominance of Betaproteobacteria, Gammaproteobacteria and Firmicutes across this biome. Evidence for a core terrestrial deep subsurface microbiome population was recognized through the prevalence and centrality of the genus Pseudomonas (Gammaproteobacteria) and several other genera affiliated with the class Betaproteobacteria. The adaptable metabolic capabilities associated with the above-mentioned taxa may be critical for colonizing the deep subsurface and sustaining communities therein.

The terrestrial deep subsurface is a hard-to-reach, complex ecosystem crucial to global biogeochemical cycles. Efforts by multiple teams of investigators to sequence subsurface ecosystems over the last decade were consolidated here to characterize the 12–20 % of global biomass this biome represents. The strict contamination-aware filtering process applied whittled down the publicly available datasets representing terrestrial subsurface bacterial diversity to just 70 samples from two continents, indicating the need for systematic exploration of biodiversity within this major component of the biosphere.
As a first step, this study consolidates a global-scale understanding of taxonomic trends underpinning a major component of terrestrial deep subsurface microbial ecology and biogeochemistry.
Interview Invitations for Otolaryngology Residency Positions Across Demographic Groups Following Implementation of Preference Signaling
f79c863b-784f-4a7f-a268-bdf524cf929c
9993176
Otolaryngology[mh]
During the residency application process, most otolaryngology–head and neck surgery (OHNS) applicants are eliminated from consideration during the interview selection phase. Prior to the initiation of preference signaling, there was no formal process to consider applicant preferences while programs were making interview selection decisions. The challenge of aligning applicant and program interests during the interview selection phase has been exacerbated by a surge of applications. Within OHNS, students submitted a mean of 84 residency applications in the 2022 National Resident Matching cycle, a 25% increase over the past 5 years. Furthermore, the number of OHNS applicants has increased, resulting in a doubling of applications received by programs. This increase in applications challenges the ability of programs to select from hundreds of applicants and may result in programs relying on algorithms and numerical screening metrics, including US Medical Licensing Examination (USMLE) scores. USMLE Step 1 is a licensure examination, and scores are neither designed for use in selection decisions nor associated with residency performance. Within OHNS, an overemphasis on USMLE scores may result in disproportionately low recruitment of applicants who identify as women and as underrepresented in medicine (URM), defined as American Indian or Alaska Native; Black or African American; Hispanic, Latino, or of Spanish origin; or Native Hawaiian or other Pacific Islander.

Residency application review also occurs in an environment of informal signaling, with the potential to exacerbate inequities. Applicants have differential access to mentors who advocate on their behalf or guide them through effective avenues for expressing interest prior to interview selection. In the absence of formal signals, residency selection committees may infer applicant preference based on perceived geographic ties, prior training institutions, or other factors subject to the bias of the committee.

Preference signaling was implemented in OHNS with the goals of mitigating a surge in applications, aligning program and applicant interests during the interview selection phase, and enhancing the capacity for holistic review, the preferred method for candidate assessment. While preference signaling had not previously been used in the residency application process, this system was developed and implemented in the economics PhD marketplace, and several authors have advocated for this approach during the residency selection process. Preference signaling in OHNS was evaluated with surveys sent to program directors and OHNS applicants in the 2021 National Resident Matching cycle, demonstrating a significant association between preference signals and interview selection rate. Additionally, signaling was found to be popular among both applicants and program directors. However, these data are limited by survey response rates of 42% for applicants and 52% for programs. Following this initial experience, the use of preference signals during the residency application process has expanded greatly: in the 2022 National Resident Matching cycle, the Association of American Medical Colleges (AAMC) and Electronic Residency Application Service (ERAS) offered preference signaling through a supplemental application for 3 specialties (general surgery, dermatology, and internal medicine) and now offers this service to 15 specialties in the 2023 National Resident Matching program.
Urology adopted preference signaling in the 2022 National Resident Matching cycle and, along with OHNS, continues this program independent of AAMC and ERAS. With 17 specialties participating in preference signaling, more than 80% of residency applicants are anticipated to apply to specialties that use preference signaling. New initiatives with uncertain outcomes across demographic groups must be evaluated to prevent exacerbation of existing disparities and, ideally, to contribute to reducing disparities. This is particularly important for OHNS where, despite an increase in medical school matriculation for women and students identifying as URM, the workforce lacks gender and ethnic diversity within residency programs and among practicing physicians. Widespread adoption of preference signaling necessitates a deeper assessment of this system, including the results of signaling across demographic groups.

The goal of this study is to validate the survey-based data on the association between signals and interview offer rate and to understand how this association varies across demographic groups and USMLE Step 1 scores. Although Step 1 has moved to pass or fail, inclusion of this metric provides a historical record of how scores were used and may help inform approaches for future residency selection cycles.

This cross-sectional study was approved by the American Institutes for Research Review of Safeguards for Human Subjects. The study reported aggregate deidentified results; therefore, no consent was required by the institutional review board. The report follows the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.

During the 2021 National Resident Matching cycle, OHNS applicants were provided 5 signals to send to programs of particular interest. A website platform was created by the Otolaryngology Program Directors Organization (OPDO) council to disseminate guidance to applicants, provide best practice recommendations to programs, and collect signal submissions. Website creation, signal collection, and signal distribution were performed by existing OPDO staff; participation in signaling was free for both applicants and programs. Applicants were instructed not to signal their home program nor a program where they had completed an in-person away rotation within the current academic cycle. In the inaugural year of the OHNS signaling program, 100% of residency tracks (125 programs) participated in signaling. Some programs with multiple tracks (eg, research, clinical) chose to have a single signaling option for their institution, while others requested a separate signaling opportunity for each track. For research purposes, research track signals and clinical track signals to the same institution were counted toward the parent program aggregately, resulting in 118 programs in the study, a 100% participation rate. Of 636 OHNS applicants in 2021, 548 unique applicants (86%) participated in signaling.

Preference signal data were linked to ERAS data using the applicants' AAMC ID numbers. The association between preference signals and the likelihood of being selected for interview was analyzed for the entire applicant cohort as well as by gender and self-identified URM status. URM was defined as applicants who self-identified as 1 or more of the following racial and ethnic categories: American Indian or Alaska Native; Black or African American; Hispanic, Latino, or of Spanish origin; or Native Hawaiian or other Pacific Islander.
Applicants were divided into terciles based on their most recent USMLE Step 1 score. Because preference signals are designed to improve the interview selection process, interview selection rate was chosen as the primary outcome measure. These data were obtained from the ERAS Program Directors Workstation (PDWS), which contains a "selected for interview" status as an optional metric within its application tracking parameters. Many programs use methods other than ERAS to invite applicants for interviews; programs with incomplete selected-for-interview data were excluded. Based on feedback from OPDO members, programs with incomplete interview selection data were defined as those that designated fewer than 7 applicants per available match position as selected for interview. Removal of programs with absent or incomplete selected-for-interview data resulted in a final sample of 85 programs for analysis. Applications to home programs result in interview selection at a very high rate and were removed from this analysis to prevent artificial inflation of the interview selection rate to nonsignaled programs.

Statistical Analysis

We conducted a series of logistic regression analyses at the individual program level. Analyses were conducted separately for each program and by applicant group because programs differ in how signals were incorporated into their selection process and the characteristics of each applicant group may differ. To complete a regression analysis, programs need an adequate distribution of signal and nonsignal applications, as well as adequate numbers of women or URM applicants. Programs that lacked sufficient data for regression analysis comparing gender or URM status were excluded, resulting in 3 distinct samples to analyze this association with respect to gender, URM status, and the entire cohort. These 3 program cohorts were largely representative of OHNS programs overall, although the analytic sample programs received more applications, somewhat overrepresent programs in the highest mean USMLE Step 1 score range, and overrepresent larger programs (eTable in the Supplement).

Each program within the 3 program cohorts (overall, gender, and URM status) was evaluated with 2 models. Model 1 explored the association between applicants' signal status and interview invitation status. Signal status (coded as 0, indicating did not send a signal to the program, and 1, sent a signal to the program) and interview selection (coded as 0, indicating did not receive an interview selection from the program, and 1, received an interview selection from the program) were treated as binary variables. Model 2 explored the association between signal status and interview invitation status while accounting for the most recent USMLE Step 1 score. For the regression analyses in model 2, USMLE Step 1 scores were treated as a continuous covariate. However, for simplicity of presentation, probability results are displayed for 3 USMLE Step 1 score tercile categories, with each tercile corresponding to a range of scores that divides the applicant pool into the bottom, middle, and top third of scores. Results were aggregated across programs by computing the median probability of being selected for interview and the median 95% CIs across programs. The SD and the minimum and maximum estimated probabilities also are reported. P values were 2-sided, and statistical significance was set at P = .05. Analyses were conducted using R version 4.1.3 (R Project for Statistical Computing). Data were analyzed from June to July 2022.
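A minimal sketch of the two models described above, using illustrative object and column names rather than the study's actual code; apps is assumed to hold one row per application, with binary signal and interview indicators and the applicant's most recent Step 1 score:

```r
# Per-program logistic regressions for Model 1 (signal only) and
# Model 2 (signal adjusted for USMLE Step 1 score).
library(dplyr)

fit_program <- function(df) {
  m1 <- glm(interview ~ signal, family = binomial, data = df)
  m2 <- glm(interview ~ signal + step1, family = binomial, data = df)
  # Predicted probability of interview selection without/with a signal;
  # Model 2 evaluated at the program's median Step 1 score.
  p1 <- predict(m1, data.frame(signal = 0:1), type = "response")
  p2 <- predict(m2, data.frame(signal = 0:1,
                               step1 = median(df$step1, na.rm = TRUE)),
                type = "response")
  data.frame(m1_no = p1[1], m1_yes = p1[2], m2_no = p2[1], m2_yes = p2[2])
}

per_program <- apps %>%
  group_by(program) %>%
  group_modify(~ fit_program(.x)) %>%
  ungroup()

# Aggregate across programs as described in the text: median estimated
# probabilities (the study also reports median 95% CIs, SDs, and ranges).
summarise(per_program, across(m1_no:m2_yes, median))
```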
Of 636 US OHNS applicants, 548 (86%) participated in signaling, including 337 men (61%) and 86 applicants who identified as URM (16%). The mean, median, SD, and range for key variables in the overall analytic sample are summarized in the Table. US MD applicants were more likely to participate in signaling (93%) than international medical graduates (46%) or DO applicants (59%). Participating programs received a mean (SD) of 376.3 (94.5) applications and 27.0 (18.3) signals and offered 49.5 (16.6) interviews. The distribution of signals was skewed, with 25% of programs receiving 50% of signals. The mean number of applications submitted by otolaryngology applicants increased in the first year of signaling, from 68.8 in the 2020 Match cycle to 72.8 in the 2021 Match cycle. The selected-for-interview rate was low among participating programs (median [range], 13% [4%-30%] of applicants; mean [SD], 13% [4.3%]). Of 548 participants in the study sample, 29 were not selected for interview by any of the 85 programs in the study sample.
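As a consistency check on the signal economy (simple arithmetic from the figures above, assuming every participant used all 5 signals, which is an upper bound):

$$548\ \text{applicants} \times 5\ \text{signals} = 2{,}740\ \text{signals}, \qquad \frac{2{,}740}{118\ \text{programs}} \approx 23\ \text{signals per program}$$

The mean of 27.0 signals among the analyzed programs is consistent with this, given that the analytic sample overrepresents larger programs, and the skewed distribution (25% of programs receiving 50% of signals) implies that many programs received far fewer.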
Preference Signals and Interview Invitations Applications with a signal were significantly more likely to be selected for interview than nonsignal applications (48% [95% CI, 27%-68%] vs 10% [95% CI, 7%-13%]; P < .01) ( A). An increased interview selection rate associated with signals was found across gender ( B) and self-reported URM status ( C). There were no statistically significant differences between the median interview selection rates with or without signals when comparing male (46% [95% CI, 24%-71%] vs 7% [95% CI, 5%-12%]) and female (50% [95% CI, 20%-80%] vs 12% [95% CI, 8%-18%]) applicants. Interview selection rates for signaling URM applicants (53% [95% CI, 16%-88%]) were similar to those for non-URM signaling applicants (49% [95% CI, 32%-68%]). Furthermore, interview rates for nonsignaling URM applicants (15% [95% CI, 8%-26%]) were also similar to rates among nonsignaling non-URM applicants (8% [95% CI, 5%-12%]). There was considerable variability in the association of signals and selection for interview between programs (mean, 47.9%; SD, 19%; range, 11%-92%). Preference Signals and Interview Offer Rate Across Demographic Groups, Stratified by USMLE Score Signals were associated with a marked increase in the likelihood of being selected for interview across all groups and USMLE Step 1 score categories ( ). Applications with a signal and a bottom tercile USMLE Step 1 score had the same likelihood of being selected for interview (14%) as the top tercile of USMLE Step 1 applications without a signal. Women applicants and applicants identifying as URM in the top tercile of USMLE Step 1 scores had the highest likelihood of receiving an interview selection ( B and C), but interview selection rates were not statistically different from those of men and non-URM applicants.
This cross-sectional study evaluates interview selection rates with respect to signals, applicant demographics, and USMLE Step 1 scores. Consistent with the self-reported applicant survey data, signals were associated with increased likelihood of being selected for interview. Along with validating survey-based data demonstrating this correlation, this study found that the positive association between signaling and interview selection rate held across demographic groups. There were no statistically significant differences observed in the interview offer rates associated with signal or nonsignal applications across gender or self-reported URM status. Our study found lower participation rates in the signaling program by applicants from osteopathic schools and international medical graduates. USMLE scores were also positively associated with interview selection rate, and signals were associated with an increase in the interview selection rate across USMLE score ranges. USMLE scores are frequently used to screen applications during interview selection, a practice that may exacerbate the lack of diversity within OHNS residency programs. For an individual application, the presence of a signal may help mitigate the emphasis on USMLE scores in the interview selection process. Applications with a signal and a bottom tercile USMLE Step 1 score had the same likelihood of being selected for interview as the top tercile of USMLE Step 1 applications without a signal (14%). One goal of signaling is to mitigate the challenges associated with increased application numbers. While signals were not associated with a decrease in the number of applications submitted per applicant, they do provide a tool that may simplify the application review process. For example, many programs report using signals as a tie-breaker when selecting students to invite for interviews. Future iterations of signaling with high signal numbers may impact application numbers. Implementation of a 30-signal program in orthopedic surgery was associated with an 11% decrease in the mean number of applications submitted per student. Limitations This study has several limitations. First, the study does not explore the association between signals and interview offers for many applicant types that may be considered URM, including first-generation applicants, low-income applicants, and specific racial or ethnic groups within the URM designation. Additionally, this study treats gender as binary and does not explore intersectional identities. Limited samples of these applicant types precluded such analysis in our study. The data set for this analysis is incomplete. Not all programs entered interview selection information into ERAS. For those that did enter data into the selected for interview category, the completeness of these data is not known. To provide meaningful comparisons between demographic subgroups, we restricted our analyses to programs receiving an adequate number of signals from both men and women or both URM and non-URM applicants to complete a regression analysis. The relatively small proportion of applicants who identified as URM resulted in a smaller sample size for these evaluations and large CIs around the reported estimated probabilities. Therefore, it is important not to overinterpret small differences and to study the validity of results as more data become available. We did not assess variability in how individual programs may interpret, use, or otherwise value signals. 
Although rates of concordance between signal and selection for interview status varied widely among programs, signals are an indication of applicant interest, not applicant congruence with program qualifications. Therefore, when programs decline to invite applicants who signal for an interview, our data provide no insight into whether this was due to a lack of signal value or a lack of program prioritization of an individual applicant. The likelihood of being selected for interview by signaled programs is not only associated with the presence of a signal, but also with preexisting factors that drive the applicant’s interest: geography, alignment of clinical and research training and career goals, and department culture. In our 2022 survey-based signal analysis, these confounding factors were mitigated by identifying a comparable nonsignal program, ie, the program that the applicant would have signaled had they been provided with 1 additional signal. In this study, comparable nonsignal program data were not available, so interview selection rate may be confounded by potential differences in the programs that applicants selected to signal relative to those that they did not signal. Data from the OHNS signaling survey suggest only a modest increase in the interview selection rate for the comparable nonsignal program relative to selection rate overall for nonsignal programs (23% vs 14%) with a much higher interview selection rate for signal programs (58%). Data from this study may not be replicated in other specialties that implement preference signaling. OHNS is a small surgical subspecialty with a 63% match rate and no unmatched residency slots in the 2021 National Resident Matching cycle. These characteristics vary significantly from many of the programs that will be participating in preference signaling during the 2023 National Resident Matching cycle. Additionally, this is a retrospective study that makes use of data from previous admissions and selection cycles with different selection metrics available for evaluation. USMLE Step 1 scores were included to provide a context in the selection process at the time; inclusion of this data does not endorse use of USMLE scores for admissions and selection decisions by programs. Applicants were instructed not to signal their home program or programs where they completed an in-person visiting rotation in the same academic year. Home programs were excluded from the nonsignal category, but it was not possible to identify which students completed an in-person visiting rotation, so these programs were included within the nonsignal group, likely artificially inflating the interview selection rate within this group. The potential impact from this is believed to be modest, as more than 80% of applicants identified a home program and were not eligible to complete a visiting rotation during this first year of the COVID-19 pandemic. Categorization of visiting rotation programs as nonsignal programs would bias the results of this study by decreasing the association between signals and selected for interview status; our study found a robust association despite this potential miscategorization.
This cross-sectional study found that preference signals were associated with a higher likelihood of OHNS residency applicants being selected for interview by signaled programs. This association was robust and present across the demographic categories of gender and self-identified URM status. Future signaling programs should provide educational outreach to international medical graduates and applicants from osteopathic schools. Additional research is needed to explore the effect of signaling across a broad range of specialties and on later-stage outcomes of the National Resident Matching cycle, including inclusion and position on rank order lists and match outcome.
Hepatitis B sero-prevalence among hematology patients: importance of Anti-HbcAb and efficiency of antiviral prophylaxis
6af57f63-f6aa-44b9-9c0d-ee76a9d4e083
9993312
Internal Medicine[mh]
HBV infection is one of the most widespread viral infections. About 350 million people worldwide have a diagnosis of chronic hepatitis B . Each year, an estimated one million people die from complications of chronic HBV infection, such as cirrhosis, end-stage liver disease, and hepatocellular carcinoma . Turkey is an intermediate-endemicity country for HBV infection, with HBsAg and anti-HBs prevalences of 4.0% and 31.9%, respectively . The prevalence varies widely across geographic regions, from 2.3% in the Aegean to 7.3% in Southeastern Anatolia . In the same study, isolated anti-HBc positivity was 4.6%, whereas combined anti-HBs and anti-HBc positivity was 22.0%. Patients with anti-HBc positivity represent those with past exposure to HBV, and their importance lies in the high risk of reactivation during immunosuppression, such as cancer chemotherapy, immunosuppressive or biological treatment, solid organ transplantation, or bone marrow transplantation, since no fully curative treatment for HBV infection yet exists . Bone marrow transplantation (BMT) has become an important and curative therapy for hematological disorders, but it also carries a high risk of morbidity and mortality through viral reactivation in patients who encountered the virus before immunosuppressive treatment. The same risk applies when anti-HBc-positive donors are used. Studies of the mechanism of HBV reactivation after immunosuppression point to a rebound increase in lymphocyte numbers once immunosuppression is stopped, which results in destruction of infected hepatocytes and causes hepatitis. Cytokine analysis showed a decrease in CD4-CD25 T-regulatory cell numbers and an increase in antigen-specific cytotoxic T lymphocytes responsible for liver injury . Reactivation of HBV can initiate a cascade of events from hepatitis to acute liver failure and death. HBV reactivation may also force discontinuation of hematological treatment. Proper treatment of HBV infection should be started as early as possible, but recognition of reactivation can be difficult, since these patients are prone to drug-induced liver disease and other forms of viral hepatitis, which can delay the diagnosis. The ability of HBV to persist in a latent replicative form despite apparent viral clearance may also cause confusion . In cases of immunosuppression, prophylactic oral antiviral treatment is highly recommended for patients carrying a high risk of HBV reactivation , . In this study, our aim was to evaluate the seroprevalence of HBV among hematology patients and the changes in viral parameters after chemotherapy and bone marrow transplantation, and to assess the efficacy of antiviral prophylaxis given according to serological parameters. Subjects In this study, we retrospectively reviewed HBV serology among patients who underwent BMT or CT between 2012 and 2016 at the Izmir University of Economics, Medical Park Izmir Hospital Bone Marrow Transplantation Unit, documented changes in viral parameters throughout therapy, and assessed the efficacy of antiviral prophylaxis given according to serological parameters. We evaluated the viral parameters HBsAg, anti-HBs, anti-HBc, HBeAg, anti-HBe, and HBV DNA, the assays routinely carried out before BMT and CT. These assays are part of the pretreatment protocol and are not specific to this study.
Among patients with positive HBV serology, those with a diagnosis of chronic HBV infection who were already on antiviral treatment were not included. We grouped the patients as latent HBV infection (anti-HBc+, anti-HBs-, HBsAg-, HBV DNA-) and inactive carriers (HBsAg+, HBV DNA+, normal ALT) and monitored the efficacy of antiviral prophylaxis in these groups. We also documented changes in liver function tests and searched for signs of HBV reactivation. Among 584 patients, we observed changes in viral parameters in only the 3 patients mentioned above; no viral parameter changes were detected in the remainder. Hepatitis B is a global health problem, affecting 6% of the world population, with large regional variation in prevalence . HBV reactivation is an emerging complication, driven by the growing use of immunosuppressive therapies and organ transplantation. Today, BMT has become standard therapy for most hematological malignancies. However, immunosuppression protocols may reactivate HBV not only in HBsAg-positive patients but also in patients with past exposure to the virus, who can be identified by hepatitis B core antibody (anti-HBc) testing. In our retrospective study, we evaluated 584 patients treated with BMT or CT and found 20 patients with latent infection and 10 inactive HBV carriers before their hematological treatments. Our study showed the protective effect of antiviral prophylaxis given before immunosuppressive treatment. Cakar et al reported 5 patients with HBV reactivation and 2 patients with acute hepatitis B among 197 patients who underwent hematopoietic stem cell transplantation. They did not give prophylaxis to anti-HBc-positive, HBsAg-negative patients and observed no HBV reactivation in this group . Vigano did not use pretreatment prophylaxis and reported HBsAg seroreversion in 12% of HBsAg-negative/anti-HBc-positive patients . Without prophylaxis, the rate may be as high as 20% among patients undergoing autologous SCT and 9.1% among those undergoing allogeneic SCT . Mikulska et al reported an HBV reactivation rate of 10% in patients who were HBsAg negative but anti-HBc positive before allogeneic hematopoietic stem cell transplantation . They did not use HBV prophylaxis.
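As a minimal sketch of the serological grouping used in this study (the record format and flags are hypothetical simplifications for illustration, not the actual study database):

```python
# Assign a patient to the study's serological risk groups.
def classify_hbv_status(hbsag: bool, anti_hbs: bool, anti_hbc: bool,
                        hbv_dna: bool, alt_normal: bool) -> str:
    if anti_hbc and not anti_hbs and not hbsag and not hbv_dna:
        # anti-HBc+, anti-HBs-, HBsAg-, HBV DNA-
        return "latent HBV infection"
    if hbsag and hbv_dna and alt_normal:
        # HBsAg+, HBV DNA+, normal ALT
        return "inactive carrier"
    if not hbsag and not anti_hbc and not anti_hbs:
        # negative for all serum markers; occult HBV cannot be excluded
        return "seronegative"
    return "other / evaluate individually"

# Example: an anti-HBc-positive patient with otherwise negative markers
print(classify_hbv_status(hbsag=False, anti_hbs=False, anti_hbc=True,
                          hbv_dna=False, alt_normal=True))
# -> latent HBV infection
```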
Other studies have reported HBV reactivation rates in the range of 11–29% among patients whose serological markers were negative for HBsAg but positive for anti-HBc before BMT – . In our study, we continued antiviral prophylaxis for up to one year after cessation of immunosuppression, and it was effective, since we did not observe HBV reactivation in any patient. There are other proposals, such as the use of antiviral prophylaxis for more than 24 months . Long-term prophylaxis may be used selectively in patients anticipated to be at high risk. Another approach is close monitoring of HBV DNA and the use of antivirals in cases of reactivation . Despite the use of antiviral agents, HBV reactivation may be fatal, and the risk may increase up to 12% – – . Studies point to chronic onco-hematological disease, long duration of immunosuppression, the variety of chemotherapeutics, low anti-HBs titer or even loss of anti-HBs, and the type of BMT (autologous or allogeneic) as factors increasing the risk of HBV reactivation , , . In 3 of our patients, who were HBsAg and anti-HBc negative before BMT, we detected increased serum ALT levels on routine controls after transplantation. These patients had acute HBV infection, with HBsAg and anti-HBc IgM positivity. Rapid and effective antiviral treatment enabled us to control the infection. Close monitoring of the patients alerted us to the problem, with elevated ALT as the warning sign. Since the serological tests of these 3 patients were negative for HBV before BMT, the only explanation could be seronegative occult HBV infection in either the donor or the recipient. Occult HBV infection (OBI) can be defined as the persistence of HBV genomes in the liver (with detectable or undetectable HBV DNA in the serum) in individuals testing negative for HBsAg and positive or negative for anti-HBc IgG . OBI is an important condition in hematological diseases because an immunosuppressive state, induced mainly by immunotherapy or chemotherapy, can trigger OBI reactivation and cause acute and often severe hepatitis . HBV can also be transmitted through blood transfusion and liver transplantation, causing classic forms of hepatitis B in newly infected individuals. Although OBI is significantly associated with the presence of anti-HBV antibodies (anti-HBc and anti-HBs), more than 20% of occult-infected individuals are negative for all HBV serum markers . In seronegative OBI, only HBV DNA is detectable in serum or liver tissue, while anti-HBc IgG and anti-HBs are negative in serum . Both blood transfusion and organ transplantation increase the risk of OBI transmission. There are several studies on OBI in the literature. In Hong Kong, the prevalence of occult HBV was found to be 15.3% among HBsAg-negative stem cell donors . In Taiwan, the prevalence was 0.11% in blood donors . In Egypt, it varied from 4.1% to 26.8% in hemodialysis patients , . In Iran, the prevalence of OBI has been reported as 2 in 50,000 blood donors and 14% in cryptogenic liver cirrhosis patients, while the prevalence of seropositive OBI was 2.27% – . Thus, OBI is an emerging problem, especially in endemic areas, and HBV transmission from a donor or recipient with OBI is a well-known cause of HBV infection after periods of immunosuppression, which forms the basis for antiviral prophylaxis .
Vaccination of both HBV-naive donor and recipient before BMT may decrease the risk of acute HBV infection, especially in intermediate- and high-endemicity regions . Immunity to HBV gained by vaccination can disappear after transplantation, in up to 57% of cases, but interestingly there are reports of seroconversion in patients who were HBsAg positive before transplantation but became anti-HBs antibody positive after transplantation from an HBV-immune donor . In these cases, this is thought to be due to adoptive transfer of HBsAg-specific cytotoxic T lymphocytes from donor to recipient; the donor's immunity may derive either from vaccination or from natural infection . In conclusion, patients undergoing BMT or CT should be checked for viral serological markers before hematological treatment. It is important to be aware of the complications of HBV in these immunosuppressed patients. In our opinion, the best approach in inactive HBV carriers and in HBsAg-negative, anti-HBc-positive patients with planned immunosuppressive treatment is the use of antiviral agents before immunosuppression. Antiviral treatment is safe and effective in preventing HBV-related complications. Our study also showed that serological markers such as HBsAg, anti-HBs, and anti-HBc may not be adequate for the detection of occult HBV infection. Close monitoring of patients by both clinical and laboratory parameters is the key to recognizing HBV complications.
Poor outcome after debridement and implant retention for acute hematogenous periprosthetic joint infection: a cohort study of 43 patients
cc7bcf11-f727-4504-b37c-aea1a692f52c
9993346
Debridement[mh]
We performed a retrospective cohort review according to STROBE guidelines of all consecutive patients diagnosed with AHI following total hip or knee arthroplasty at a single tertiary center between September 2013 and February 2020. The patients were identified from prospectively collected data in an institutional quality register. We used the Delphi international consensus criteria to define PJI ( ). AHI was classified according to the Tsukayama classification, as abrupt symptoms of infection more than 3 months after implantation in an otherwise well-functioning total hip or knee arthroplasty ( ). Treatment success was regarded as free of infection and defined as absence of clinical and laboratory signs of infection and no signs of loosening at the latest follow-up with a minimum of 1 year. A manual chart review was performed. Patient characteristics, i.e., age, sex, body mass index (BMI), comorbidities, specific joint (hip or knee), index surgery (primary or revision), and previous PJI were registered. Clinical manifestation at the time of admission (body temperature > 37.5°C and purulence) and laboratory results (serum C-reactive protein [CRP], white blood-cell [WBC], and peripheral blood cultures) were also registered, as well as organisms cultured. Finally, type of surgical treatment and outcome were recorded. Treatment ( ) The surgical treatment was chosen individually according to patient- and implant-specific conditions. A DAIR procedure, including exchange of modular parts, was considered when there had been a short period of symptoms (4–6 weeks) and with well-fixed implants. If these criteria were not fulfilled, revision surgery with removal of the implant, or lifelong antimicrobial suppressive therapy, was considered. 6–8 tissue samples were obtained during surgery and cultured aerobically and anaerobically for 7 days. Empiric intravenous antimicrobial therapy with vancomycin and a beta-lactam was started after the tissue samples were obtained. Upon identification of the causative microbe, antimicrobial treatment was changed according to the pattern of antibiotic susceptibility. Antimicrobial treatment was given intravenously for 14 days followed by oral treatment typically continued for an additional 4 weeks. The patients were scheduled for follow-up at 6 weeks and at 3 and 12 months after discharge. Radiographs of the joint and laboratory tests with sedimentation rate (ESR), CRP, and leucocyte count were routinely obtained. Outcome The primary endpoint was free of infection at 1-year follow-up. Success was defined as absence of clinical and laboratory signs of infection and no signs of loosening of the implant. We used the Delphi criteria to define failure within 1-year follow-up ( ): (i) recurrence of infection, (ii) subsequent surgical intervention, and (iii) death due to PJI. Statistics Descriptive statistical analyses were performed using SPSS, version 26 (IBM Corp, Armonk, NY, USA). Ethics, data sharing, funding, and disclosures Ethics approval from the institutional review board was obtained (20/19708). Data sharing is not possible. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Completed disclosure forms for this article following the ICMJE template are available on the article page, doi: 10.2340/17453674.2023.10312
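A minimal sketch of these outcome definitions, assuming a simple per-patient follow-up record (the field names are hypothetical, not the actual register variables):

```python
from dataclasses import dataclass

@dataclass
class FollowUp:
    recurrence_of_infection: bool
    subsequent_surgical_intervention: bool
    death_due_to_pji: bool
    clinical_or_lab_signs_of_infection: bool
    signs_of_loosening: bool

def treatment_failed(fu: FollowUp) -> bool:
    """Delphi-based failure within 1-year follow-up: (i) recurrence of
    infection, (ii) subsequent surgical intervention, or (iii) death due
    to PJI."""
    return (fu.recurrence_of_infection
            or fu.subsequent_surgical_intervention
            or fu.death_due_to_pji)

def treatment_success(fu: FollowUp) -> bool:
    """Success: no failure criterion met, no clinical/laboratory signs of
    infection, and no signs of implant loosening at latest follow-up."""
    return (not treatment_failed(fu)
            and not fu.clinical_or_lab_signs_of_infection
            and not fu.signs_of_loosening)
```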
We identified 43 consecutive patients with AHI during the study period. 26 of the patients were men, and median age was 75 years (range 43–92). The median ASA score was 3, and 31/43 were categorized as ASA score 3 or 4. Demographic data are presented in . Clinical findings 28 of the 43 AHIs were in a primary arthroplasty and the hip was the most affected joint (31/43). 15 of 43 implants had previously been revised, 5 because of infection. Infection-free period from joint replacement, either a primary arthroplasty or a revision, to treatment for AHI was a median 6.5 years (range 0.3–23). Median duration of symptoms was 7 days (range 2–120), and 38 patients had had symptoms for less than 4 weeks. All patients had a painful joint. 22 patients presented with a fever and the median CRP was 229 mg/L (range 3–571). 32 patients had a raised WBC >10 × 10 9 /L, median 13 (range 3.4–30). Median temperature was 37.6°C (range 35.6°–40.3°). A blood culture was obtained preoperatively in 38 of 43 patients, with a positive culture identical to the microbe identified in biopsies from the joint in 22 of the infections. None of the patients presented with a sinus tract, but in all 40 operatively treated cases there was purulence surrounding the implant during surgery. In 15/43 patients, a distant infection or a procedure predisposing to bacteremia was identified and considered to be the source of the AHI ( ). The median follow-up was 22 months (range 0.1–96). The mortality rate was 8/43 at 2 years and 12/43 at the latest follow-up. Median time to death was 10 months. Microbiology The AHIs were culture-positive in 41/43 cases.
The most frequently isolated organisms were Staphylococcus aureus and streptococcus species, but as many as 16 different microbes were identified ( ). No methicillin-resistant S. aureus (MRSA) or polymicrobial infections were found. Treatment Surgical treatment was performed in 40/43 patients. In 3 severely ill patients, no surgery but lifelong suppressive antimicrobial treatment was chosen due to high surgical risk. 25 infections were treated with a DAIR procedure, and modular components could be exchanged in 18 of the 25 DAIR procedures. Time from start of symptoms to DAIR procedure was median 7 days (range 2–120). The patient with 120 days of symptoms was an outlier where the implant was found not to be replaceable. The success rate of a DAIR procedure was 10/25. The failure rate was not affected by exchange of modular components. In 8 patients, the implants were loose, and hence not available for a DAIR procedure. Further, in 1 patient the implant was old and with much wear, and a revision arthroplasty was also performed. 5 patients with increased comorbidity were not found fit enough to go through major revision surgery in 2 stages, and hence a resection arthroplasty (Girdlestone procedure) was performed with a successful outcome in 4/5 patients. In 1 patient, a knee arthrodesis in 2 stages was successfully performed. Median time to removal of implant was 10 days (range 4–35). Treatment success of removal of the implant as the primary treatment for AHI was 14/15, significantly better than that of a DAIR procedure (10/25). An overview of primary treatment outcome is presented in . 15 of the patients treated with a DAIR procedure failed the initial treatment. 9 of these 15 were successfully treated in a second procedure with either a 2-stage revision (n = 5), a new DAIR (n = 1), a Girdlestone procedure (n = 1), or a femoral amputation (n = 2). In 1 patient, a 2-stage revision was performed with no success and followed by antibiotic suppression. In addition, 3 other patients were treated with suppressive antimicrobial therapy without a second surgical procedure. 2 patients were treated with a second surgical procedure, a DAIR and a resection arthroplasty, but died shortly after surgery. While the median time from the index arthroplasty to onset of AHI was 6.5 years, the implant age was less than 2 years in 13 of the patients, of whom 12 were surgically treated. The success rate was poorer among these implants compared with the older implants, with a success rate of 4/12 versus 20/28, respectively. These young implants had also more often been previously revised for an infection, 4/13 versus 1/30. When comparing the implants successfully surgically treated with the failures, the mean implant age was likewise older in the success group, 111 months (SD 72) versus 47 months (SD 48). Success rates following surgical treatment for AHI were lower among patients with an S. aureus infection (6/15 versus 18/25), but this was not found when analyzing the DAIRs separately. No other associations between organism and treatment outcome were found. The success rates were also lower overall in knee arthroplasties compared with hip arthroplasties (3/10 versus 21/30). An overview of clinical findings and treatment outcome is presented in .
Our main finding was that the outcome following a DAIR procedure in patients with AHI was poor, and significantly lower than in patients treated with removal of the implant. There are some characteristics previously described, such as older patients, more comorbidities and a more prominent clinical presentation with fever, purulence, and highly elevated inflammatory markers ( , ). This is in line with our observations. Konigsberg et al. suggest that AHI may be a marker of poor general health that predisposes to infection. They reported a high 2-year mortality rate (25%) ( ). We found a mortality rate of 28% (12/43) at the latest follow-up, and 8 of 43 (19%) had died within 2 years. These rates are higher than previous reports on mortality following PJI (13.6% at 2 years) ( ), but still lower than in patients with PJI following hemiarthroplasty for an acute hip fracture (47–50% at 1 year) ( , ). The knee joint has previously been reported as more likely to be affected by AHI than the hip, often explained by the poorer soft-tissue envelope and larger metal surface ( , , , ). We found more hips than knees, which may be due to the relatively low number of total knee arthroplasties compared with hip arthroplasties that have been performed over the years in Norway ( ). We did find, though, that the success rate was poorer in knees compared with hips, 3/10 versus 21/30, respectively. This may be explained by more complicated surgery around the knee, a limited soft-tissue envelope, and a larger implant surface. Half of the knee prostheses in our series had previously been revised, which may further explain the poorer treatment result. A revision arthroplasty has been reported as a risk factor for DAIR failure ( ). Our results also confirmed the different and broad pathogen spectrum causing AHI reported by others ( , ). Whereas acute postoperative PJIs are dominated by staphylococci and the rate of polymicrobial infections may be quite high (30%), AHIs are almost exclusively monomicrobial ( , , ). None of the infections in the present study was polymicrobial. 13/43 (32%) of infections were caused by streptococci. This is in accordance with previous observations of streptococcal infections (33–39%) ( , , ), all much higher than reported in acute postoperative PJIs ( ).
In total, 16 different organisms were identified in our series, dominated by virulent microbes; this may reflect the risk of bacteremia in patients with poor health status and the fact that some of the microbes, like streptococci, have an affinity for prosthetic implants. The source of infection was identified in only 15 of the 43 patients, with skin infections as the most frequent. This is in line with previous findings, with a cutaneous source reported to be a leading cause of AHI (15–26%) ( , , ). The 2 infections with S. epidermidis were both associated with implantation of a vascular device, an observation also made by Rakow et al. ( ). They suggested looking for the infection source in intravascular devices promptly when coagulase-negative staphylococci grow in blood culture. Identification of the source of infection varies between studies, and this may be caused by difficulties in identifying the source but may also reflect lack of a systematic search for the primary infection focus. Rakow et al. advocate a systematic work-up to identify the source of infection in order to avoid recurrence and to optimize the treatment ( ). A DAIR procedure is recommended as the treatment approach for postoperative PJIs and AHIs by current international guidelines ( ). However, several studies have reported unsatisfactory results in treating AHIs with a DAIR, with success rates ranging between 44% and 57% ( , , , ). This was also confirmed in our study. The virulence of the microbes, and the fact that the DAIR procedure in this situation is performed during a concomitant bacteremia and hence is prone to further bacterial seeding, may be potential explanations for the poorer results. It could also be explained by continuous seeding of bacteria due to an unrecognized primary source of infection. Further, the health status of these patients often seems poor, which also may play a significant role. A DAIR procedure is for instance reported to be less successful in acute postoperative infections following a hemiarthroplasty in elderly patients with a hip fracture, often explained by their poor host status and frailty ( , ). Finally, it is also possible that the infection is an acute manifestation of a chronic PJI, misclassified as an AHI. The latter may be supported by our findings of significantly poorer treatment results in patients in whom the implant was younger than 2 years, though this is not confirmed in prior studies ( , , ). These patients had also more often previously been revised for an infection. While the results of a 1- or 2-stage revision are well documented, there are some concerns regarding the results when resection arthroplasty is applied as a salvage procedure following a failed DAIR procedure ( ). We found that a 1- or 2-stage revision arthroplasty seemed a safe option as the initial treatment, but also satisfactory as a salvage procedure. A 2-stage procedure was performed successfully in 5 of 7 cases with a failed DAIR in our study. This finding is also supported by others ( , ). The success rate following implant removal was good (14/15) and comparable to previous reports on resection arthroplasty ( , ). Due to the lack of reports on AHI treated with implant removal, the results are somewhat difficult to compare with identical patients, but a few series are reported. Rodríguez et al. treated 9 patients with a 2-stage revision, and 7 with a resection arthroplasty in a series of 50 AHIs, with a success rate of 87% ( ). Wouthuyzen-Bakker et al.
reported on 20 1-stage, 78 2-stage, and 7 Girdlestone procedures as primary treatment of AHI. A significantly better outcome (75%) compared with DAIR (55%) was reported ( ). Removal of the implant may hence be a safer treatment option for some patients with AHI and should probably more often be considered. In our study, the success rate of S. aureus infections was 6/15, significantly lower than in non-staphylococcal infections at 18/25. S. aureus has in general been associated with treatment failure of PJI, especially after DAIR procedures ( , ), but in other studies this association has not been confirmed ( , , , ). Wouthuyzen-Bakker et al. found that staphylococcus spp. had lower treatment success in AHIs compared with acute postoperative PJIs, and they suggest that a DAIR procedure in staphylococcal AHI should be reconsidered ( ). The poor results in staphylococcal infection are explained by the virulence, the biofilm production, and frequent antibiotic resistance ( ). In our series, none of the infections were due to MRSA, though. S. aureus also can remain dormant in a biofilm for years, and this may cause chronic infections to be misdiagnosed as AHI and hence explain a poorer result of a DAIR procedure ( ). Strengths and limitations There are several limitations to the study. Even though the patients were prospectively registered, the study has a retrospective design with its associated limitations. The sample size was relatively small, which limits the possibility for analysis. The diagnosis of AHI comes with uncertainties, and some infections may have been chronic infections. The types of antibiotics used were not registered. Our study also has some strengths. AHI is relatively rare, and the prospective registration of our cohort over 7 years results in one of the largest reported. We therefore believe that our cohort is representative of AHIs in general and that our findings reflect daily clinical practice. Conclusion AHIs treated with DAIR had lower treatment success compared with implant removal. The majority of infections were caused by virulent microbes, and the mortality rate was high. Bacterial virulence, patient frailty, and continuous bacterial seeding to the joint from a distant source could all be contributory factors to our findings. Thus, the DAIR procedure may be a viable treatment option in some AHIs, but in patients with an implant age < 2 years, and in S. aureus infections, revision surgery with implant removal should be considered.
The personalized cancer network explorer (PeCaX) as a visual analytics tool to support molecular tumor boards
08575a42-56c1-43e9-b4d0-6d81e1064c79
9993744
Internal Medicine[mh]
Cancer is typically caused by genomic alterations inducing unchecked cellular proliferation. In personalized oncology , molecular data (e.g., genomics) are used jointly with clinical data to stratify therapies and choose the therapy best suited for a specific patient. Next Generation Sequencing (NGS) is widely used to find such genomic alterations, e.g., single-nucleotide variants (SNVs), copy number variations (CNVs), or gene fusions. Based on these data, the typical analysis workflow is as follows: (1) the (cancer) genome of a patient is sequenced, and the SNVs and CNVs are stored in a Variant Call Format (VCF) file; (2) variants are annotated with their effect; (3) usually, only variants with a strong effect are retained; (4) the remaining variants are looked up in databases to identify driver genes and to find drugs associated with these potential targets. If no drug can be found for a specific variant, or the drug is not applicable for the patient, pathways containing the related gene are considered in order to find druggable targets up- or downstream of the actionable variant. This process of revealing drug-gene information based on the variant annotations and incorporating pharmacogenomic information, an indicator of the effect of the genes on a patient's drug response, is called clinical annotation. Numerous tools exist to display and store the information of a VCF file in the common tab-separated values format (step 1) and to filter the variants for given annotations (step 3), e.g., VCF-Miner , BrowseVCF , VCF-Explorer . However, only a few applications also include the analysis of the SNVs and CNVs and the annotation of the variant effect (step 2), e.g., VCF-Server . Perera-Bel et al. offer the additional option to find drugs targeting the variants (step 4), but their method does not perform variant effect prediction (step 2) and is limited to a specific data structure . It also lacks a graphical user interface (GUI). OncoPDSS performs steps 1 to 4, but it is a web server, which can pose a data privacy issue; therefore, OncoPDSS does not store the input or the results of the analysis . The results are displayed in unsearchable tables focusing on available pharmacotherapies, which can be downloaded as TSV files, but no information on the pathway context of a gene is given. So far, this information has had to be collected manually. We present PeCaX (Personalized Cancer Network Explorer), an integrated application for personalized oncology workflows. PeCaX performs clinical variant annotation by processing SNVs and CNVs and identifying clinically relevant variants and their targeting therapeutics using ClinVAP . Networks containing the connections between the driver genes and the genes in their neighborhood, as well as drugs targeting genes in this network, are created through the novel SBML4j  and visualized with BioGraphVisart , developed specifically for PeCaX. Our user-friendly, web-based graphical user interface not only interactively displays the report generated by ClinVAP and the networks with a few clicks, but also adds web links to external gene and drug databases and offers the option to take notes, which are stored in PeCaX along with the information presented in the tables and networks. This allows the user to interactively work on and present the results, e.g., in a Molecular Tumor Board (MTB), where clinicians from different areas meet, and is aimed especially at researchers in personalized oncology. The report can be downloaded as a PDF.
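As a rough sketch of steps 2–4 of this workflow (the effect labels, the toy drug lookup table, and the record format are hypothetical stand-ins for real annotation services and knowledge bases):

```python
# Effects considered "strong"; real pipelines use VEP consequence terms.
STRONG_EFFECTS = {"stop_gained", "frameshift_variant", "missense_variant"}

# Hypothetical knowledge base: gene -> drugs targeting it.
DRUG_DB = {"BRAF": ["vemurafenib"], "EGFR": ["erlotinib"]}

def triage(variants):
    """Keep strong-effect variants (steps 2-3) and look up targeting
    drugs for the remaining genes (step 4)."""
    actionable = []
    for v in variants:  # step 1 yielded these variant records
        if v["effect"] not in STRONG_EFFECTS:
            continue
        actionable.append({**v, "drugs": DRUG_DB.get(v["gene"], [])})
    return actionable

# Variants without a drug hit would next be examined in their pathway
# context to find druggable up-/downstream genes.
print(triage([{"gene": "BRAF", "effect": "missense_variant"},
              {"gene": "TP53", "effect": "synonymous_variant"}]))
```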
In addition, the networks are available for download in publication-ready file formats (PNG, SVG and GraphML). In contrast to many VCF-analysis and clinical decision support tools, PeCaX is a local application with a graphical user interface working on the user's local machine, avoiding the data privacy issues arising from the use of cloud-based services. Technical overview PeCaX is a service-oriented local application and has been built using the NuxtJS framework for its web-based front end; it integrates several other local services developed by us via REST APIs (see Fig. ). It was developed in close interaction with the persons responsible for MTB case management, scientific analysis, case preparation and presentation at the University Hospital Tübingen to ensure a user-friendly user interface. It supports concurrent use and works in any modern web browser, independent of the operating system. Its design as a web service allows access from browsers not running on the same machine as the service. Sensitive data is only processed on the machine PeCaX is installed on, not the machines that access it via the GUI. It is easy to deploy via pre-built Docker containers and easily integrated using docker compose. The individual Docker containers and the communication via REST APIs allow the services to be updated individually without the need to set up everything from the start. Clinical variant annotation PeCaX relies on VCF files for information on SNVs and TSV files for optional CNV information. Examples are available in the Additional file : Sect. 2.1.1. The validity of files to be uploaded is checked by the filename extension (.vcf, .tsv). If an uploaded file contains semantic or syntactic errors, the analysis process is aborted and the user is notified that the input file is corrupt. PeCaX integrates ClinVAP  to create a case report by processing variants using functional and clinical annotations of the genomic aberrations observed in a patient. ClinVAP employs the Ensembl Variant Effect Predictor (VEP)  to obtain the functional effects of the observed variants and filters them based on the severity of the predictions. It also performs clinical annotation, which reveals the driver genes and actionable targets and enriches them with their known therapeutic associations using an integrated knowledge base built from publicly available databases (e.g., COSMIC , CGI ) . Moreover, it provides an option to filter the results based on the diagnosis given as an ICD10 code, which was achieved by obtaining the gene-disease links from the background databases and mapping those diseases to their corresponding ICD10 codes. The mapping between the disease names from the databases and from ICD10 is done by matching their disease-related features, such as system, organ, and histology type. As soon as the annotation is finished, the variant files are deleted and PeCaX receives the resulting report as a JSON file with information structured into five categories: known driver genes, drugs targeting the variants, therapeutics targeting the affected genes, cancer drugs targeting the mutated genes, and drugs with known adverse effects (Additional file : Sect. 2.3.1).
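A hedged sketch of consuming such a report downstream; the JSON key names below are invented placeholders for the five categories, not the actual ClinVAP schema:

```python
import json

# Hypothetical keys for the five report categories named above.
CATEGORIES = [
    "known_driver_genes",
    "drugs_targeting_variants",
    "therapeutics_targeting_affected_genes",
    "cancer_drugs_targeting_mutated_genes",
    "drugs_with_known_adverse_effects",
]

def genes_per_category(report_path: str) -> dict:
    """Collect the gene symbols listed in each report category, e.g. as
    input for the subsequent network generation step."""
    with open(report_path) as fh:
        report = json.load(fh)
    return {cat: sorted({entry["gene"] for entry in report.get(cat, [])})
            for cat in CATEGORIES}
```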
Network generation Cancer is a complex and heterogeneous disease typically caused by genomic alterations. Even a single mutation can modulate the complex interaction network of genes to cause cancer phenotypes. Since these mutations can occur in arbitrary genes, it is useful to understand the role of the altered genes in their physiological context, i.e., within the context of their regulatory networks. By examining the network neighborhood of an altered gene, potential new treatment approaches can be identified for patients without other treatment options (e.g., through targeted therapies). Examining the interplay of gene–drug interactions using networks gives insights into the effect of an intervention, for example, for a patient resistant to a drug. If the altered gene is not a drug target, or cannot be targeted because of drug resistance or intolerance of the patient, the genes up- or downstream of it might be suitable drug targets. Thus, PeCaX sends the list of genes of each category to SBML4j , a service for persisting biological models and pathways in SBML format in a graph database (Additional file: Sect. 2.2). Its graph database is used to extract information on the local network context of each candidate gene. Based on cancer-related pathways from KEGG , PeCaX can thus infer which related genes are up- and downstream of a candidate gene with respect to gene regulation and signalling. Additionally, SBML4j provides information about drugs associated with any of the candidate genes as well as with their up-/downstream genes.
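The sketch below illustrates this idea on a toy directed network; in PeCaX the graph lives in SBML4j's database and is queried via its REST API, so the networkx structure and the drug table here are stand-ins, not the actual service interface.

```python
import networkx as nx

# Toy regulatory network standing in for the SBML4j graph database.
G = nx.DiGraph()
G.add_edge("EGFR", "KRAS", interaction="signaling")
G.add_edge("KRAS", "BRAF", interaction="signaling")
G.add_edge("BRAF", "MAP2K1", interaction="signaling")

# Drug-target associations (illustrative subset).
drugs = {"EGFR": ["cetuximab"], "BRAF": ["vemurafenib"], "MAP2K1": ["trametinib"]}

def network_context(gene):
    """Collect up-/downstream neighbors of a candidate gene and any drugs
    known to target genes in this local neighborhood."""
    upstream = list(G.predecessors(gene))
    downstream = list(G.successors(gene))
    targetable = {g: drugs[g] for g in [gene, *upstream, *downstream] if g in drugs}
    return upstream, downstream, targetable

# KRAS itself is hard to target, but its neighborhood offers options.
print(network_context("KRAS"))
# (['EGFR'], ['BRAF'], {'EGFR': ['cetuximab'], 'BRAF': ['vemurafenib']})
```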
Interactive graphical user interface PeCaX provides a simple graphical user interface to upload variants and display the results. The report generated by ClinVAP is displayed in an interactive tabular form next to the networks generated by SBML4j. For easier analysis, the networks are visualized as network graphs using BioGraphVisart . The goal of the network analysis is to see not only the individual components but also the local neighborhood crosstalk with known pathways and nearby options for therapeutic intervention (druggable genes). BioGraphVisart is a web-based tool written in Javascript. It automates the layout of the network graph, the labeling of nodes (genes, drugs) and edges (interactions), the edge style for different interaction types, the node coloring according to easily modifiable node attributes, and the generation of legends. In addition, human genes and proteins can be grouped with respect to predefined pathways from KEGG. Data management The user has to give a project name before starting an analysis. This name is used to create a collection in the local database (ArangoDB) used for data management. In this way, the user can perform multiple analyses gathered in one collection, e.g., for different patients presented in one MTB session. When an analysis is started, the uploaded data and the chosen parameters are sent via REST API to the ClinVAP container, and a unique job ID is generated under which the parameters are stored. The uploaded data is only stored during the clinical annotation and is removed afterwards. The results of the analysis, as well as the IDs of the networks generated with SBML4j, are stored under the job ID. The networks themselves are stored in the network database.
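A minimal sketch of this data model is given below, with an in-memory dictionary standing in for the ArangoDB project collections; the document schema is an assumption for illustration.

```python
import uuid

# In-memory stand-in for the per-project ArangoDB collections used by PeCaX.
projects = {}

def start_analysis(project, parameters, report, network_ids):
    """Store an analysis under a fresh job ID inside its project collection."""
    job_id = str(uuid.uuid4())
    projects.setdefault(project, {})[job_id] = {
        "parameters": parameters,       # e.g. genome assembly, ICD10 filter
        "report": report,               # ClinVAP JSON result
        "network_ids": network_ids,     # references into the network database
    }
    return job_id

def delete_job(project, job_id):
    """Remove all information stored for a job ID (data privacy)."""
    projects[project].pop(job_id, None)

job = start_analysis("MTB_2024_05", {"assembly": "GRCh38"}, report={}, network_ids=[])
delete_job("MTB_2024_05", job)
```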
PeCaX can be installed locally on a personal computer or, for groups of users, in an access-controlled intranet. Containerization enables convenient deployment without complex software installation and configuration. Overview of PeCaX PeCaX is a comprehensive GUI-based clinical decision support tool that requires no programming knowledge. Users can perform clinical annotation and gene–drug interaction network analysis via the interactive graphical interface. PeCaX provides data security as it comes in Docker containers and all analyses are performed on local infrastructure. It is supported by all modern web browsers across platforms. Hence, it is easily integrated into diagnostic and MTB workflows to investigate the relevance of single variants, complete cases or cohorts, e.g., from GWAS. We provide a web page with example data for demonstration purposes only at https://pecax.informatik.uni-tuebingen.de . Data upload To annotate and analyze a data set, PeCaX requires (local) submission of the data. All data is assigned to a specific project, a generic way of grouping data sets and results (e.g., one project per tumor board meeting or for one tumor entity). PeCaX requires one VCF file as input containing information on SNVs and (optionally) a TSV file containing information on CNVs. Details on format requirements and conventions are found in the Additional file: Sect. 2.1.1.
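Programmatic submission against a local instance might look like the following sketch; the endpoint path, form-field names and response schema are hypothetical, since only the GUI upload is documented here.

```python
import requests

# Hypothetical endpoint and field names for a local PeCaX instance.
URL = "http://localhost:3000/api/upload"

with open("patient.vcf", "rb") as vcf, open("patient_cnv.tsv", "rb") as cnv:
    response = requests.post(
        URL,
        files={"vcf": vcf, "cnv": cnv},             # the CNV file is optional
        data={
            "project": "MTB_2024_05",               # groups related analyses
            "assembly": "GRCh38",                   # or GRCh37
            "icd10": "C34",                         # optional diagnosis filter
        },
        timeout=600,
    )
response.raise_for_status()
job_id = response.json()["job_id"]                  # assumed response schema
print("analysis stored under job", job_id)
```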
In addition, the assembly of the human reference genome used in the mapping of the sequencing data is required (both GRCh37 and GRCh38 are supported). The clinical variant annotation can be filtered by a pre-selected cancer diagnosis given as an ICD10 code. After uploading, the data is automatically submitted to a local instance of ClinVAP. In order to ensure data privacy, the VCF files are removed from the containers after processing by ClinVAP. The results of the variant annotation are stored in the project database in JSON format together with associated metadata (e.g., project name and the job ID). The user can also upload a previously downloaded JSON file, which skips ClinVAP, or enter the job ID of a previously executed analysis to get directly to the analysis part of the UI (Additional file: Sect. 2.4). All job IDs of a given project are listed on a subpage where the user can select and delete them individually. Deletion of a job ID removes all information stored for this ID in the project database as well as the generated networks from the network database to ensure data privacy. Interactive visualizations The results of the clinical variant annotation are structured into several sections, which are all rendered as interactive, responsive tables. The first section contains the list of known cancer driver genes along with the somatic mutations observed in the patient. The list of drugs with evidence of targeting a specific variant of the mutated gene, together with the documented drug response for the given mutational profile, is displayed in the second section. The third section contains information on somatic mutations in genes that are pharmaceutical targets and consists of two tables: therapies that have evidence of targeting the affected gene, and the list of cancer drugs targeting the mutated gene. The fifth section contains the list of drugs with known adverse effects. References supporting the results found are displayed in a sixth section, and all the somatic variants of the patient with their dbSNP and COSMIC IDs are listed in the last section. Figure shows an example table of the results visualization. Each column of each table can be queried, filtered and sorted individually (Fig. (1)). The interactive table view supports a wide range of table operations in order to simplify navigation of the data, for example, hiding/showing columns (Fig. (2)) and highlighting of rows across sections (Fig. (3)). For each gene listed, the tables contain links to various external data sources such as UniProt , KEGG  or Ensembl . The links can be accessed via the drop-down menu next to the gene symbol (Fig. (4)) and open in a separate browser tab or window. Likewise, the references given in the tables are directly linked to the web page of the related publication on PubMed or clinicaltrials.gov (Fig. (7)). At the end of each section, users can add notes to be stored along with the annotated data in the internal database (Fig. (5)). These notes can be used to record conclusions from the analysis of the data and can be downloaded together with the table as a PDF (Fig. (6)).
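Conceptually, the interactive table operations correspond to standard data-frame manipulations, as the following sketch with illustrative column names shows; the GUI performs these client-side rather than via pandas.

```python
import pandas as pd

# Mock rows mimicking one report section; column names are illustrative.
table = pd.DataFrame(
    {
        "gene": ["KRAS", "BRAF", "TP53"],
        "variant": ["G12D", "V600E", "R175H"],
        "drug": [None, "vemurafenib", None],
        "evidence": ["A", "A", "C"],
    }
)

# The GUI offers these operations interactively; expressed here as pandas calls.
filtered = table[table["evidence"] == "A"]          # filter by column value
ordered = filtered.sort_values("gene")              # sort a column
visible = ordered.drop(columns=["evidence"])        # hide a column
print(visible)
```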
When at least one gene symbol of a table can be associated with an entry in the SBML4j database, a network for this table is generated and displayed next to it (see Fig. ). The networks consist of nodes (genes, drugs) and edges (interactions). Genes found in the table are colored red and labelled with the gene symbol. Drugs associated with any of the genes in the network are represented as diamonds and are labelled with the drug name (Fig. (1)). If multiple drugs have the same gene target, they are merged into one expandable node in order to make the network representation visually more concise. Different interaction types (e.g., signaling, regulation) are depicted by different edge styles (Fig. (2)). If two nodes have multiple interactions, their edges are merged into one; this behaviour can be deactivated by the user (Fig. (3)). Since drug and gene names can become very long, they are shortened, and moving the mouse over a node reveals the full node name; edge types are treated in the same manner (Fig. (4)). The layout can be changed by the user based on five different layout types, or the user can drag the nodes or the whole network manually to arrange them in the most informative way (Fig. (7)). The nodes are searchable by node label (Fig. (9)) and can be deleted together with their connected edges. Gene nodes can be grouped and highlighted by associated KEGG pathways (Fig. (5)). A mouse click on a drug node links to an overview page with links to external databases containing information on this drug, such as DrugBank , HGNC and PDB . Exporting tables and networks The clinical variant annotation report, including the stored user-created notes and manual annotations, can be downloaded as a whole in PDF and JSON format, or every table can be downloaded individually in PDF format (Fig. (6)). The gene–drug interaction networks are available for download individually in the formats PNG, SVG (Fig. (12)) and GraphML.
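GraphML is an attributed XML graph format, so the exported networks can be re-opened in tools such as Cytoscape or networkx. The sketch below illustrates the format in general terms and is not BioGraphVisart's internal export code.

```python
import networkx as nx

# Rebuild a small gene-drug network and export it; GraphML preserves node and
# edge attributes such as the node kind and the interaction type.
G = nx.DiGraph()
G.add_node("BRAF", kind="gene")
G.add_node("vemurafenib", kind="drug")
G.add_edge("vemurafenib", "BRAF", interaction="targets")

nx.write_graphml(G, "braf_network.graphml")

# Round-trip check: attributes survive the export.
H = nx.read_graphml("braf_network.graphml")
print(H.nodes["BRAF"]["kind"])                      # -> gene
```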
Performance The processing time was evaluated using the example data provided in Additional file: Sect. 2.1.1. From submitting a VCF file until the full report is displayed, PeCaX needs about 92 s on average (see Table ). The time needed for clinical annotation (45.48 s) is less than that for network generation (58.19 s). For a VCF file in combination with a related TSV file, the analysis takes about 352 s on average; here, too, network generation takes longer than clinical annotation. Overall, PeCaX needs 205 s on average for the analysis of the data until the results are displayed. A detailed performance evaluation of the processing time for the example data is recorded in Additional file: Sect. 6. A detailed performance evaluation of ClinVAP and the results of a stress test on large-scale data have been published previously .
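Such wall-clock figures can be reproduced with a simple timing harness around the submission call, as in this sketch (the endpoint is the same hypothetical one assumed above).

```python
import time
import requests

# End-to-end timing of one (hypothetical) analysis request against a local
# PeCaX instance, mirroring how wall-clock processing time is reported.
start = time.perf_counter()
with open("patient.vcf", "rb") as vcf:
    requests.post("http://localhost:3000/api/upload",
                  files={"vcf": vcf}, timeout=600).raise_for_status()
print(f"end-to-end processing took {time.perf_counter() - start:.1f} s")
```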
The individual nature of the genomic alterations causing cancer directly implies a personalized, or at least stratified, approach to treating cancer if the underlying alterations are known. With the rapid drop in sequencing cost, sequencing has become routine for most cancers, but the interpretation of this data is still a major hurdle to the clinical implementation of personalized oncology. With PeCaX we present a novel tool for the exploration of the mutational landscape of a cancer patient and for treatment hypothesis generation, e.g., in the context of Molecular Tumor Boards. It is deployed in Docker containers, guaranteeing full reproducibility independent of the operating system, and as a local application it ensures data security and privacy. A local results database is used to keep track of the results and the notes taken by the user. The combination of clinical variant annotation, gene–drug interaction networks visualizing somatic variants in their pathway context, and interactive web-based visualizations makes PeCaX unique and ensures ease of use for all users without the requirement of programming experience.
Its service-oriented architecture, the front end, the graph database in the back end and the interactive graph visualization components each constitute significant developments, and their combination in PeCaX is a significant advancement over the mere tabular annotation of somatic variants. PeCaX supports the diagnostic workflow as conducted in Molecular Tumor Boards, helping to reach transparent personalized therapeutic decisions in a shorter amount of time. In the future, we plan to allow the upload of multiple VCF files at once for an easier comparison between patients. The comparison between patients and the screening for possible treatments could be further improved by the integration of gene expression values. Standard VCFs do not contain gene expression values; however, this information might be added to PeCaX, as the visualization of those values is already available in BioGraphVisart as a standalone tool. Moreover, the information content for cancer diagnosis and targeted therapy could be enriched by integrating a gene fusion detection algorithm into the clinical variant annotation, as fusion genes show tumor-specific expression. PeCaX might also include information on the patient’s background provided by the user, e.g., gender, age and previous therapies, to prioritize possible treatments based on demographic limitations. Additionally, a quantitative usability study needs to be performed to confirm the added value for the interpretation of gene alterations, especially in the context of Molecular Tumor Boards.
Project name: PeCaX
Project home page: https://github.com/KohlbacherLab/PeCaX-docker
Demo website: We provide a web page with example data for demonstration purposes only at https://pecax.informatik.uni-tuebingen.de .
Operating system: Platform-independent via Docker containers
Programming language: Javascript
Other requirements: Docker Engine release 1.13.0 or higher, Compose release 1.10.0 or higher
License: MIT
Any restrictions to use by non-academics: Some functionality requires a valid license for the KEGG pathway database for commercial use.
Additional file 1. Components, Installation & Availability, Use Case, Performance.
Cardiogenic control of affective behavioural state
9295192d-084c-4c1f-879d-09628da597de
9995271
Physiology[mh]
Interoceptive processing of visceral physiological signals, such as cardiac palpitations or stomach fullness, is crucial for maintaining homeostasis – . Diverse psychiatric conditions, such as anxiety disorders, panic disorder, body dysmorphic disorders and addiction, have been hypothesized to be related to dysregulation of interoceptive monitoring by the brain , , and can be statistically correlated with specific visceral organ dysfunction. For example, patients with panic disorder and agoraphobia are more likely to have mitral valve prolapse or clinical symptoms similar to paroxysmal supraventricular tachycardia , . Modern correlative studies have further suggested links between cardiac changes and affect regulation , , including correlations between cardiac interoception with anxiety and functional alterations in the insular cortex, a cortical region that has a central role in both the processing of physiological signals and the regulation of emotions , . However, determining whether primary physiological signals such as increased heart rate can causally influence behavioural states, as proposed in classical physiological theories of emotion , has—although widely debated—remained largely experimentally intractable . Available nonspecific interventions that might disrupt cardiac signals (such as electrical vagus nerve stimulation) are well known to also induce numerous physiological changes that would be unwanted in this context, including direct suppression of respiratory and heart rates as well as anxiolytic and antidepressive effects – , giving rise to multiple direct confounds for the question explored here. Other nonspecific interventions to alter cardiac rhythms, such as broadly active pharmacological stimulants or electrical pacemakers , also introduce insuperable confounds through initial actions beyond the direct pacing of cardiomyocytes, and thus lack the necessary precision. Studying the key question of how cardiac physiology regulates emotional states has remained inaccessible, and the effects on behaviour remain unknown. Precise modulation of electrochemical signals in the heart and other peripheral organs in vivo would enable fundamental studies of physiology and interoceptive signalling – , but stimulation methods that operate with high spatial and temporal precision in highly dynamic environments such as the beating heart – are limited. Electrical pacemakers require invasive surgical implantation to deliver local indiscriminate stimulation that lacks cell-type specificity – . Optogenetics might in principle facilitate cardiomyocyte-specific control with high spatial and temporal precision , but existing optogenetic methods have been limited to acute demonstrations that require exposure or even excision of the heart to deliver light – , all of which are incompatible with freely moving studies of behaviour. Thus far, to our knowledge, no study beyond the brain has achieved precise and noninvasive organ-level control of behavioural or physiological function. Establishing noninvasive approaches to manipulate physiology with cell-type specificity would enable long-sought functional studies of signals that arise from cells across the organism, and reveal the causal influences of these cells on brain function and behaviour. 
Conventional microbial opsins have not been sensitive enough to control a large organ such as the heart with the requisite power to facilitate behavioural studies within intact animals – , but the discovery of the highly sensitive and red-shifted pump-like channelrhodopsin ChRmine led us to consider the potential for noninvasive optogenetic control of deep tissue with minimal irradiance . We previously found that ChRmine enabled neuromodulation of deep brain circuits without intracranial surgery , which raised the possibility that this optogenetic tool might be broadly applicable to modulating biological processes across the entire body of large organisms such as mammals. Specifically, we hypothesized that ChRmine might allow on-demand deep-tissue control of cardiac pacing when targeted to cardiomyocytes without the need for direct exposure of the heart , . We first achieved cardiomyocyte-restricted expression by placing the ChRmine transgene under the control of the mouse cardiac troponin T promoter (mTNT), using the AAV9 serotype, which exhibits tropism for cardiac tissue , . Infection of cultured primary cardiomyocytes with AAV9-mTNT::ChRmine-2A-oScarlet enabled light-evoked contractions with irradiance as low as 0.1 mW mm −2 , consistent with the photosensitivity of ChRmine in neurons (Extended Data Fig. and Supplementary Video ). We next determined whether systemic viral gene delivery of ChRmine, despite the lower multiplicity of infection compared with transduction by direct local injection , , could allow noninvasive control of heart rhythms in wild-type mice. Retro-orbital injection of AAV9 enabled restricted expression of ChRmine in cardiomyocytes throughout the heart, with homogeneous expression in both ventricular and atrial walls and no off-target expression in other cardiac cell types (fibroblasts and neuronal ganglia) or in other organs (Fig. and Extended Data Fig. ). When pulsed 589-nm light was delivered through intact skin overlying the thorax of anaesthetized mice, we observed robust photoactivation of cardiac QRS complexes within a safe range of irradiance comparable to that used for transcranial optogenetics (Fig. ). Reliable cardiac rhythms were induced at even supraphysiological rates of up to 900 beats per minute (bpm), within the photophysical properties of ChRmine , with an immediate return to naturally paced sinus rhythm upon the cessation of light (Fig. ). This approach was not able to decrease heart rate below baseline levels, but afforded spatial control of cardiac rhythms by evoking either right or left ventricular pacing depending on the placement of the laser (Extended Data Fig. ). To translate this approach to mouse behaviour, we mounted a 591-nm micro-LED onto a wearable fabric vest to deliver light through intact skin overlying the chest wall (Fig. and Extended Data Fig. ). This integration of a molecular tool with accessible electronics enabled the initial demonstration of noninvasive and sustained ventricular pacing at experimenter-defined rhythms suitable for most behavioural assays in freely moving mice (Extended Data Fig. ). To test whether heart rhythms directly set by this optical pacemaker could influence behaviour, we optogenetically induced intermittent ventricular tachycardia (900 bpm for 500 ms every 1,500 ms) to mimic non-sustained arrhythmias that are observed during stressful contexts – , while shortening the duration of decreases in systolic blood pressure and avoiding incidental heating from light-delivery devices (Fig. 
and Extended Data Figs. and ). We first assessed the appetitive or aversive effects of optical pacing using a real-time place-preference (RTPP) assay (Fig. ). Mice spent an equal proportion of time on the paced and non-paced sides of the two-chamber arena, and showed no difference in locomotion compared to littermate controls—revealing that optically induced intermittent tachycardia was not intrinsically aversive and did not cause locomotor impairment (Fig. ). Optical pacing also did not modulate pain perception during a hot-plate test, with paced mice exhibiting comparable behavioural responses to control mice (Extended Data Fig. ). By contrast, when we tested for anxiety-related behaviour using an elevated plus maze (EPM) assay, the same mice exhibited limited exploration of the open (exposed) arms of the apparatus after optical pacing, compared to control mice, preferring to remain within the protected areas of the closed arms (Fig. ). Paced mice also avoided the centre area during an open field test (OFT) (Fig. ). We observed no effects from illumination alone in control (opsin-negative) mice, and baseline anxiety levels between control and virally transduced groups were similar (Extended Data Fig. ). Increased anxiety-like behaviour induced by optical pacing during the EPM and OFT assays was similarly observed in female cohorts (Extended Data Fig. ). We found that mice that received intermittent cardiac pacing within baseline ranges (660 bpm) rather than elevated (900 bpm) ranges did not exhibit behavioural differences compared to control mice during the EPM or OFT (Extended Data Fig. ). Because continuous ventricular pacing can have a long-lasting effect on animal health , , we also assessed for potential changes in baseline anxiety levels and mobility in mice that were subjected to longer-term treatments of intermittent tachycardia (one-hour sessions every other day for two weeks) and did not observe locomotor or behavioural differences in these mice when compared to control mice during the EPM and OFT (Extended Data Fig. ). We further investigated whether this context-dependent enhancement of anxiety-related behaviour could translate to a classical operant task, by using a trial-based variation of the Vogel conflict task in which water-restricted mice show willingness to seek a water reward even when the reward is coupled to a risk of mild shock (Fig. ). Mice that received cardiac pacing performed similarly to control littermates when allowed to freely press for water with no delivery of the aversive stimulus (Fig. ). However, when random shocks were introduced in 10% of trials, optically paced mice were found to suppress or terminate water-seeking altogether (Fig. ). These mice exhibited increased apprehension, as revealed both by an overall decreased lever-pressing rate and by an increased time to the next subsequent lever press after a shock trial—consistent with the heightened levels of anxiety that were observed during the EPM and OFT (Fig. ). By contrast, control mice exhibited reduced water-seeking only when the frequency of shock trials was increased to 30% (Extended Data Fig. ). This context-dependent influence of cardiac pacing on anxiety-like behaviour suggested that higher-order brain function was involved in the processing of interoceptive cues. We therefore next used the optical pacemaker to identify potential neural correlates and mechanisms of this observed behaviour along the heart–brain axis. 
First, transgenic TRAP2 mice, in which neurons with increased expression of the immediate early gene Fos can be labelled with tdTomato as a marker for neural activation , were used to perform a brain-wide screen to identify regions that were affected by optical pacing (Fig. and Extended Data Fig. ). Whole-brain tissue clearing , registration to a reference brain atlas and automated cell counting together revealed that a number of brain regions exhibited increased expression of tdTomato in optically paced mice. These included areas associated with the central autonomic network , , such as the insular cortex (including its visceral area (VISC), gustatory area (GU) and agranular insular area (AI)), prefrontal cortex (including the infralimbic area (ILA), prelimbic area (PL) and anterior cingulate area (ACA)) and brainstem (including the pons (P) and medulla (MY)) (Fig. ). Meanwhile, pointing to specificity, many other cortical regions that are not known to be involved in autonomic or interoceptive processing were not significantly activated, including primary sensory auditory (AUD) and visual (VIS) cortical areas, as well as the cerebellar vermis (VERM) and cerebellar nuclei (CBN) (Fig. ). Consistent with the TRAP2 mapping results, optical pacing similarly increased the endogenous expression of Fos mRNA in the posterior insular cortex (pIC) and in the brainstem (Fig. and Extended Data Fig. ). In particular, sensory relay circuits of the nucleus tractus solitarius (NTS), as well as noradrenergic neurons in the locus coeruleus (LC), which are involved in arousal and stress , also exhibited prominent Fos labelling (Extended Data Fig. ). We next investigated the cardiac-pacing-induced neural dynamics at single-neuron resolution using in vivo electrophysiology in awake mice (Fig. ). On the basis of the observed increased Fos expression in the pIC after cardiac pacing, and because the insular cortex is a key cortical hub for interoception , we used four-shank Neuropixels 2.0 probes to obtain rich multi-regional recordings of the pIC and surrounding regions (Fig. ). Notably, we observed pacing-evoked increases in insular activity at both the single-unit and population levels (Fig. ), whereby pIC neurons were acutely activated by cardiac stimulation with heterogeneous temporal dynamics and returned to basal levels of activity after pacing offset. By contrast, no significant changes in activity were recorded in control mice during photostimulation, ruling out potential light- or heat-induced representations in this region (Fig. ). We also observed a substantial diversity in temporal dynamics triggered by pacing in other recorded regions, including acute responses in the somatosensory cortex and delayed responses in the striatum during stimulation offset (Fig. ). Our data show that the pIC and other regions of the central autonomic network are distinctly engaged by optically evoked tachycardia , , and are in line with human neuroimaging studies that have correlated these brain areas with cardiac interoception – . To determine whether the anxiogenic circuitry recruited by cardiac pacing could be specifically modulated to influence behaviour, we next performed optogenetic inhibition with the 473 nm (blue light)-activated inhibitory channelrhodopsin iC++. 
We targeted the pIC (with well-established roles in both processing and regulating cardiac sensory signals and anxiety-related behaviours – ) and the medial prefrontal cortex (mPFC) (crucially involved in reward and aversion processing, as well as associated with cardiovascular arousal ). To perform simultaneous optogenetic inhibition of the cortex and optically paced tachycardia, we bilaterally injected AAVdj-hSyn::iC++-eYFP or AAVdj-hSyn::eYFP control virus and implanted fibre-optic cannulas into the pIC or mPFC of mice expressing ChRmine in the heart (Fig. ). As expected, in eYFP-expressing control mice, we found that 473-nm illumination of the pIC did not affect the reduction of water-seeking by optical pacing in trials with a risk of shock (Fig. ). By contrast, the same intervention in mice expressing iC++ instead of eYFP alone in pIC did reverse the reduction of water-seeking (Fig. ); all mice receiving pIC inhibition completed the water-retrieving task and exhibited decreased apprehension, with the time to next lever press after shock reduced to near baseline levels (Fig. ). Similarly, iC++ inhibition increased open-arm exploration time during the EPM assay in optically paced mice relative to eYFP controls (Fig. ). The attenuation of the anxiogenic effect of optical pacing exhibited specificity to pIC inhibition; inhibition of the mPFC did not decrease cardiac-associated anxiogenic behaviours relative to eYFP controls (Fig. ). To test for any direct anxiolysis from optogenetic inhibition of the pIC (that is, not through modulation of the cardiac-pacing effect), we performed the EPM assay and the lever-pressing task in the absence of cardiac pacing and with 30% shock trials to allow the detection of pacing-independent apprehensive behaviour (Extended Data Fig. ). Inhibition of the pIC without cardiac pacing did not increase open-arm exploration, affect lever-press suppression or influence heart rate (Extended Data Fig. ). Thus, pIC inhibition alone appeared to be insufficient to induce anxiolysis, consistent with previous reports . Together, these results are in line with a model in which the pIC is important for mediating the anxiety-related and apprehensive behaviours that arise from direct cardiac pacing. In this study, we have developed a method for noninvasive optogenetic control of specific cardiac rhythms during active behaviour. We show that the optically induced tachycardia was not intrinsically aversive, but rather elicited anxiety-like behaviours and apprehension in potentially risky environments. Although diverse mechanisms may contribute to this effect, we consider that anxiogenic effects of evoked tachycardia are not likely to be mediated through a reduction in blood pressure , as drugs that reduce systolic blood pressure tend to be anxiolytic (for example, propranolol and clonidine) or neutral (for example, Ca 2+ -channel blockers). Our observation of anxiogenesis in response to increased heart rates (900 bpm or 15 Hz) is in line with clinical observations that accelerated heart rates—but not other forms of altered haemodynamics (for example, increased heart rate variability)—are associated with panic and other anxiety-related disorders , . The altered rate, rather than the external nature of cardiac contraction timing, appears to be important; for example, we found that intermittent or asynchronous stimulation close to baseline heart rates at 660 bpm (11 Hz) did not result in anxiety-like behaviour. 
In further investigations of the mechanisms that underlie these behaviours, we found that optogenetic pacing activated the pIC, consistent with studies of cardiovascular control in anaesthetized rodents and neuroimaging studies of cardiac interoception and reflex control in humans – , including in the setting of panic and anxiety , . It remains unclear whether this pathway can be modulated by peripheral baroreceptive sensory neurons or other sensory mechanisms that detect changes in blood pressure – , and it is possible that there are additional ways in which cardiac viscerosensory information can be relayed to higher cortical areas , . The anxiogenic behavioural effects of cardiac pacing were attenuated during optogenetic inhibition of the pIC, suggesting that the insula has a causal role in integrating sensory information from the heart with a contextual assessment of environmental risk to produce adaptive behavioural patterns. Our findings support the idea that the insular cortex is involved in monitoring not only consummatory – but also entirely internal interoceptive states to instruct relevant behavioural responses, as predicted from human neuroimaging studies of cardiac interoception – . This study shows that cell-type-specific, temporally precise, noninvasive perturbation of organ-scale physiology is possible in fully intact, freely behaving mammals. Although we have applied our approach mainly to study animal behaviour over a period of minutes, future integration with miniaturized wireless devices may facilitate longer-term studies to modulate targeted populations of cells over days to weeks while alleviating the need for intimate contact with the light source. Furthermore, refinement in cell-type-targeting strategies may enable minimally invasive to noninvasive optogenetic dissection of specific cell types (for example, pacemaker cells , Purkinje fibres and cardiac ganglions ) to determine their effects on regulating cardiac electrophysiology and behaviour. Finally, our approach, which requires no specialized optoelectronics or surgery, has the potential for broad application to a range of physiological systems throughout the body—opening up numerous opportunities to explore the complex interactions between physiological systems in health, disease and treatment. Mice All animal procedures followed animal care guidelines approved by Stanford University’s Administrative Panel on Laboratory Animal Care (APLAC) and guidelines of the National Institutes of Health. Investigators were not blinded to the genotypes of the mice. Male and female wild-type C57BL6/J (JAX 0064) mice were used for most behavioural experiments unless specified otherwise, and all mice were 8–12 weeks old at the time of starting behavioural experiments. Mice were housed in plastic cages with disposable bedding on a standard light cycle with food and water available ad libitum, except when placed on water restriction. When on water restriction, mice were provided with 1 ml of water each day and maintained above 85% of baseline weight. Behavioural experiments were performed during the dark phase. Molecular cloning A 685-bp fragment containing the promoter region of the mouse troponin gene was amplified from a wild-type mouse using CGCACGCGTGAGGCCATTTGGCTCATGAGAAGC and CATGGATCCTCTAGAAAGGGCCATGGATTTCCTG primers, cloned upstream of ChRmine-p2A-oScarlet using MluI and BamHI sites in an AAV backbone, sequence-verified and tested for expression in dissociated neonatal cardiomyocytes. 
In vitro cardiomyocyte experiments Dissociated neonatal mouse cardiomyocytes prepared using the Pierce Isolation Kit (Thermo Fisher Scientific, 88289) were transfected with rAAV-mTNT::ChRmine-p2A-oScarlet (1 µl of 8 × 10 12 viral genomes (vg) per ml in 500 µl of medium). Three to five days after infection, individual cardiomyocytes were identified under a light microscope. Optical stimulation was provided by a Spectra X Light engine at 585 nm (LumenCore) coupled to a Leica DM LFSA microscope and synchronized with video recording at 100 fps using LabView software. Laser power leaving the imaging objective was measured with an optical power meter (Thorlabs PM100D). Videos were analysed for contraction using custom scripts in MATLAB. In vivo systemic viral delivery Wild-type mice aged three to four weeks were anaesthetized with isoflurane and rAAV-mTNT::ChRmine-p2A-oScarlet (2 × 10 11 vg per mouse) or vehicle was delivered by retro-orbital injection. Our selected titres were previously used for systemic viral transduction of ChR2 in the heart . A total volume of 60 µl 0.9% NaCl saline solution was injected into the right retro-orbital sinus using a 28G needle; mice were then allowed to recover on a warming pad before being returned to the home cage. Optical pacemaker in vivo characterization Mice were tested three weeks after injection of the pacemaker virus. Electrocardiography signals were collected using commercial instruments (Rodent Surgical Monitor+, Indus Instruments), with anaesthetized mice placed in a supine position and limbs placed in contact with electrode pads via a conductive gel. A 594-nm laser (LaserGlow) was attached to a fibre-optic patch cord (Thorlabs) terminating in a 200-µm-diameter, 0.39-NA fibre (Thorlabs), which was positioned against the chest. Optical power was adjusted using the laser’s built-in power modulator and measured with an optical power meter (Thorlabs) at the fibre tip. Stimulation was performed with a pulse width of 10 ms and an inter-pulse interval ranging from 120 ms (equivalent to 500 bpm) to 67 ms (900 bpm), controlled by a TTL signal generator (Master-8). Heart rate (bpm) was derived from the interval between successive R waves (RR interval) obtained from ECG recordings. Fidelity of photoactivated QRS complexes was quantified by counting the number of beats at a set frequency divided by the number of total beats measured during the middle 20 s of a 30-s stimulation period. Measurements of systolic blood pressure Mice were anaesthetized (1.5–2% isoflurane) and placed in the supine position with the chest shaved. Systolic blood pressure measurements were performed using a 1.4-F pressure-sensor-mounted Millar catheter (SPR-671, ADInstruments) and recorded using LabChart 7 Pro (ADInstruments). The catheter was inserted via the right carotid artery into the left ventricle. A 589-nm laser was used to deliver 240 mW mm −2 light across the intact chest with either constant or intermittent (500 ms ON, 1,500 ms OFF) optical stimulation at 900 bpm with a 10-ms pulse width for 30 s to assess optogenetic pacing effects on systolic blood pressure in real time.
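As a worked example of the quantification described above: a 67-ms RR interval corresponds to 60,000 ms / 67 ms ≈ 900 bpm. The sketch below computes heart rate and pacing fidelity from R-peak times, assuming the peaks have already been detected from the ECG; the ±5% rate tolerance is an assumption, as the exact matching criterion is not specified here.

```python
import numpy as np

def heart_rate_bpm(r_peak_times_s):
    """Instantaneous heart rate from R-peak times (in seconds): 60 / RR."""
    rr = np.diff(r_peak_times_s)
    return 60.0 / rr

def pacing_fidelity(r_peak_times_s, target_bpm, stim_start_s,
                    window=(5.0, 25.0), tolerance=0.05):
    """Fraction of beats in the middle 20 s of a 30-s stimulation whose
    instantaneous rate lies within +/- 5% of the target rate."""
    t = np.asarray(r_peak_times_s)
    bpm = heart_rate_bpm(t)
    mid = t[1:]                                   # time of each measured beat
    in_window = (mid >= stim_start_s + window[0]) & (mid <= stim_start_s + window[1])
    paced = np.abs(bpm[in_window] - target_bpm) <= tolerance * target_bpm
    return paced.mean()

# Example: perfectly paced beats at 900 bpm (one beat every ~66.7 ms).
beats = np.arange(0, 30, 60.0 / 900.0)
print(pacing_fidelity(beats, target_bpm=900, stim_start_s=0.0))  # 1.0
```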
Wearable optical pacemaker hardware Custom-made wearable optical stimulators were constructed using 3 × 4.5 mm 591-nm PC Amber Rebel LEDs (Luxeon LXM2-PL01-0000). 30 AWG flexible silicone wire (Striveday) was soldered to the LED pad, coated with electrically insulating, thermally conductive epoxy (Arctic Alumina), adhered to a copper sheet cut to 10 × 15 mm for thermal dissipation and subsequently glued to a fabric vest designed for freely moving mouse behaviour (Coulbourn A71-21M25). Wiring was held in place on the vest using hot glue and the free ends were inserted into a breadboard for stimulus control by an LED driver (Thorlabs LEDD1B T-Cube). The optical power was set to 160–240 mW mm −2 measured from the surface of the LED. Light was delivered at intervals consisting of a 10-ms pulse width at 15 Hz (900 bpm) for 500 ms with 1,500 ms OFF time by using either a Master-8 or an Arduino microcontroller synchronized to the behaviour recording software. Computer-aided design schematics were created with Onshape. Thermal measurements were performed using a FLIR C2 Compact thermal camera (FLIR) and the thermal profile at the surface of the micro-LED is plotted in Extended Data Fig. . Freely moving behaviour with pacemaker All mice were habituated to the experimenter and handled for at least three days, and in addition allowed to acclimatize to wearing the optical pacemaker hardware for at least five days, before behavioural experiments. Fur over the chest was removed (Nair) at least five days before behavioural experiments. Mice were briefly anaesthetized with isoflurane before the placement of the optical pacing vest and allowed to fully recover in the home cage (at least 1 h) before experiments. We used a stimulation protocol consisting of a 10-ms pulse width at 15 Hz (900 bpm) with 500 ms ON time and 1,500 ms OFF time to introduce intermittent tachycardia, or a 10-ms pulse width at 11 Hz (660 bpm) with a Poisson distribution to introduce increased heart rate variability. Mice received optical stimulation during the ON periods of the behavioural assay from the wearable micro-LED device in both control and ChRmine cohorts. No statistical difference in behaviour was observed between virally transduced and control groups at baseline, suggesting that there were no side effects from transgene delivery. No statistical difference in behaviour was observed in control groups before, during and after optical stimulation, suggesting that there were no effects from light delivery alone.
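For clarity, the burst timing of the intermittent-tachycardia protocol can be expressed as in the sketch below. This only generates the pulse schedule; in the experiments the timing was produced by a Master-8 or Arduino microcontroller, not by Python.

```python
def pulse_times(rate_hz=15, pulse_ms=10, on_ms=500, off_ms=1500, total_s=60):
    """Onset/offset times (ms) for intermittent optical pacing: bursts of
    10-ms pulses at 15 Hz (900 bpm) for 500 ms, repeated every 2 s."""
    period_ms = 1000.0 / rate_hz            # ~66.7 ms between pulse onsets
    schedule = []
    t = 0.0
    while t < total_s * 1000:
        burst_end = t + on_ms
        p = t
        while p < burst_end:
            schedule.append((p, p + pulse_ms))
            p += period_ms
        t = burst_end + off_ms               # 1,500 ms without stimulation
    return schedule

train = pulse_times()
print(len(train), "pulses;", train[:3])      # 8 pulses per 500-ms burst
```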
RTPP Mice were placed in a custom-built RTPP chamber (30.5 × 70 cm) on day 1 to determine their baseline preference for each side of the chamber. Behavioural tracking was performed using blinded automated software (Noldus Ethovision). On day 2, mice were stimulated whenever they were on one side of the chamber. Stimulation sides were randomly assigned and counterbalanced across mice. Each session lasted 20 min. EPM The EPM was made of grey plastic (Med Associates). Mice were gently placed in the closed arm of the EPM and allowed to freely explore the maze for a 5-min baseline ‘off’ period, followed by a 5-min ‘on’ period during which optical stimulation was delivered, and finally a 5-min ‘off’ period. Behavioural tracking was performed using blinded automated software (Noldus Ethovision) and the overall time spent in open arms was reported for each epoch. OFT Mice were placed in a 60 × 60-cm arena and allowed to freely explore during a 9-min session. Optical stimulation was delivered during the middle 3-min epoch. Movement was tracked with a video camera positioned above the arena. To assess anxiety-related behaviour, the chamber was divided into a peripheral and a centre (48 × 48 cm) region. Operant lever-pressing task Water-restricted mice were trained to lever press for a small water reward (around 10 μl water) while freely moving in an operant conditioning box containing a single retractable lever and a shock grid floor (Coulbourn). Mice were allowed to retrieve a maximum of 50 rewards per day, and sessions were terminated after all rewards had been retrieved or after 30 min. After each lever press, the lever was retracted for 5 s before extending again. After mice retrieved 50 rewards for at least 3 consecutive days (typically 2–3 weeks of training), they were allowed to proceed with stimulation experiments. On shock days, mice were given a 1-s, 0.1-mA foot shock after 10% of lever presses instead of water. Shocks were delivered in a pseudorandom order on lever-press trials 5, 13, 24, 31 and 44, and the time to the next lever press was measured as the time elapsed from these trials until the subsequent lever press. During stimulation experiments (both baseline and shock days), optical stimulation was delivered throughout the experiment. Water was delivered using a custom set-up consisting of a lick spout (Popper and Sons, stainless steel 18-gauge) and a solenoid (Valcor, SV74P61T1) controlled by a microcontroller (Arduino Uno R3). Licking was monitored using a capacitive sensing board (Arduino Tinker Kit) wired to the lick spout and interfacing with the microcontroller. Shocks were delivered using an 8-pole scrambled shock floor (Coulbourn). Behavioural stimuli (lever presentations and retractions, and shocks) were controlled with Coulbourn Graphic State software. The timing of lever presses and licks was also recorded at 5 kHz using data-acquisition hardware (National Instruments, NI PCIe-6343-X). TRAP2 labelling Fos 2A-iCreER (TRAP2; JAX 030323) mice were backcrossed onto a C57BL6/J background and bred with B6;129S6-Gt(ROSA)26Sor tm14(CAG-tdTomato)/Hze /J (Ai14; JAX 007908) mice, as previously described . Both male and female mice were used for TRAP2 labelling experiments. Mice were injected retro-orbitally with rAAV9-mTNT-ChRmine-oScarlet or a vehicle control at three to four weeks of age. Four weeks later, mice were handled and acclimatized to fresh clean cages and optical pacing equipment for at least seven days before labelling. On the day of labelling, mice were allowed to acclimatize to the optical pacing equipment for at least 2 h in a fresh clean cage with food and water, stimulated for 15 min (10-ms pulse width at 15 Hz for 500 ms every 1,500 ms) and left undisturbed for 2 h, at which time they were injected intraperitoneally with 5 mg kg −1 4-hydroxytamoxifen (Sigma) dissolved in normal saline containing 1% Tween-80 and 2.5% DMSO (as described previously , ). Mice were then returned to their home cage and were euthanized at least two weeks later to allow for full expression of the fluorophore. Whole-brain CLARITY and analysis Mice were perfused with ice-cold phosphate-buffered saline (PBS) and 4% paraformaldehyde (PFA), then post-fixed in a 1% CLARITY hydrogel solution (1% acrylamide, 0.003125% bis-acrylamide, 4% PFA and 0.25% VA-044 in 1× PBS) for 2 days. Tissue was degassed, polymerized at 37 °C for 4 h and washed overnight with 200 mM sodium borate containing 4% sodium dodecyl sulfate.
Operant lever-pressing task
Water-restricted mice were trained to lever press for a small water reward (around 10 μl of water) while freely moving in an operant conditioning box containing a single retractable lever and a shock grid floor (Coulbourn). Mice were allowed to retrieve a maximum of 50 rewards per day, and sessions were terminated after all rewards had been retrieved or after 30 min. After each lever press, the lever was retracted for 5 s before extending again. After mice had retrieved 50 rewards for at least 3 consecutive days (typically 2–3 weeks of training), they were allowed to proceed to stimulation experiments. On shock days, mice were given a 1-s, 0.1-mA foot shock after 10% of lever presses instead of water. Shocks were delivered in a pseudorandom order on lever-press trials 5, 13, 24, 31 and 44, and the time to the next lever press was measured as the time elapsed from each of these trials until the subsequent lever press. During stimulation experiments (both baseline and shock days), optical stimulation was delivered throughout the experiment. Water was delivered using a custom set-up consisting of a lick spout (Popper and Sons, stainless steel 18-gauge) and a solenoid (Valcor, SV74P61T1) controlled by a microcontroller (Arduino Uno R3). Licking was monitored using a capacitive sensing board (Arduino Tinker Kit) wired to the lick spout and interfacing with the microcontroller. Shocks were delivered using an 8-pole scrambled shock floor (Coulbourn). Behavioural stimuli (lever presentations and retractions, and shocks) were controlled with Coulbourn Graphic State software. The timing of lever presses and licks was also recorded at 5 kHz using data-acquisition hardware (National Instruments, NI PCIe-6343-X).
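The suppression readout (time from a shocked press to the next press) follows directly from the event timestamps; a sketch under the assumption that press times are available in seconds:

```python
import numpy as np

SHOCK_TRIALS = (5, 13, 24, 31, 44)  # 1-indexed lever presses followed by shock

def post_shock_latencies(press_times_s, shock_trials=SHOCK_TRIALS):
    """Latency (s) from each shocked lever press to the subsequent press."""
    t = np.asarray(press_times_s)
    return np.array([t[i] - t[i - 1]       # next press minus shocked press
                     for i in shock_trials  # trial k is index k-1; next is k
                     if i < len(t)])
```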
TRAP2 labelling
Fos2A-iCreER (TRAP2; JAX 030323) mice were backcrossed onto a C57BL6/J background and bred with B6;129S6-Gt(ROSA)26Sortm14(CAG-tdTomato)/Hze/J (Ai14; JAX 007908) mice, as previously described. Both male and female mice were used for TRAP2 labelling experiments. Mice were injected retro-orbitally with rAAV9-mTNT-ChRmine-oScarlet or a vehicle control at three to four weeks of age. Four weeks later, mice were handled and acclimatized to fresh clean cages and optical pacing equipment for at least seven days before labelling. On the day of labelling, mice were allowed to acclimatize to the optical pacing equipment for at least 2 h in a fresh clean cage with food and water, stimulated for 15 min (10-ms pulse width at 15 Hz for 500 ms every 1,500 ms) and left undisturbed for 2 h, at which time they were injected intraperitoneally with 5 mg kg⁻¹ 4-hydroxytamoxifen (Sigma) dissolved in normal saline containing 1% Tween-80 and 2.5% DMSO (as described previously). Mice were then returned to their home cage and were euthanized at least two weeks later to allow for full expression of the fluorophore.

Whole-brain CLARITY and analysis
Mice were perfused with ice-cold phosphate-buffered saline (PBS) and 4% paraformaldehyde (PFA), then post-fixed in a 1% CLARITY hydrogel solution (1% acrylamide, 0.003125% bis-acrylamide, 4% PFA and 0.25% VA-044 in 1× PBS) for 2 days. Tissue was degassed, polymerized at 37 °C for 4 h and washed with 200 mM sodium borate with 4% sodium dodecyl sulfate solution overnight. Tissue was then electrophoretically cleared for 3–7 days at 80 V (Life Canvas), passively cleared for an additional 2 days, then washed in PBS containing 0.2% Triton-X and 0.02% sodium azide at least 6 times at 37 °C. Cleared samples were refractive-index-matched using RapiClear (Sunjin Labs) and imaged on a custom-built light-sheet microscope using a 10× objective and 5-µm step size, or on a LaVision Ultramicroscope with a 0.63× zoom macro lens and a step size of 5 µm. Images were visualized using Vision4D (Arivis). For automated whole-brain registration and cell-segmentation analysis, images were loaded into Arivis Vision4D software, and neurons were segmented using a built-in supervised pixel-based classifier package based on Ilastik (‘Trainable Segmenter’). Segmentation masks were converted to binary cell masks. Raw light-sheet microscope images and cell masks were registered to a common reference space defined by the Allen Institute’s Reference Atlas and analysed in a region-based manner using our MIRACL package.
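After registration, region-level statistics reduce to counting cell-mask centroids per atlas label. A schematic sketch of that final step (the labelled atlas volume and centroid list are hypothetical inputs; this is not the MIRACL code itself):

```python
import numpy as np
from collections import Counter

def cells_per_region(centroids_vox, atlas_labels):
    """Count segmented cells per Allen atlas region ID.

    centroids_vox: (N, 3) integer voxel coordinates in atlas space.
    atlas_labels: 3D integer array of region IDs (0 = background).
    """
    c = np.asarray(centroids_vox, dtype=int)
    ids = atlas_labels[c[:, 0], c[:, 1], c[:, 2]]
    return Counter(int(i) for i in ids if i != 0)
```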
Induction of Fos after pacing
Mice were injected retro-orbitally with AAV9-mTNT-ChRmine-oScarlet or vehicle at three to four weeks of age. At four weeks after injection, mice were handled and acclimatized to fresh clean cages and optical-pacing equipment for a minimum of seven days before pacing experiments. On the day of labelling, mice were allowed to acclimatize to the optical-pacing equipment for at least 2 h in a fresh clean cage with food and water, stimulated for 15 min and euthanized 30 min after stimulation by perfusion with ice-cold PBS and 4% PFA under heavy anaesthesia. Tissue was post-fixed in 4% PFA on ice for an additional 24 h (brain) before staining and imaging.

In situ hybridization
Post-fixed brains were cut with a vibratome into 65-µm coronal slices. Heart and other organs were sliced at 200-µm thickness. Tissue slices were stored in 70% ethanol at −20 °C. Established protocols for third-generation hairpin chain reaction (HCR) in situ hybridization were used for coronal slices. In situ hybridization probes (ChRmine, Fos and Slc6a2) were designed by and purchased from Molecular Instruments. Hybridization was performed overnight in hybridization buffer (Molecular Instruments) at 4 nM probe concentration. The next day, slices were washed (three times in wash buffer at 37 °C, then twice in 2× SSCT at room temperature; 30 min each) and then incubated in amplification buffer. Dye-conjugated hairpins (B1-647, B3-488 and B5-546) were heated to 95 °C for 1 min and then cooled to 4 °C. Hairpin amplification was performed by incubating individual slices in 50 µl of amplification buffer with B1, B3 and B5 probes at concentrations of 240 nM overnight in the dark. Samples were stained with DAPI, washed three times with 5× SSCT for 30 min each and then equilibrated in exPROTOS (125 g iohexol, 3 g diatrizoic acid and 5 g N-methyl-d-glucamine dissolved in 100 ml deionized water, with the refractive index adjusted to 1.458) (ref. ), a high-refractive-index mounting solution, then imaged. Slices were imaged on a confocal microscope (Olympus FV3000).

Cardiac histology
At 48 h post-fixation, hearts were sectioned into 200-µm slices. For staining, slices were first incubated for 10 min in blocking solution (3% normal donkey serum (NDS) in PBST), followed by primary antibody staining overnight at 4 °C using the following antibodies: anti-vimentin (ab24525), anti-cardiac troponin I (ab188877) or anti-PGP9.5 (ab108986), purchased from Abcam and used at 1:200 dilution in blocking solution. Slices were then washed twice in PBST and stained with secondary antibodies (1 mg ml⁻¹) at 1:500 dilution for 3 h at room temperature, using F(ab’)2 anti-chicken 488 (703-546-155) and anti-rabbit 647 (711-606-152) purchased from Jackson ImmunoResearch Laboratories. The slices were then stained with DAPI and washed three times with PBST (30 min per wash). Sections were mounted onto slides with exPROTOS and imaged on a confocal microscope (Olympus FV3000).

Stereotaxic surgery for optogenetic experiments
For all surgeries, mice were anaesthetized with 1–2% isoflurane and placed in a stereotaxic apparatus (Kopf Instruments) on a heating pad (Harvard Apparatus). Fur was removed from the scalp, the incision site was cleaned with betadine and a midline incision was made. Sterile surgical techniques were used, and mice were injected with sustained-release buprenorphine for post-operative recovery. For intracranial optogenetic experiments, virus was injected using a 33-gauge bevelled needle and a 10-µl Nanofil syringe (World Precision Instruments), controlled by an injection pump (Harvard Apparatus). Five hundred nanolitres of AAVdj-hSyn::iC++-eYFP or AAVdj-hSyn::eYFP (5 × 10¹¹ vg ml⁻¹) was injected at 150 nl per min and the syringe was left in place for at least 10 min before removal. The following coordinates were used (relative to bregma): posterior insula, −0.58 (anterior–posterior (AP)), ±4.2 (medial–lateral (ML)), −3.85 (dorsal–ventral (DV)); mPFC, 1.8 (AP), ±0.35 (ML), −2.9 (DV). Optical fibres (0.39 NA, 200 µm; Thorlabs) were implanted 200 µm above the virus injection coordinates and secured to the cranium using Metabond (Parkell). Mice were allowed to recover for at least two weeks after surgery before behavioural testing.

In vivo electrophysiology
Mice with or without cardiac-targeted ChRmine expression were implanted with custom-made headplates, reference electrodes and cyanoacrylate-adhesive-based ‘clear-skull caps’ as previously described. After recovery, mice were water-restricted and habituated to head fixation, but were allowed to drink water to satiate thirst before recording sessions. Craniotomies were made with a dental drill at least several hours before recording sessions and were sealed with Kwik-Cast (World Precision Instruments). Exposed craniotomies were kept moist before, during and after recordings with frequent application of saline until sealed with Kwik-Cast. Before recordings, the mice were placed into the pacemaker vests and reliable pacing was confirmed by ECG under brief anaesthesia with isoflurane. The mice were then head-fixed and allowed to recover. Next, one or two (for simultaneous bilateral recordings) four-shank Neuropixels 2.0 probes, mounted on a multi-probe manipulator system (New Scale Technologies) and controlled by SpikeGLX software (Janelia Research Campus), were inserted through the craniotomies at variable angles (0–20°) depending on the recording geometry. Typically, the probes were aimed to touch the skull around the insula, which could be inferred from probe bending or changes in the local field potential, and were then retracted around 100 µm and allowed to sit in place for at least 15 min before recordings. Recordings were performed along each of the four shanks sequentially while mice received 5 s of optical stimulation (900 bpm (15 Hz)) with inter-trial intervals of at least 15–25 s. Probes were cleaned with trypsin between recording sessions. Spike sorting was performed with Kilosort 2.5 and auxiliary software as previously described. After the recordings, the brains were perfused, cleared, imaged and registered to the Allen Brain Atlas as previously described. Using the traces of the lipophilic dye CM-DiI or DiD (which coated the probes before each insertion) and electrophysiological features, the atlas coordinates of the recorded single units were determined. The spikes from single units were aligned to pacing onset, and the visualized peri-stimulus time histograms were calculated by subtracting the 5-s baseline firing rate, binning at 10 ms and filtering with a 500-ms half-Gaussian. The population-averaged firing rate of each region was calculated by combining z-scores (before filtering) over time for all single units in the region of interest. Specifically, we used a hierarchical bootstrap to combine data from multiple levels as previously described. For each condition, 100 bootstrap datasets were generated, and their mean and s.d. represented the mean and s.e.m. of the initial dataset. For statistical tests comparing the ChRmine and control groups, the one-sided P value for the null hypothesis (that the ChRmine firing rate minus the control firing rate is zero) was calculated as the fraction of the differences between pairs of resampled means (averaged over the time window of interest) that were smaller than zero.
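For concreteness, the resampling scheme can be sketched as follows. This is illustrative only: the assumed data layout (a mapping from mouse ID to an array of per-unit responses averaged over the window of interest) and the exact two levels (mice, then units) are our reading of the description above.

```python
import numpy as np

def hierarchical_bootstrap(units_by_mouse, n_boot=100, seed=0):
    """Resample mice with replacement, then units within each sampled mouse.
    The mean and s.d. of the bootstrap means estimate the mean and s.e.m."""
    rng = np.random.default_rng(seed)
    mice = list(units_by_mouse)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        vals = []
        for m in rng.choice(mice, size=len(mice), replace=True):
            units = np.asarray(units_by_mouse[m])
            vals.extend(rng.choice(units, size=len(units), replace=True))
        boot[b] = np.mean(vals)
    return boot

def one_sided_p(boot_chrmine, boot_control):
    """Fraction of resampled ChRmine-minus-control differences below zero."""
    return float(np.mean(boot_chrmine - boot_control < 0))
```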
Optogenetic freely moving behaviour
For optogenetic inhibition experiments with iC++, a 473-nm laser (Omicron Laserage) was used to deliver constant light at 2–3 mW measured at the fibre tip. Laser shutters were controlled using a Master-8 receiving synchronized input from the behaviour apparatus and control software (Ethovision).

Statistical analysis
The target number of subjects used in each experiment was determined on the basis of numbers in previously published studies. No statistical methods were used to predetermine sample size or to randomize. Criteria for excluding mice from analysis are listed in the Methods. Mean ± s.e.m. was used to report statistics. The statistical test used, the definition of n and multiple-hypothesis correction, where appropriate, are described in the figure legends. Unless otherwise stated, all statistical tests were two-sided. Significance was defined as alpha = 0.05. All statistical analyses were performed in GraphPad Prism 9.

Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information, details of author contributions and competing interests, and statements of data and code availability are available at 10.1038/s41586-023-05748-8.

Supplementary Video 1: In vitro optical pacing of cardiomyocytes. Representative brightfield movie of contracting cardiomyocytes expressing ChRmine. Optical stimulation (light ON) was applied at 5 Hz, with 10-ms pulse width at 585 nm.
High molecular diagnostic yields and novel phenotypic expansions involving syndromic anorectal malformations
df5d0cde-6767-40f4-9024-92f67658d179
9995338
Pathology[mh]
In this issue of EJHG, Belanger Deloge et al. present the diagnostic yield of exome analysis among individuals with anorectal malformations (ARM). ARM comprise congenital malformations of the hindgut and represent the most common malformations of the lower digestive tract. The overall prevalence ranges from two to five per 10,000 births. Mild phenotypes such as cutaneous perineal fistula may easily be missed, especially among affected females, which may partly explain reported male/female ratios of 1.2–1.6. In 2005, the Krickenbeck Conference on ARM developed standards for an "International Classification system" describing up to 10 distinct subtypes, ranging from anal stenosis to severe and complex cloacal malformations. All of these ARMs may occur in isolation (non-syndromic ARMs), in combination with one or more co-occurring anomalies, or as part of a genetic syndrome (syndromic ARMs). Previous studies found up to 75% of individuals to present with additional anomalies. Most of these co-occurring anomalies belong to the congenital anomaly spectrum of the VATER/VACTERL association, which refers to the nonrandom co-occurrence of at least three of the following component features: vertebral defects (V), ARMs (A), cardiac defects (C), tracheoesophageal fistula with or without esophageal atresia (TE), renal malformations (R) and limb defects (L). In accordance with the case classification guidelines for the National Birth Defects Prevention Study, individuals with ARM who have a chromosomal or single-gene disorder, a defined clinical syndrome, mental retardation and/or dysmorphisms have syndromic ARM. While about 10% of syndromic ARM might be explained by chromosomal disorders, the overall contribution of single-gene disorders remains elusive. To date, about 30 monogenic syndromes have been described with ARM as an inherent phenotypic feature, e.g., Baller-Gerold syndrome (#218600, RECQL4), Kabuki syndrome (#147920, KMT2D), Opitz-Kaveggia syndrome (#305450, MED12) and Townes-Brocks syndrome 1 (#107480, SALL1). In their study, Belanger Deloge et al. describe the exome analysis of 130 individuals with ARM, identified in a clinical database of about 17,000 individuals referred for exome analysis. In 45 of these individuals, a definitive or probable diagnosis was made (34.6%). Moreover, Belanger Deloge et al. identified eight phenotypic expansions of known genetic syndromes, comprising Helmsmoortel-van der Aa syndrome (#615873, ADNP), Bardet-Biedl syndrome 1 (#209900, BBS1), Rubinstein-Taybi syndrome 1 (#180849, CREBBP), Rubinstein-Taybi syndrome 2 (#613684, EP300), Fanconi anemia complementation group C (#227645, FANCC), Kabuki syndrome 2 (#300867, KDM6A), Luscan-Lumish syndrome (SETD2-related disorder) (#616831, SETD2) and Coffin-Siris syndrome 4 (#614609, SMARCA4). These findings suggest that single-gene disorders underlie a much larger proportion of syndromic ARMs than previously thought. Conversely, Belanger Deloge et al. suggest that tests designed to identify monogenic etiologies may have lower diagnostic yields in individuals with ARM in the context of the VATER/VACTERL association (22.8% vs 44.1%), and that epigenetic and environmental factors might play a more important role in the formation of the VATER/VACTERL association than previously thought.
However, to date, no consistent environmental risk factor has been identified that could be specifically responsible for the development of the VATER/VACTERL association. In addition, Solomon et al. provided several lines of evidence that dominant single-gene disorders may underlie a certain number of multiply affected families with VATER/VACTERL association, in which environmental risk factors are unlikely to play a significant role. Hitherto, the search for genetic risk factors for congenital birth defects has mostly focused on the protein-coding genome, neglecting the multiplicity of regulatory regions and the respective non-coding RNAs residing in disease loci. One reason why the hunt for non-coding RNAs and regulatory elements in the VATER/VACTERL association has proceeded with the hand brake applied might be the difficulty of providing functional proof of the anticipated genetic alterations in embryonic animal models. However, several examples suggest that these regions must not be neglected any longer. De Pontual et al. described hemizygous germline deletions of MIR17HG, encoding the miR-17~92 polycistronic miRNA cluster, as a cause of Feingold syndrome (#164280), an autosomal dominant syndrome comprising microcephaly, short stature and digital anomalies. Interestingly, less penetrant defects within the phenotypic spectrum of Feingold syndrome include learning disabilities of variable degree, esophageal and duodenal atresias (observed in 30–55% of cases), and cardiac and renal malformations, representing several component features of the VATER/VACTERL spectrum. Studying the genetic basis of congenital limb malformations, which also belong to the VATER/VACTERL association spectrum, Flötmann et al. identified several disease-causing CNVs that interfered with normal gene regulation by either altering enhancer dosage or changing the architecture of so-called topologically associating domains. Finally, Long et al. very recently showed that upregulation of miR-92a-2-5p is implicated in the formation of ARMs in a rat model. Hence, what is the hardest of all? What seems easiest to you: to see with your eyes what lies before you - the non-coding genome (Johann Wolfgang von Goethe).
Rheumatoid arthritis study of the Egyptian College of Rheumatology (ECR): nationwide presentation and worldwide stance
f5c346e1-8572-44db-8960-5a3d02415bed
9995404
Internal Medicine[mh]
Rheumatoid arthritis (RA) is a chronic systemic autoimmune disease primarily affecting the small synovial joints, usually symmetrically. Symptoms persisting for more than 6 months establish the diagnosis of RA. An intricate network of cytokines and cells triggers synovial cell proliferation and causes damage to both cartilage and bone. Laboratory testing alone cannot confirm the diagnosis of RA, which is commonly challenging; a complete clinical approach is necessary to establish the diagnosis and avoid debilitating joint damage. Yet autoantibodies are a hallmark of RA, with rheumatoid factor (RF) and anti-cyclic citrullinated peptide (anti-CCP) antibodies being the most acknowledged. Seropositive patients present a distinct disease course. With the recent improvements in diagnosis and the discovery of new autoantibodies, the group of seronegative patients is persistently shrinking. Using applicable disease activity measures in clinical practice can help to adopt treat-to-target strategies in RA patients. Increasing emphasis has been placed on early and rigorous diagnosis and treatment of RA, with the goal of reducing disability and mortality. Various therapeutic approaches are required to improve clinical outcomes in RA, although current management recommendations may still support a 'one-size-fits-all' treatment strategy. Early treatment with disease-modifying anti-rheumatic drugs (DMARDs) is standard, yet many patients progress to disability with substantial morbidity over time. The arrival of biologics has changed the treatment of RA owing to their remarkable impact on disease manifestations and their ability to diminish joint damage. With the development of biologics and Janus kinase (JAK) inhibitors, these agents are being used by a rising number of patients, including those with mild disease; however, cost and safety issues remain key determinants. Personalized medicine is needed to select specific treatment strategies for particular clinical or molecular phenotypes, and key aspects of RA such as epidemiology, clinical presentation and treatment options should be characterized. Given the limited information on the epidemiology and treatment patterns of RA across Egypt, the aim of the present study was to present the spectrum of RA in Egypt and compare it with other studies from around the world, to provide broad-based characteristics of this particular population.

Study population and design
This cross-sectional study included a large cohort of 10,364 adult RA patients (new and existing cases) fulfilling the American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) classification criteria, recruited from 26 specialized rheumatology departments and centers representing 22 major governorates across the country by members of the Egyptian College of Rheumatology (ECR) between September 2018 and December 2021. Patients with another rheumatic disease or below the age of 18 were excluded. Patients in the corresponding university teaching hospitals provided informed consent to participate, and the study was approved by the local ethics committee, in accordance with the 1964 Helsinki Declaration and its later amendments.
Measures and outcomes
Patients were subjected to full history taking and clinical examination. Juvenile-onset RA (JoRA) was considered in those who had developed the disease before the age of 18 years. Co-morbidities and manifestations were recorded as documented in the patients' files. The presence of rheumatoid factor (RF) and/or anti-cyclic citrullinated peptide (anti-CCP) antibodies was determined, and the use of medications to treat RA was described. The disease activity score (DAS28) and health assessment questionnaire (HAQ) were assessed.
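The DAS28 is a fixed formula over 28-joint counts, an acute-phase reactant and the patient global assessment. For orientation, a sketch of the widely used ESR-based version with its conventional activity bands (the study does not state here whether the ESR- or CRP-based variant was used, so the ESR formula below is an assumption for illustration):

```python
import math

def das28_esr(tender28, swollen28, esr_mm_h, patient_global_0_100):
    """DAS28-ESR from 28-joint tender/swollen counts, ESR (mm/h)
    and the patient global assessment on a 0-100 scale."""
    return (0.56 * math.sqrt(tender28) + 0.28 * math.sqrt(swollen28)
            + 0.70 * math.log(esr_mm_h) + 0.014 * patient_global_0_100)

def activity_band(score):
    """Conventional DAS28 cut-offs."""
    if score < 2.6:
        return "remission"
    if score <= 3.2:
        return "low"
    if score <= 5.1:
        return "moderate"
    return "high"
```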
Statistical analysis
Data were collected on a standardized data sheet and stored in an electronic database. Data missing completely at random (MCAR), as for RF, anti-CCP and anti-nuclear antibody (ANA) positivity, were handled by a complete-case analysis (CCA), in which all persons with missing values were excluded from the analysis of that test; imputation was not used. The Statistical Package for Social Sciences (SPSS) version 25 was used. Variables were presented as frequencies and percentages or as mean and standard deviation. Comparisons were performed using the Chi-square test, Mann-Whitney U test or analysis of variance (ANOVA). A p value < 0.05 was considered significant.

The study included 10,364 RA patients recruited from 22 governorates across Egypt. Their mean age was 44.8 ± 11.7 years. They were 8750 females and 1614 males (F:M 5.4:1). Characteristics of the patients and gender differences are presented in Table . 209 (2%) had JoRA. Steroids were received by 71.3% of the patients. DMARDs were received in the following descending order of frequency: methotrexate (MTX) (78%), hydroxychloroquine (HCQ) (73.6%), leflunomide (LFN) (54.8%), sulfasalazine (SAZ) (37.2%), cyclophosphamide (CYC) (2.4%), azathioprine (AZA) (2%), cyclosporine A (CSA) (0.5%) and mycophenolate mofetil (MMF) (0.46%). Steroids and DMARDs received were comparable between genders except for HCQ (males 77.6% vs females 73%; p = 0.002). Biologic therapy was received by 11.6%, with a significantly higher frequency in males than in females (15.7% vs 10.9%, p = 0.001). The biologic therapies received were etanercept (30.4%), adalimumab (18.4%), golimumab (14%), rituximab (7.9%), infliximab (3.3%), tofacitinib (1.6%), certolizumab (1%), upadacitinib (0.8%), baricitinib (0.39%), abatacept (0.39%) and undefined (17.8%). Patients also received low-dose aspirin (4.6%), colchicine (1.3%) and oral anticoagulants (1.1%). Variables according to geo-location are presented in Table and in Figs. and . The age at onset, gender distribution, disease activity, and RF and anti-CCP positivity varied significantly across regions. The lowest age at onset, F:M ratio, and RF and anti-CCP positivity were found in Upper Egypt, while the highest DAS28 was reported in the Canal cities and Sinai. The HAQ was significantly higher in Upper Egypt, with the least disability in the Canal cities and Sinai. Biologic therapy intake was highest in Lower Egypt (46.3%), followed by the Capital (33.1%), Upper Egypt (20.3%) and the Canal cities and Sinai (0.2%) (p < 0.0001).

This cross-sectional study presented the socio-demographic, clinical and therapeutic profile of 10,364 RA patients recruited across Egypt. In the present work, the mean age at onset of RA in Egypt was 38 years and was significantly lower in females. The F:M ratio was 5.4:1. The age at onset, gender distribution and disease characteristics of RA patients in countries from different continents were compared with the current study (Table ). Interestingly, the age at onset was lower than that in other countries and nations, while it was comparable with that from Arab countries and Turkey. A potential explanation could be the lower average age of the populations in Middle Eastern countries; however, genetic and environmental factors cannot be excluded. The higher F:M ratio was comparable to large registries from Latin America, raising the question of an increasing shift in the ratio. Once more, the BMI of the RA patients in the current study was similar to that reported from Turkey. RA, the most common inflammatory rheumatic disease, is no exception to the female predominance of autoimmune diseases, with a F:M ratio > 4 before 50 years of age and < 2 after the age of 60. Furthermore, with the increasing incidence of spondyloarthritis (SpA) worldwide, it may be that more male patients were misdiagnosed as having RA. The misdiagnosis of SpA as RA leads to a delayed SpA diagnosis and inadequate therapeutic outcomes, and typical SpA-related clinical manifestations have been reported in RA patients. The advancements and accessibility of imaging modalities pave the way for a more precise classification. In this work, associated bronchial asthma and thyroid dysfunction, a family history of RA, Sjögren's syndrome, fibromyalgia syndrome and disease activity were significantly increased in females, and a lower frequency of females was receiving biologic therapy. On the contrary, males were significantly more often smokers and had more renal manifestations, higher serum uric acid, and more frequent positivity of RF and anti-CCP. The various clinical manifestations reported in this work were further compared with those from other countries. Interstitial lung disease (ILD) is a well-known, potentially life-threatening complication of RA.
The ongoing appraisal of the complex relationships between smoking, COPD and other factors in RA-associated ILD is important. In this work, the reported frequency of smoking in RA patients (8.2%) was lower than in other studies from the UK (21.8%), the European Union (EU) and Canada (17.6%), as well as Turkey (16.8%). Neurological manifestations were reported at a low frequency. The frequencies of depression and anxiety were double in early RA compared with long-standing disease; RA patients with short disease duration and functional limitation were more likely to suffer from depression and anxiety. In this study, the reported frequency of cardiovascular manifestations was low. However, there is a considerable rise in mortality and morbidity in RA due to cardiovascular disease (CVD). The augmented risk of heart disease is related to disease activity and chronic inflammation, with traditional risk factors and RA-related characteristics playing a central role. RA patients had higher rates of obesity than the general population, and this was strongly associated with physical dysfunction. The BMI in this work was higher than that reported from other nations such as the UK and the EU. Compared with osteoarthritis (OA), RA patients were significantly more frequently diabetic and smokers but had a lower prevalence of obesity and dyslipidemia. The frequency of metabolic syndrome in RA patients is doubled and raises the risks of stroke and heart disease. The frequency of diabetes mellitus in this work was similar to the USA and Latin American registries, CVD was comparable to the USA CORRONA study, and chest involvement was in line with the Korean registry (KORONA). In this work, RF was positive in 73.7% while anti-CCP was positive in 66.7%. The frequency of RF was comparable to that from a large Colombian study on 68,247 cases and to the CORRONA study from the USA. It was lower than in Asian studies from Korea (86.8%) and China (84.7%). Moreover, the frequency of anti-CCP positivity was lower than that reported in a Korean work (83.9%) but higher than in the registries from Colombia (24%) and the EU (32.7%). Combined detection of anti-CCP and RF improves the diagnostic efficiency of RA, providing a potential strategy for early clinical screening. The frequency of remission is three times higher in seronegative patients with RA; however, the rate of remission does not depend on serological status, as almost two thirds of patients achieve remission in the first 6 months of DMARD therapy, and anti-CCP and RF titers at disease onset do not influence remission. There was moderate disability in the present cases as measured by the HAQ. Functional capacity (physical and psychosocial) is a central treatment aspect to consider when the RA therapeutic strategy is personalized. The average HAQ score reported in a population-based study was 0.49, and in RA it was 1.2. The disease activity score in the present work was similar to that reported from the EU, higher than that from Turkey and the USA, and lower than that from the UK and China. The medications received by the patients of the current study were diverse. In this study, more males were receiving HCQ and biologic therapy, and males had lower disease activity. In early RA, targets can be achieved when the baseline level of disease activity is low, with male gender and shorter disease duration. In this work, MTX was received by 77.9%.
Using MTX before initiating biologic therapy may contribute to cost-effective RA care. Variables related to MTX failure, such as female gender, higher BMI, smoking, higher disease activity and diabetes, can aid in predicting the disease process and treatment outcome. 54.8% of cases received leflunomide, while 37.1% received sulfasalazine; leflunomide is comparable to sulfasalazine in MTX-failed RA patients, with a similar safety profile. 11.6% of the current patients were on biologics, whereas in Korea a six-fold higher usage was reported. Across the country, there were significant differences in the age at onset, gender distribution, disease activity, and RF and anti-CCP positivity. An inverse causal link between educational attainment and the risk of RA has been suggested. National registries are essential to direct current practice, yet RA registries in the Middle East and North Africa (MENA) region are rarely presented. In a study from Morocco on 225 RA cases, the age of onset (44 years), F:M ratio (7.1:1), DAS28 (5.2 ± 1), RF positivity (90.5%) and anti-CCP positivity (88.8%) were higher than the current findings; however, those patients were all receiving biologic therapy. In a study on 300 RA patients from Palestine, treatment with biologic therapy, younger age, having work, higher income, absence of morning stiffness and absence of co-morbidities were significantly associated with better quality of life and less disability. In a work from a tertiary care hospital in KSA on 288 RA patients, the majority (88%) were females, with a F:M ratio of 7.3:1. In agreement with this work, hypertension was the most common co-morbidity, followed by diabetes, and almost all of their patients had high disease activity at the time of presentation. Compared with patients in Western countries, South Korean patients with RA, even those with better physical function, seem to have a lower quality of life. In a study conducted by the Korean College of Rheumatology (KCR) on 2422 patients with a F:M ratio of 6.8:1, 19.4% were overweight and 16.1% obese, 13.6% smoked, 11.6% had dyslipidemia, 28% were hypertensive and 4.5% were diabetic; RF and anti-CCP were positive in 82.6% and 86.9%, respectively. The mean DAS28 was 4.7 ± 1.6; 79.9% were receiving steroids, 93.2% MTX, 68.8% HCQ and 46.3% LFN, while 61.7% were on biologics. In a large RA registry in the UK of 27,607 patients, 70.6% were female (F:M 2.4:1) and their mean BMI was 27.3. In a study of 11 registries from 9 European countries (France, Sweden, the Czech Republic, the UK, Denmark, Italy, Germany and Portugal) on 130,315 RA patients, the age at onset for biologic-naive patients was 56.4 years with a F:M ratio of 2.6:1, and for those who received anti-TNF therapy the age at onset was 46.5 years with a F:M ratio of 3:1. In a large nationwide US study, the F:M ratio was 2.4:1; obesity was present in 15.1%, diabetes in 20.4% and dyslipidemia in 48%. Although this is currently the largest dataset of RA patients from across Egypt, there is a desperate need for effective and applicable national management strategies and guidelines. It seems that, across the country, the diagnostic tests are still not strictly considered for all patients. Although the medications received are mostly alike among the major cities, the intake of biologic therapy is dispersed, being higher along a north-to-south gradient. In conclusion, the spectrum of the RA phenotype in Egypt is variable across the country, with an increasing shift in the F:M ratio.
The age at onset was lower than in other countries.
Digitally supported shared decision-making and treat-to-target in rheumatology: a qualitative study embedded in a multicenter randomized controlled trial
dd11308a-2885-4a10-897e-1fc98fe1cba8
9995411
Internal Medicine[mh]
Rheumatoid arthritis (RA) is a chronic inflammatory disease that requires lifelong medical care. Patient-reported outcomes (PRO) represent a cornerstone in the management of RA patients. This is exemplified by the patient's global self-assessment (PGA) of disease activity, rated on a 0–100 scale where 100 means maximal activity, which forms part of the composite gold standard for evaluating disease activity, the Disease Activity Score 28 (DAS-28). Besides the PGA, a variety of RA-specific PRO are used in clinical routine. Some PRO, such as the Rheumatoid Arthritis Impact of Disease score (RAID) and the Rheumatoid Arthritis Disease Activity Index-Five (RADAI-5), cover different facets of the disease and have validated cut-off values for low, medium and high disease activity, potentially allowing patients and clinicians to get a quick overview of disease activity. Currently, disease activity is mainly evaluated during face-to-face appointments and no PRO are collected in between visits. Due to the declining number of rheumatologists, the recommended tight face-to-face monitoring is increasingly difficult to implement in clinical routine. Additionally, RA disease activity fluctuates, often causing disease flares in between appointments. Electronic PRO (ePRO) enable continuous remote monitoring and could improve monitoring of disease activity by capturing changes that would otherwise be overlooked. Shaw et al. further reported that discussion of ePRO data leads to an improved patient-provider relationship. A major challenge, not limited to RA and rheumatology, is poor ePRO adherence. Poor adherence can stem from a multitude of factors, including but not limited to lack of perceived benefit, age and high disease activity among patients, or lack of integration into clinical workflows and electronic health records among professionals. In a recent review, Wiegel et al. concluded that mixed-method studies are needed to optimize adherence to tele-monitoring with ePRO. Despite the advantages of ePRO, only a minority of rheumatologists are currently using them in clinical routine in Germany. The aim of the prospective multicenter AORTA (AbatOn for RheumaToid Arthritis) trial was to investigate the benefit of using an ePRO web app, ABATON RA, to support shared decision-making (SDM) and treat-to-target (T2T) in RA patients. The aim of this embedded qualitative study was to investigate patient and physician experiences and the perceived drawbacks and benefits of using the ePRO web app ABATON RA to digitally support SDM and T2T. The participants of this qualitative study were RA patients randomized to the intervention group (IG) of the AORTA trial and physicians in rheumatology care.

AORTA trial
AORTA is a three-armed, partially blinded multicenter randomized controlled trial (RCT) with four visits, each 3 months apart. The IG patients used the web app ABATON RA to implement ePRO, SDM and T2T. At baseline, rheumatologists presented their patients with 6 ePRO to choose from, including the RAID, RADAI-5, Health Assessment Questionnaire (HAQ), Funktionsfragebogen Hannover (FFbH), pain (0–10 numeric rating scale) and disease activity (PGA) (0–10 numeric rating scale), and together with the patient set a therapy goal using this ePRO for the next visit (see Fig. ). Each week the patient is reminded to complete the selected ePRO, and these data are then discussed at the next face-to-face visit with the treating rheumatologist.
The participants of this qualitative study were RA patients who had been randomized to the intervention group (IG) of the AORTA trial and physicians in rheumatology care.

AORTA trial
AORTA is a three-armed, partially blinded multicenter randomized controlled trial (RCT) with four visits, each 3 months apart. The IG patients used the web app ABATON RA to implement ePRO, SDM and T2T. At baseline, rheumatologists presented their patients with six ePRO to choose from, namely the RAID, the RADAI-5, the Health Assessment Questionnaire (HAQ), the Funktionsfragebogen Hannover (FFbH), pain (0–10 numeric rating scale) and disease activity (PGA; 0–10 numeric rating scale), and, together with the patient, set a therapy goal based on this ePRO for the next visit (see Fig. ). Each week, the patient is reminded to complete the selected ePRO, and these data are then discussed at the next face-to-face visit with the treating rheumatologist. In the placebo group (PG), patients had access to a sham version of the app to collect two sleep ePRO, the Regensburger Insomnie Skala (RIS) and the Epworth Sleepiness Scale (ESS). In the control group (CG), patients had no access to the app. Physicians had no access to ePRO results in the PG and CG, and no ePRO-based SDM or T2T was carried out.

Web-App: ABATON RA
The ABATON RA app is a medical device developed and maintained by ABATON GmbH (Berlin, Germany). All digitally administered questionnaires, forms and monitoring instruments were pre-configured. Patients are invited by their local care team via a short messaging system (SMS) message containing a personalized link to set a password. Using this password and their mobile phone number, patients can in principle log in to their account on any device, as ABATON RA is a web app; patients were, however, instructed to use the app on their smartphones only. Patients can choose whether they want to be reminded of new questionnaires via push notifications or SMS. A reminder logic prompts patients on 3 consecutive days if they have not filled out the questionnaires and stops as soon as the due questionnaire is completed. Results are immediately available to the patient (Fig. a) and to the treating healthcare team via a web-based dashboard (Fig. b), including graphical trends.
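To make the reminder behaviour described above concrete, the following is a minimal illustrative sketch of such a logic, assuming a once-daily scheduler tick; the names used here (Questionnaire, daily_tick, send_reminder) are hypothetical and not taken from the ABATON RA code base:

```python
from dataclasses import dataclass
from datetime import date

MAX_REMINDERS = 3  # remind on 3 consecutive days, then stop

@dataclass
class Questionnaire:
    due_date: date           # day the weekly ePRO becomes due
    completed: bool = False
    reminders_sent: int = 0

def daily_tick(q: Questionnaire, today: date, send_reminder) -> None:
    """Run once per day: remind until completion or until 3 reminders were sent."""
    if q.completed or q.reminders_sent >= MAX_REMINDERS:
        return  # completing the questionnaire (or exhausting reminders) stops the loop
    if today >= q.due_date:
        send_reminder()  # delivered as push notification or SMS, per the user's setting
        q.reminders_sent += 1
```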
Qualitative study
To explore user experiences with the app, we conducted qualitative phone interviews with IG RA patients from one center (University Hospital Erlangen, Germany) and with participating physicians from three German centers (University Hospital Erlangen, Hospital Bad-Bramstedt and Rheumatological-immunological medical practice Templin) who had used the software for at least 3 months. Participants were selected using purposive sampling to obtain a heterogeneous sample with regard to the age, sex, education and profession of the patients interviewed. IG patients selected as potential interview participants were asked during routine appointments whether they were interested in taking part in a qualitative phone interview. Participants did not receive financial incentives. All patients approached agreed and provided written informed consent. The principal investigator passed the patients’ contact information to the study team for the qualitative study. Interviews were conducted by two health services researchers (F.M. and S.M.) and one medical student (K.H.) using two analogous open-ended interview guides developed to elicit patients’ and physicians’ perspectives on app-supported rheumatology care. The interview guides were developed by F.M., one physician (J.K.) and one app developer (M.G.). The interview guides (Supplemental Material 1 and 2) covered three main topics: (1) study procedure and participants’ experiences; (2) description of the app and its usability; and (3) impact of the app on rheumatology care. Initial exploratory questions were then specified by follow-up questions. We conducted pilot interviews to test and refine the interview guides; no revisions were necessary. Additional sociodemographic data were collected, including gender, age, diagnosis, education and occupation or medical practice. To reduce the risk of infection and lower participant burden, the interviews were conducted by telephone. The interviews were audio-recorded and transcribed verbatim. Qualitative analysis of the interviews was performed iteratively by F.M. and S.M. based on Kuckartz’s structured qualitative content analysis using MAXQDA software (Verbi GmbH). Relevant text passages from the interview material were coded according to a deductive–inductive procedure. Categories were developed based on the research questions and merged into a coding tree, which was then discussed by the members of the study team. At this stage, data collection had already been completed. The coding tree was applied to the entire interview material and partially extended with new codes that emerged from the material. To ensure traceability, F.M. and S.M. independently applied the final coding tree (Supplemental Material 3) to the entire material. For the presentation of the results, representative quotes from the transcripts were selected, translated into English and included in the manuscript; long quotes were visually set off from the main text.
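Purely as an illustrative sketch of how a deductive–inductive coding tree can be represented and applied to transcript segments, the structure below uses invented category names and toy keyword triggers standing in for the coders’ judgement; it does not reproduce the study’s actual coding tree (see Supplemental Material 3):

```python
# Hypothetical coding tree: deductive top-level categories derived from the
# research questions, each with subcodes (some added inductively later).
coding_tree = {
    "app user experiences": ["usability", "reminders"],
    "perceived drawbacks": ["standardization", "digital divide"],
    "perceived benefits": ["disease overview", "time savings"],
}

def code_segment(segment: str, keyword_map: dict) -> list:
    """Assign every code whose trigger keyword occurs in a transcript segment."""
    text = segment.lower()
    return [code for keyword, code in keyword_map.items() if keyword in text]

# Toy triggers; in the study, codes were assigned manually in MAXQDA.
keyword_map = {"easy to use": "usability", "reminded": "reminders"}
print(code_segment("Patients described the app as easy to use.", keyword_map))
# -> ['usability']
```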
Participant characteristics
From August to December 2021, we conducted qualitative interviews with 10 RA IG patients, and from February to May 2022 with five physicians (see Table ). The mean age of the interviewed patients and physicians was 51 (range 27–73) and 34 (range 28–49) years, respectively. Half of the interviewed patients were female (5/10; 50%), and they reported diverse occupational and educational backgrounds. Interviews with patients lasted an average of 28 (8–77) minutes. Most physicians were assistant physicians (n = 4) and female (3/5; 60%); their interviews lasted an average of 25 (10–47) minutes.

Themes
The analysis followed three key themes: (i) app user experiences; (ii) perceived drawbacks of app-supported rheumatology care; and (iii) perceived benefits of app-supported rheumatology care. The results for each key theme are presented separately for patients and physicians.

App user experiences
Patient perspective
Patients described the app as easy to use.
They highlighted the user interface of the app: “I find the app brilliant, because the questions are presented beautifully. Not flashy and colorful, but really neutral.” (P8, pos. 123). Patients reported that they use the app primarily after being reminded that a new questionnaire is available for completion (“when I have to fill out a questionnaire again.” (P5, pos. 18)) or to track their disease progression (“It's always interesting to take a look: 'How was it in March or April?'” (P2, pos. 234)). Most patients described the app as helpful in gaining an overview of their own disease activity. Some reported that the use of the app gave them a feeling of support or security, took away fear or had a motivating effect: “The ABATON RA app is such a hold for me, it makes me feel calmer. Because I see that [the disease] is slowing down, it’s working, [the medication] is kicking in and it's great and everything's in the green.” (P8, pos. 128). Yet, most patients reported opportunities for improvement. Some of the questions asked (e.g., in the weekly Funktionsfragebogen Hannover (FFbH)) were difficult to understand or ambiguous: “'Is there any difficulty in turning a faucet on and off?' Well, I do not know where there are still faucets nowadays that you have to turn on and off.” (P10, pos. 25). Other patients criticized the high degree of standardization and repetition of the questionnaires, while calling for more specific or differentiated answer options, e.g., to link changes in the disease state to lifestyle changes: “Last time the doctor told me that my score was very good in July. And I was in rehab, but he can't know that. I can't enter it anywhere. It wouldn't be bad if you could simply enter something like that as a patient.” (P6, pos. 103). Finally, patients proposed that users themselves should be able to determine the times at which they are reminded of questionnaires.

Physician perspective
Overall, the interviewed physicians described the app as well-structured and easy to use, while some mentioned initial difficulties: “I had a few technical difficulties at the beginning. Those diminished, once you understand a little bit how it works.” (HCP 2, pos. 45). Physicians reported that they use the app to prepare for the consultation, after the consultation, to follow up on disease status after medication changes and, most importantly, during the consultation: “I ultimately rebuild the whole consultation, usually I always ask ‘How was it last week?’. I have a certain pattern and now with ABATON RA it's completely different. You can start by saying ‘Let's take a look at the course of your illness or the last three months.’ and then look at the screen together. So just turning the screen around is something completely new that I've never done before. (…) So it's somehow easier to get into a conversation with the patient and the patients also feel better understood, because often you don't see the symptoms at the doctor's visit. And then the patient can show you: ‘Look over here, two weeks ago I felt bad and then again four weeks ago’. Thus he can also refer to it. So I think the patient also feels better understood if he can show you something, as if he then sits in front of me with a bad conscience and says ‘yes, I'm currently doing well, but three weeks ago I was doing badly, but I can't really show you anything now, like that’.” (HCP 1, pos. 26–27).
The participating physicians considered the app an additional aid that helps most patients gain a better overview of their disease and increases treatment adherence and motivation, while other patients lacked the motivation to use the app: “And yes, then the patients simply do not fill out these questionnaires and then mention technical difficulties as an excuse. But then it works again during the consultation. So technical difficulties are often used as a bit of an excuse for not using it.” (HCP 1, pos. 37). This might also be due to the high level of standardization and repetition of the questions: “Many patients complain that it's always the same, but that's exactly the whole purpose of the app, isn't it? And that works super reliably.” (HCP 2, pos. 23). Physicians reported that the use of the app ultimately redefines the roles in the relationship between patients and their treating medical staff, as patients can follow and audit the medical documentation: “In other words, patients do take a look at what you document in the app. And when it comes to medication, for example, you really do have access to the same data. In standard care, patients have no access to our medical documentation. And [with the app] we really do share the same data. So it's just very unusual, because normally you're somehow a bit untouchable. You can document whatever you want, and it's really the first time that ‘I don't just check the patient’ to see whether he's taken his medication or had any vaccinations. Often one is nevertheless in such a control function, but with the app also vice versa; whether I have also documented the whole thing cleanly.” (HCP 1, pos. 47).

Perceived drawbacks of app-supported rheumatology care
Patient perspective
Patients also reported limitations of app-supported ePRO documentation. For example, due to the high degree of standardization, the app was perceived as too superficial to encourage self-reflection: “But as it is right now, the app is rather for regular communication with the doctor, so that he knows how I am, not bad, but for self-reflection it is not enough for me.” (P6, pos. 89). This patient therefore perceived the benefit of the app as lying only on the doctor's side: “Yes, it doesn't change the care at all, I think. (…) The doctor looks at it. I believe that he knows everything better. And then I think it's important for the doctor and not for me. He has to explain it to me. That's how I see it.” (P6, pos. 60). One participant described that constantly pursuing one's own disease activity can potentially lead to negative thoughts: “If you dwell too often on your own disease activity, it can be associated with negative thoughts. You may be more likely to get into such a negative vortex. [The app] will keep reminding you of your disease.” (P9, pos. 101). Another reported drawback was that the entered information might be inaccurate due to recall bias: “But ultimately it's like this, you tend to answer from the gut: Yes, I'm fine today. One tends to remember less about how it was five or seven days ago.” (P7, pos. 35). While patients reported that particularly individuals with a smartphone, technical skills, high health literacy and disease knowledge would be suited to using the app, persons who do not meet these characteristics are left out: “If someone does not work with a smartphone, he has no idea how to do it, he needs guidance.” (P3, pos. 42).
This also applies to patients who do not have access to technical devices: “And there are also a lot of people who simply don't have the money. (…) Having a compatible device actually also involves a lot of money” (P6, pos. 36).

Physician perspective
A central drawback reported by the interviewed physicians was the risk of becoming too focused on app data and biased before the appointment, no longer perceiving patients and their needs holistically: “Of course, you have to be careful not to become too much of a data junkie and then ignore everything else, right?” (HCP 5, pos. 61). In line with the patient perspective, interviewed physicians reported that ePRO are subjective and prone to error: “You just have to answer the questionnaires honestly. There are certainly some who are only doing it to please me and perhaps don't take the answers seriously.” (HCP 1, pos. 39). The interviewed physicians considered it a limitation that the app was only available for rheumatoid arthritis. Moreover, general limitations of ePRO were pointed out: “The problem, which always exists, is that you can't really differentiate on the basis of the questions and the app, is it rheumatoid arthritis or is it fibromyalgia, which a lot of patients have.” (HCP 3, pos. 33). Another reported drawback was that ePRO app use is not incorporated into the remuneration system for rheumatological care in Germany: “So it would not be feasible at the moment because, of course, if the patients don't visit because they're doing well, we can't earn any money or bill them for anything. And also this monitoring of patient input is currently not yet remunerated, i.e., only minimally.” (HCP 1, pos. 57). In addition, one physician reported that, as more and more mHealth applications find their way into clinical routine, integrating them all into everyday practice and electronic health records is a challenge: “That's always my nightmare. Every patient waves his smartphone because he has collected some kind of data, and then I have to compare and evaluate them all. That's my nightmare, of course. That's why I have an interest in making sure that the data is interoperable and can be easily merged, right? (...) Well, that starts with the interface definition. This must somehow be integrated into my practice management system. And then, as I said, the daily work routine is very complex. And I need the information at a glance.” (HCP 5, pos. 41). Finally, as mentioned by patients, physicians reported that patients expect the entered ePRO to be discussed, which may mean extra work: “Because, of course, for patients, using the app at home sometimes feels a bit pointless, when the findings or the individual results are not discussed. In this respect, all patients who have used the system regularly actually respond to this. And they then also demand that you look into the app together and also help them in interpreting the data.” (HCP 2, pos. 57).

Perceived benefits of app-supported rheumatology care
Patient perspective
The major benefits of app-supported rheumatology care reported by the patients were the possibility to continuously monitor their health, to receive a clear overview of disease progression and to provide the treating medical staff with better and more comprehensive insights.
Patients emphasized that they could use the app to show, or even prove, their disease activity, specifically a deterioration of their health status, to their medical staff: “And I mean, the doctor always immediately sees all the information that I constantly enter as well as how I've been doing in the last few weeks. That is positive, too. You can also prove that.” (P4, pos. 65). According to most patients, app use encourages reflection on their own disease, which is why they described the app as potentially helpful for other diseases and medical areas as well: multiple sclerosis, pain management, medication management and other rheumatological diseases. In addition, patients reported that by using the app they save paper and the time at the rheumatology ward usually needed to fill out paper-based PRO questionnaires.

Physician perspective
Overall, the benefits of app-supported rheumatology care reported by patients were consistent with those named by the interviewed physicians: “Frequent documentation, closer monitoring of the clinical patient, that's something on the one hand. Of course, also the agreement of a common therapy goal - by simply being able to define the patient outcome, which is also understandable for the patient. The patient then understands his starting point and perhaps also where he can reach or perhaps where he should stay. And then there is also compliance promotion, patient education involved.” (HCP 2, pos. 25). Furthermore, physicians emphasized that the use of the app can lead to time savings in rheumatology care: “I rather feel that it leads to a saving of time, because you talk relatively concretely about complaints, relapses that have occurred. Or it's quite clear in the ABATON RA app if everything was just fine. You don't even have to look any further. Because you can just see that in the graph. The patient kept answering these questionnaires.” (HCP 2, pos. 33). The app might thus ultimately promote more effective rheumatology care delivery: “Well, it's more effective because I open the program before I call the patient in, I look at the data and I know before the patient comes in whether it's going well or not and whether we probably have to change the therapy or not.” (HCP 3, pos. 31). Physicians emphasized that the app could be used to implement need-adapted rheumatology care: “(…) in such a way that I see mainly patients who just deteriorated. But then to be able to see them more quickly and perhaps also patients who are demonstrably doing well, who perhaps only need to be spoken to briefly by telephone or not at all and simply only once a year. So more flexible patient management.” (HCP 1, pos. 55). Physicians also reported that app-based continuous documentation of ePRO could be helpful in other medical domains, such as multiple sclerosis, heart failure, chronic kidney disease, diabetes, or pre- and postoperative care in orthopedics.
The analysis followed three key themes: (i) App user experiences; (ii) perceived drawbacks of app-supported rheumatology care; and (iii) perceived benefits of app-supported rheumatology care. The results of the key themes are presented separately for patients and physicians. App user experiences Patient perspective Patients described the app as easy to use. They highlighted the user interface of the app: “I find the app brilliant, because the questions are presented beautifully. Not flashy and colorful, but really neutral.” (P8, pos. 123) . Patients reported that they use the app primarily after being reminded that a new questionnaire is available for completion ( “when I have to fill out a questionnaire again.” (P5, pos. 18) ); or to track their disease progression (“It's always interesting to take a look: 'How was it in March or April?'” (P2, pos. 234) ). Most patients described the app as helpful in gaining an overview of their own disease activity. Some reported that the use of the app gave them a feeling of support or security, took away fear or had a motivating effect: “The ABATON RA app is such a hold for me, it makes me feel calmer. Because I see that [the disease] is slowing down, it’s working, [the medication] is kicking in and it's great and everything's in the green.” (P8, pos. 128). Yet, most patients reported opportunities for improvement: Some of the questions asked (e.g., weekly Funktionsfragebogen Hannover (FFbH) ) were difficult to understand or ambiguous: “'Is there any difficulty in turning a faucet on and off?' Well, I do not know where there are still faucets nowadays that you have to turn on and off.” (P10, pos. 25). Other patients criticized the high degree of standardization and repetition of the questionnaires, while calling for more specific or differentiated answer options, e.g., to link changes in the disease state and lifestyle changes: “Last time the doctor told me that my score was very good in July. And I was in rehab, but he can't know that. I can't enter it anywhere. It wouldn't be bad if you could simply enter something like that as a patient.” (P6, pos. 103) . Finally, patients proposed that users themselves should be able to determine the times at which they are reminded of questionnaires. Physician perspective Overall, the interviewed physicians described the app as well-structured and easy to use, while some mentioned initial difficulties: “I had a few technical difficulties at the beginning. Those diminished, once you understand a little bit how it works.” (HCP 2, pos. 45) . Physicians reported that they use the app to prepare for the consultation, after the consultation, to follow up disease status after medication changes, and most importantly during consultation: “I ultimately rebuild the whole consultation, usually I always ask ‘How was it last week?’. I have a certain pattern and now with ABATON RA it's completely different. You can start by saying ‘Let's take a look at the course of your illness or the last three months.’ and then look at the screen together. So just turning the screen around is something completely new that I've never done before. (…) So it's somehow easier to get into a conversation with the patient and the patients also feel better understood, because often you don't see the symptoms at the doctor's visit. And then the patient can show you: ‘Look over here, two weeks ago I felt bad and then again four weeks ago’. Thus he can also refer to it. 
So I think the patient also feels better understood if he can show you something, as if he then sits in front of me with a bad conscience and says ‘yes, I'm currently doing well, but three weeks ago I was doing badly, but I can't really show you anything now, like that’.” (HCP 1, pos. 26–27). The participating physicians consider the app as an additional aid for most patients to gain better overview of their disease and increase treatment adherence and motivation; while other patients lack motivation to use the app: “And yes, then the patients simply do not fill out these questionnaires and then mention technical difficulties as an excuse. But then it works again during the consultation. So technical difficulties are often used as a bit of an excuse for not using it.” (HCP 1, pos. 37) . This might also be due to the high level of standardization and repetition of the questions: “Many patients complain that it's always the same, but that's exactly the whole purpose of the app, isn't it? And that works super reliably.” (HCP 2, pos. 23). Physicians reported that the use of the app ultimately re-defines the roles in the relationship between patients and their treating medical staff, as those can follow and audit the medical documentation: “In other words, patients do take a look at what you document in the app. And when it comes to medication, for example, you really do have access to the same data. In standard care, patients have no access to our medical documentation. And [with the app] we really do share the same data. So it's just very unusual, because normally you're somehow a bit untouchable. You can document whatever you want, and it's really the first time that ‘I don't just check the patient’ to see whether he's taken his medication or had any vaccinations. Often one is nevertheless in such a control function, but with the app also vice versa; whether I have also documented the whole thing cleanly.” (HCP 1, pos. 47) Perceived drawbacks of app-supported rheumatology care Patient perspective Patients also reported limitations of app-supported ePRO documentation. For example, due to the high degree of standardization, the app was perceived as too superficial to encourage self-reflection: “But as it is right now, the app is rather for regular communication with the doctor, so that he knows how I am, not bad, but for self-reflection it is not enough for me.” (P6, pos. 89) . Therefore, the patient perceived the benefit of the app actually only on the doctor's side: “Yes, it doesn't change the care at all, I think. (…) The doctor looks at it. I believe that he knows everything better. And then I think it's important for the doctor and not for me. He has to explain it to me. That's how I see it.” (P6, pos. 60) . One participant described that constant pursuit of one's own disease activity can potentially lead to negative thoughts: “If you dwell too often on your own disease activity, it can be associated with negative thoughts. You may be more likely to get into such a negative vortex. [The app] will keep reminding you of your disease.” (P9, pos. 101) . Another drawback reported was that the entered information might be inaccurate due to recall bias: “But ultimately it's like this, you tend to answer from the gut: Yes, I'm fine today. One tends to remember less about how it was five or seven days ago.” (P7, pos. 35) . 
While patients reported that particularly individuals with a smartphone, technical skills, high health literacy and disease knowledge would be suitable to use the app, persons who do not meet these characteristics are left out: “If someone does not work with a smartphone, he has no idea how to do it, he needs guidance.” (P3, pos. 42) . This also applies to patients who do not have access to technical devices: “And there are also a lot of people who simply don't have the money. (…) Having a compatible device actually also involves a lot of money” (P6, pos. 36). Physician perspective A central drawback reported by the interviewed physicians was to become very focused on app data and get biased before the appointment, no longer perceiving the patients and their needs holistically: “Of course, you have to be careful not to become too much of a data junkie and then ignore everything else, right?” (HCP 5, pos. 61). In line with the patient perspective, interviewed physicians reported that ePRO are subjective and prone to error: “You just have to answer the questionnaires honestly. There are certainly some who are only doing it to please me and perhaps don't take the answers seriously.” (HCP 1, pos. 39) . The interviewed physicians considered it a limitation that the app was only available for rheumatoid arthritis. Moreover, general limitations of ePRO were pointed out: “The problem, which always exists, is that you can't really differentiate on the basis of the questions and the app, is it rheumatoid arthritis or is it fibromyalgia, which a lot of patients have.” (HCP 3, pos. 33) . Another drawback reported was that ePRO app use is not incorporated in the remuneration system for rheumatological care in Germany: “So it would not be feasible at the moment because, of course, if the patients don't visit because they're doing well, we can't earn any money or bill them for anything. And also this monitoring of patient input is currently not yet remunerated, i.e., only minimally.” (HCP 1, pos. 57) . In addition, one physician reported that more and more mhealth applications are finding their way into clinical routine, hence integrating them all into everyday practice and electronic health records is a challenge: “That's always my nightmare. Every patient waves his smartphone because he has collected some kind of data, and then I have to compare and evaluate them all. That's my nightmare, of course. That's why I have an interest in making sure that the data is interoperable and can be easily merged, right? (...) Well, that starts with the interface definition. This must somehow be integrated into my practice management system. And then, as I said, the daily work routine is very complex. And I need the information at a glance.” (HCP 5, pos. 41). Finally, as mentioned by patients, physicians reported that patients expect the discussion of the entered ePRO, which may mean extra work: “Because, of course, for patients, using the app at home sometimes feels a bit pointless, when the findings or the individual results are not discussed. In this respect, all patients who have used the system regularly actually respond to this. And they then also demand that you look into the app together and also help them in interpreting the data.” (HCP 2, pos. 57). 
Perceived benefits of app-supported rheumatology care Patient perspective Major benefits of app-supported rheumatology care reported by the patients are the possibility to continuously monitor their health, to receive a clear overview of the disease progression, as well as, to provide the treating medical staff with better and more comprehensive insights. Patients emphasized that they could use the app to show or even prove their disease activity, specifically deterioration of the health status, to their medical staff: “And I mean, the doctor always immediately sees all the information that I constantly enter as well as how I've been doing in the last few weeks. That is positive, too. You can also prove that.” (P4, pos. 65) . According to most patients, app use encourages reflection on their own disease, which is why they described the app to be helpful for other diseases and medical areas as well: Multiple sclerosis, pain management, medication management, as well as other rheumatological diseases. In addition, patients reported that using the app, they save paper and time at the rheumatology ward, which they usually need to fill out paper-based PRO-questionnaires. Physician perspective Overall, the benefits of app-supported rheumatology care reported by patients were consistent with those of interviewed physicians: “Frequent documentation, closer monitoring of the clinical patient, that's something on the one hand. Of course, also the agreement of a common therapy goal - by simply being able to define the patient outcome, which is also understandable for the patient. The patient then understands his starting point and perhaps also where he can reach or perhaps where he should stay. And then there is also compliance promotion, patient education involved.” (HCP 2, pos 25) Furthermore, physicians emphasized that the use of the app can lead to time savings in rheumatology care: “I rather feel that it leads to a saving of time, because you talk relatively concretely about complaints, relapses that have occurred. Or it's quite clear in the ABATON RA app if everything was just fine. You don't even have to look any further. Because you can just see that in the graph. The patient kept answering these questionnaires.” (HCP 2, pos. 33) . Thus the app might ultimately promote a more effective rheumatology care delivery: “Well, it's more effective because I open the program before I call the patient in, I look at the data and I know before the patient comes in whether it's going well or not and whether we probably have to change the therapy or not.” (HCP 3, pos. 31) . Physicians emphasized that the app could be used to implement need-adapted rheumatology care: “(…) in such a way that I see mainly patients who just deteriorated. But then to be able to see them more quickly and perhaps also patients who are demonstrably doing well, who perhaps only need to be spoken to briefly by telephone or not at all and simply only once a year. So more flexible patient management.” (HCP 1, pos. 55) . Physicians also reported that app-based continuous documentation of ePRO could be helpful in other medical domains, such as multiple sclerosis, heart failure, chronic kidney disease, diabetes, or pre- and post-operative in orthopedics. Patient perspective Patients described the app as easy to use. They highlighted the user interface of the app: “I find the app brilliant, because the questions are presented beautifully. Not flashy and colorful, but really neutral.” (P8, pos. 123) . 
Patients reported that they use the app primarily after being reminded that a new questionnaire is available for completion ( “when I have to fill out a questionnaire again.” (P5, pos. 18) ); or to track their disease progression (“It's always interesting to take a look: 'How was it in March or April?'” (P2, pos. 234) ). Most patients described the app as helpful in gaining an overview of their own disease activity. Some reported that the use of the app gave them a feeling of support or security, took away fear or had a motivating effect: “The ABATON RA app is such a hold for me, it makes me feel calmer. Because I see that [the disease] is slowing down, it’s working, [the medication] is kicking in and it's great and everything's in the green.” (P8, pos. 128). Yet, most patients reported opportunities for improvement: Some of the questions asked (e.g., weekly Funktionsfragebogen Hannover (FFbH) ) were difficult to understand or ambiguous: “'Is there any difficulty in turning a faucet on and off?' Well, I do not know where there are still faucets nowadays that you have to turn on and off.” (P10, pos. 25). Other patients criticized the high degree of standardization and repetition of the questionnaires, while calling for more specific or differentiated answer options, e.g., to link changes in the disease state and lifestyle changes: “Last time the doctor told me that my score was very good in July. And I was in rehab, but he can't know that. I can't enter it anywhere. It wouldn't be bad if you could simply enter something like that as a patient.” (P6, pos. 103) . Finally, patients proposed that users themselves should be able to determine the times at which they are reminded of questionnaires. Physician perspective Overall, the interviewed physicians described the app as well-structured and easy to use, while some mentioned initial difficulties: “I had a few technical difficulties at the beginning. Those diminished, once you understand a little bit how it works.” (HCP 2, pos. 45) . Physicians reported that they use the app to prepare for the consultation, after the consultation, to follow up disease status after medication changes, and most importantly during consultation: “I ultimately rebuild the whole consultation, usually I always ask ‘How was it last week?’. I have a certain pattern and now with ABATON RA it's completely different. You can start by saying ‘Let's take a look at the course of your illness or the last three months.’ and then look at the screen together. So just turning the screen around is something completely new that I've never done before. (…) So it's somehow easier to get into a conversation with the patient and the patients also feel better understood, because often you don't see the symptoms at the doctor's visit. And then the patient can show you: ‘Look over here, two weeks ago I felt bad and then again four weeks ago’. Thus he can also refer to it. So I think the patient also feels better understood if he can show you something, as if he then sits in front of me with a bad conscience and says ‘yes, I'm currently doing well, but three weeks ago I was doing badly, but I can't really show you anything now, like that’.” (HCP 1, pos. 26–27). 
The participating physicians consider the app as an additional aid for most patients to gain better overview of their disease and increase treatment adherence and motivation; while other patients lack motivation to use the app: “And yes, then the patients simply do not fill out these questionnaires and then mention technical difficulties as an excuse. But then it works again during the consultation. So technical difficulties are often used as a bit of an excuse for not using it.” (HCP 1, pos. 37) . This might also be due to the high level of standardization and repetition of the questions: “Many patients complain that it's always the same, but that's exactly the whole purpose of the app, isn't it? And that works super reliably.” (HCP 2, pos. 23). Physicians reported that the use of the app ultimately re-defines the roles in the relationship between patients and their treating medical staff, as those can follow and audit the medical documentation: “In other words, patients do take a look at what you document in the app. And when it comes to medication, for example, you really do have access to the same data. In standard care, patients have no access to our medical documentation. And [with the app] we really do share the same data. So it's just very unusual, because normally you're somehow a bit untouchable. You can document whatever you want, and it's really the first time that ‘I don't just check the patient’ to see whether he's taken his medication or had any vaccinations. Often one is nevertheless in such a control function, but with the app also vice versa; whether I have also documented the whole thing cleanly.” (HCP 1, pos. 47) Patients described the app as easy to use. They highlighted the user interface of the app: “I find the app brilliant, because the questions are presented beautifully. Not flashy and colorful, but really neutral.” (P8, pos. 123) . Patients reported that they use the app primarily after being reminded that a new questionnaire is available for completion ( “when I have to fill out a questionnaire again.” (P5, pos. 18) ); or to track their disease progression (“It's always interesting to take a look: 'How was it in March or April?'” (P2, pos. 234) ). Most patients described the app as helpful in gaining an overview of their own disease activity. Some reported that the use of the app gave them a feeling of support or security, took away fear or had a motivating effect: “The ABATON RA app is such a hold for me, it makes me feel calmer. Because I see that [the disease] is slowing down, it’s working, [the medication] is kicking in and it's great and everything's in the green.” (P8, pos. 128). Yet, most patients reported opportunities for improvement: Some of the questions asked (e.g., weekly Funktionsfragebogen Hannover (FFbH) ) were difficult to understand or ambiguous: “'Is there any difficulty in turning a faucet on and off?' Well, I do not know where there are still faucets nowadays that you have to turn on and off.” (P10, pos. 25). Other patients criticized the high degree of standardization and repetition of the questionnaires, while calling for more specific or differentiated answer options, e.g., to link changes in the disease state and lifestyle changes: “Last time the doctor told me that my score was very good in July. And I was in rehab, but he can't know that. I can't enter it anywhere. It wouldn't be bad if you could simply enter something like that as a patient.” (P6, pos. 103) . 
Finally, patients proposed that users themselves should be able to determine the times at which they are reminded of questionnaires. Overall, the interviewed physicians described the app as well-structured and easy to use, while some mentioned initial difficulties: “I had a few technical difficulties at the beginning. Those diminished, once you understand a little bit how it works.” (HCP 2, pos. 45) . Physicians reported that they use the app to prepare for the consultation, after the consultation, to follow up disease status after medication changes, and most importantly during consultation: “I ultimately rebuild the whole consultation, usually I always ask ‘How was it last week?’. I have a certain pattern and now with ABATON RA it's completely different. You can start by saying ‘Let's take a look at the course of your illness or the last three months.’ and then look at the screen together. So just turning the screen around is something completely new that I've never done before. (…) So it's somehow easier to get into a conversation with the patient and the patients also feel better understood, because often you don't see the symptoms at the doctor's visit. And then the patient can show you: ‘Look over here, two weeks ago I felt bad and then again four weeks ago’. Thus he can also refer to it. So I think the patient also feels better understood if he can show you something, as if he then sits in front of me with a bad conscience and says ‘yes, I'm currently doing well, but three weeks ago I was doing badly, but I can't really show you anything now, like that’.” (HCP 1, pos. 26–27). The participating physicians consider the app as an additional aid for most patients to gain better overview of their disease and increase treatment adherence and motivation; while other patients lack motivation to use the app: “And yes, then the patients simply do not fill out these questionnaires and then mention technical difficulties as an excuse. But then it works again during the consultation. So technical difficulties are often used as a bit of an excuse for not using it.” (HCP 1, pos. 37) . This might also be due to the high level of standardization and repetition of the questions: “Many patients complain that it's always the same, but that's exactly the whole purpose of the app, isn't it? And that works super reliably.” (HCP 2, pos. 23). Physicians reported that the use of the app ultimately re-defines the roles in the relationship between patients and their treating medical staff, as those can follow and audit the medical documentation: “In other words, patients do take a look at what you document in the app. And when it comes to medication, for example, you really do have access to the same data. In standard care, patients have no access to our medical documentation. And [with the app] we really do share the same data. So it's just very unusual, because normally you're somehow a bit untouchable. You can document whatever you want, and it's really the first time that ‘I don't just check the patient’ to see whether he's taken his medication or had any vaccinations. Often one is nevertheless in such a control function, but with the app also vice versa; whether I have also documented the whole thing cleanly.” (HCP 1, pos. 47) Patient perspective Patients also reported limitations of app-supported ePRO documentation. 
For example, due to the high degree of standardization, the app was perceived as too superficial to encourage self-reflection: “But as it is right now, the app is rather for regular communication with the doctor, so that he knows how I am, not bad, but for self-reflection it is not enough for me.” (P6, pos. 89) . Therefore, the patient perceived the benefit of the app actually only on the doctor's side: “Yes, it doesn't change the care at all, I think. (…) The doctor looks at it. I believe that he knows everything better. And then I think it's important for the doctor and not for me. He has to explain it to me. That's how I see it.” (P6, pos. 60) . One participant described that constant pursuit of one's own disease activity can potentially lead to negative thoughts: “If you dwell too often on your own disease activity, it can be associated with negative thoughts. You may be more likely to get into such a negative vortex. [The app] will keep reminding you of your disease.” (P9, pos. 101) . Another drawback reported was that the entered information might be inaccurate due to recall bias: “But ultimately it's like this, you tend to answer from the gut: Yes, I'm fine today. One tends to remember less about how it was five or seven days ago.” (P7, pos. 35) . While patients reported that particularly individuals with a smartphone, technical skills, high health literacy and disease knowledge would be suitable to use the app, persons who do not meet these characteristics are left out: “If someone does not work with a smartphone, he has no idea how to do it, he needs guidance.” (P3, pos. 42) . This also applies to patients who do not have access to technical devices: “And there are also a lot of people who simply don't have the money. (…) Having a compatible device actually also involves a lot of money” (P6, pos. 36). Physician perspective A central drawback reported by the interviewed physicians was to become very focused on app data and get biased before the appointment, no longer perceiving the patients and their needs holistically: “Of course, you have to be careful not to become too much of a data junkie and then ignore everything else, right?” (HCP 5, pos. 61). In line with the patient perspective, interviewed physicians reported that ePRO are subjective and prone to error: “You just have to answer the questionnaires honestly. There are certainly some who are only doing it to please me and perhaps don't take the answers seriously.” (HCP 1, pos. 39) . The interviewed physicians considered it a limitation that the app was only available for rheumatoid arthritis. Moreover, general limitations of ePRO were pointed out: “The problem, which always exists, is that you can't really differentiate on the basis of the questions and the app, is it rheumatoid arthritis or is it fibromyalgia, which a lot of patients have.” (HCP 3, pos. 33) . Another drawback reported was that ePRO app use is not incorporated in the remuneration system for rheumatological care in Germany: “So it would not be feasible at the moment because, of course, if the patients don't visit because they're doing well, we can't earn any money or bill them for anything. And also this monitoring of patient input is currently not yet remunerated, i.e., only minimally.” (HCP 1, pos. 57) . In addition, one physician reported that more and more mhealth applications are finding their way into clinical routine, hence integrating them all into everyday practice and electronic health records is a challenge: “That's always my nightmare. 
Every patient waves his smartphone because he has collected some kind of data, and then I have to compare and evaluate them all. That's my nightmare, of course. That's why I have an interest in making sure that the data is interoperable and can be easily merged, right? (...) Well, that starts with the interface definition. This must somehow be integrated into my practice management system. And then, as I said, the daily work routine is very complex. And I need the information at a glance.” (HCP 5, pos. 41). Finally, as mentioned by patients, physicians reported that patients expect the discussion of the entered ePRO, which may mean extra work: “Because, of course, for patients, using the app at home sometimes feels a bit pointless, when the findings or the individual results are not discussed. In this respect, all patients who have used the system regularly actually respond to this. And they then also demand that you look into the app together and also help them in interpreting the data.” (HCP 2, pos. 57). Patients also reported limitations of app-supported ePRO documentation. For example, due to the high degree of standardization, the app was perceived as too superficial to encourage self-reflection: “But as it is right now, the app is rather for regular communication with the doctor, so that he knows how I am, not bad, but for self-reflection it is not enough for me.” (P6, pos. 89) . Therefore, the patient perceived the benefit of the app actually only on the doctor's side: “Yes, it doesn't change the care at all, I think. (…) The doctor looks at it. I believe that he knows everything better. And then I think it's important for the doctor and not for me. He has to explain it to me. That's how I see it.” (P6, pos. 60) . One participant described that constant pursuit of one's own disease activity can potentially lead to negative thoughts: “If you dwell too often on your own disease activity, it can be associated with negative thoughts. You may be more likely to get into such a negative vortex. [The app] will keep reminding you of your disease.” (P9, pos. 101) . Another drawback reported was that the entered information might be inaccurate due to recall bias: “But ultimately it's like this, you tend to answer from the gut: Yes, I'm fine today. One tends to remember less about how it was five or seven days ago.” (P7, pos. 35) . While patients reported that particularly individuals with a smartphone, technical skills, high health literacy and disease knowledge would be suitable to use the app, persons who do not meet these characteristics are left out: “If someone does not work with a smartphone, he has no idea how to do it, he needs guidance.” (P3, pos. 42) . This also applies to patients who do not have access to technical devices: “And there are also a lot of people who simply don't have the money. (…) Having a compatible device actually also involves a lot of money” (P6, pos. 36). A central drawback reported by the interviewed physicians was to become very focused on app data and get biased before the appointment, no longer perceiving the patients and their needs holistically: “Of course, you have to be careful not to become too much of a data junkie and then ignore everything else, right?” (HCP 5, pos. 61). In line with the patient perspective, interviewed physicians reported that ePRO are subjective and prone to error: “You just have to answer the questionnaires honestly. 
There are certainly some who are only doing it to please me and perhaps don't take the answers seriously.” (HCP 1, pos. 39) . The interviewed physicians considered it a limitation that the app was only available for rheumatoid arthritis. Moreover, general limitations of ePRO were pointed out: “The problem, which always exists, is that you can't really differentiate on the basis of the questions and the app, is it rheumatoid arthritis or is it fibromyalgia, which a lot of patients have.” (HCP 3, pos. 33) . Another drawback reported was that ePRO app use is not incorporated in the remuneration system for rheumatological care in Germany: “So it would not be feasible at the moment because, of course, if the patients don't visit because they're doing well, we can't earn any money or bill them for anything. And also this monitoring of patient input is currently not yet remunerated, i.e., only minimally.” (HCP 1, pos. 57) . In addition, one physician reported that more and more mhealth applications are finding their way into clinical routine, hence integrating them all into everyday practice and electronic health records is a challenge: “That's always my nightmare. Every patient waves his smartphone because he has collected some kind of data, and then I have to compare and evaluate them all. That's my nightmare, of course. That's why I have an interest in making sure that the data is interoperable and can be easily merged, right? (...) Well, that starts with the interface definition. This must somehow be integrated into my practice management system. And then, as I said, the daily work routine is very complex. And I need the information at a glance.” (HCP 5, pos. 41). Finally, as mentioned by patients, physicians reported that patients expect the discussion of the entered ePRO, which may mean extra work: “Because, of course, for patients, using the app at home sometimes feels a bit pointless, when the findings or the individual results are not discussed. In this respect, all patients who have used the system regularly actually respond to this. And they then also demand that you look into the app together and also help them in interpreting the data.” (HCP 2, pos. 57). Patient perspective Major benefits of app-supported rheumatology care reported by the patients are the possibility to continuously monitor their health, to receive a clear overview of the disease progression, as well as, to provide the treating medical staff with better and more comprehensive insights. Patients emphasized that they could use the app to show or even prove their disease activity, specifically deterioration of the health status, to their medical staff: “And I mean, the doctor always immediately sees all the information that I constantly enter as well as how I've been doing in the last few weeks. That is positive, too. You can also prove that.” (P4, pos. 65) . According to most patients, app use encourages reflection on their own disease, which is why they described the app to be helpful for other diseases and medical areas as well: Multiple sclerosis, pain management, medication management, as well as other rheumatological diseases. In addition, patients reported that using the app, they save paper and time at the rheumatology ward, which they usually need to fill out paper-based PRO-questionnaires. 
Physician perspective Overall, the benefits of app-supported rheumatology care reported by patients were consistent with those of interviewed physicians: “Frequent documentation, closer monitoring of the clinical patient, that's something on the one hand. Of course, also the agreement of a common therapy goal - by simply being able to define the patient outcome, which is also understandable for the patient. The patient then understands his starting point and perhaps also where he can reach or perhaps where he should stay. And then there is also compliance promotion, patient education involved.” (HCP 2, pos. 25). Furthermore, physicians emphasized that the use of the app can lead to time savings in rheumatology care: “I rather feel that it leads to a saving of time, because you talk relatively concretely about complaints, relapses that have occurred. Or it's quite clear in the ABATON RA app if everything was just fine. You don't even have to look any further. Because you can just see that in the graph. The patient kept answering these questionnaires.” (HCP 2, pos. 33). Thus, the app might ultimately promote a more effective rheumatology care delivery: “Well, it's more effective because I open the program before I call the patient in, I look at the data and I know before the patient comes in whether it's going well or not and whether we probably have to change the therapy or not.” (HCP 3, pos. 31). Physicians emphasized that the app could be used to implement need-adapted rheumatology care: “(…) in such a way that I see mainly patients who just deteriorated. But then to be able to see them more quickly and perhaps also patients who are demonstrably doing well, who perhaps only need to be spoken to briefly by telephone or not at all and simply only once a year. So more flexible patient management.” (HCP 1, pos. 55). Physicians also reported that app-based continuous documentation of ePRO could be helpful in other medical domains, such as multiple sclerosis, heart failure, chronic kidney disease, diabetes, or pre- and post-operative care in orthopedics.
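The need-adapted, cut-off-based patient management that physicians describe above can be made concrete with a small example. The following Python sketch is purely illustrative: the score scale, threshold, and triage rule are hypothetical assumptions and are not taken from the ABATON RA app or the AORTA trial.

def needs_physical_visit(epro_scores, cutoff=3.0, worsening=1.0):
    """Hypothetical triage rule: flag an in-person visit if the latest
    patient-reported score exceeds a fixed cut-off, or if it worsened
    markedly compared with the previous entry."""
    latest = epro_scores[-1]
    delta = latest - epro_scores[-2] if len(epro_scores) > 1 else 0.0
    return latest > cutoff or delta >= worsening

print(needs_physical_visit([1.5, 1.8, 2.0]))  # False: stable and below cut-off
print(needs_physical_visit([1.5, 1.8, 3.2]))  # True: above cut-off, clear worsening

In such a scheme, stable patients could be handled by telephone or annual visits, while flagged patients are seen quickly, along the lines HCP 1 suggests.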
In this qualitative study, we explored RA patients’ and physicians’ experiences using a new ePRO web app ABATON RA, including drawbacks as well as benefits of app-supported rheumatology care. Overall, the results demonstrate the feasibility of digitally supported SDM and T2T, the ease of use of the app, and the overall dominance of observed benefits. Users appreciated having a better overview of disease activity. Some RA patients perceived the app as supportive for their care, i.e., making disease flares but also disease improvement visible, graphically and in numbers. Similarly, physicians felt better prepared for the appointment and treatment decisions. Compared to traditional paper-based PRO, users reported the potential for time savings and paper reduction. The high level of repetition and standardization, as well as potentially inaccurate data and difficult-to-understand PRO questions, were reported as limitations. Physicians feared becoming too focused on ePRO data and stressed the lack of reimbursement and interoperability. Participants stressed that some patients might be left out due to a lack of technical skills and equipment. Collection and graphical display of disease activity have previously been identified as a main app function desired by rheumatic patients and can support care in multiple ways. Capturing flares gives patients and physicians a more complete picture of disease activity, enabling better informed treatment decisions. Incorporation of PRO results into treatment decisions is by no means clinical routine, as we could demonstrate in a previous survey, in which only 23% of German rheumatologists stated that they review the PRO results of every patient.
The survey also highlighted the importance of interoperability and reimbursement for successfully implementing ePRO. Qualitative results of a similar study also identified gaining insight into the course of disease activity as the main benefit for patients. Interestingly, in that study, patients felt less dependent on their physicians and thought that ePRO use could lead to a reduction in the number of outpatient consultations, as also mentioned by HCP 1 in our study. The potential of saving resources by implementing ePRO is in line with previous studies [ – ]. The potential to safely reduce “unnecessary” visits using ePRO has been demonstrated in two RCTs and is increasingly being adopted into clinical routine. In those trials, the necessity of physical visits was based on ePRO cut-offs, exactly the purely data-driven approach HCP 5 feared. As ePRO are purely subjective, additional objective laboratory data could improve a data-driven monitoring approach. Previous studies showed the high interest of patients in self-collection of blood, and a recent trial reported high accuracy for RA antibody and CRP levels. As reported in a similar study by Zuidema et al., one of our participants described the app use as confronting and continuous ePRO documentation as potentially associated with negative thoughts of users. The authors also recommended screening patients for ePRO eligibility, with the relevant patient characteristics being in line with our study. Furthermore, similar to P3, Navarro-Millán et al. reported that providing patients with social support might enhance PRO collection by helping them overcome barriers to using electronic devices and patients’ reservations about the value of these data. This study has some limitations. First, all patients were recruited at a single study site, thus results might not be generalizable. Second, even though all of the physicians interviewed were practicing in rheumatology care, only one physician was a fully trained rheumatologist; the other four physicians were completing their residency training in rheumatology. Furthermore, participating physicians did not have access to any preliminary results of the AORTA trial and only described their individual study experiences. The choice of ePRO and the reasons for it were not discussed. Recall bias cannot be excluded, e.g., due to the time elapsed between app use and the interview. In addition, results may be biased toward the benefits, as participants agreed to participate in the study in the first place. A major strength of this study is the open and explorative study design and the diversity of the included patients (age, sex, education and occupation). This study shows that digitally supported SDM using ePRO is perceived as beneficial and feasible by the RA patients and the assistant and specialist physicians who participated in qualitative interviews. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 20 KB) Supplementary file2 (DOCX 21 KB) Supplementary file3 (DOCX 14 KB) Supplementary file4 (PDF 481 KB)
Genetics in Cardiology
a6654da4-e742-421e-9dd2-ed5fc521e6cc
9995547
Internal Medicine[mh]
COVID-19 information uptake amongst a rheumatology interested population
3bac2c40-4dc3-443a-8255-0ed55c3de3f2
9995713
Internal Medicine[mh]
The AlbertaRheumatology.com website was established in 2010 with an intended audience of those interested in rheumatic disease in the province of Alberta, Canada. In March 2020, information on COVID-19 was first posted. In December 2020, a second page focused on COVID-19 vaccines was posted. Both pages underwent many revisions as the pandemic progressed and more information became available. Throughout this time, patients also submitted questions on the topic of COVID-19 to the “Ask the Rheumatologist” feature, some of which were answered on the website. Google Analytics, a data analytics tool, is embedded on the website and tracks the number of views, visit length, and visit geographical location. These data were collected and compared to non-COVID website resources. Ethics approval was waived; the data collected are anonymous and based on public website usage. Between January 1, 2020 and December 31, 2022, COVID-19 resources on the AlbertaRheumatology website had 16,969 webpage visits, representing 3.17% of website page views during that time (total visits = 535,537 across a total of 115 webpages on the website). Peak visits occurred in March–April 2020 (2325 visits), January to March 2021 (6521 visits), and September 2021 (1021 visits), together accounting for 58.1% of all COVID-related visits (see Fig. ). 9303 (54.82%) of the visits were to the COVID-19 vaccine page and 6663 (39.27%) to the COVID-19 overview page, with the 1003 remaining visits to the ‘Ask the Rheumatologist’ area. Visit length averaged 4:08 min for the COVID-19 vaccine page and 2:11 min for the COVID-19 overview page, compared to an average of only 1:15 min for all pages on the website. 70.0% of visitors to the COVID webpages were from the province of Alberta and 15.4% from other regions of Canada, while the remainder were international; for the overall website, only 32.3% of users are from Alberta and 49.5% from Canada (see Fig. ). The provision of COVID-19 information on the AlbertaRheumatology website appears to have been well received, with nearly 17,000 webpage visits recorded during the study time period. There were three clear peaks of usage noted, which correspond with phases of the COVID-19 pandemic in Alberta. The first peak relates to the first wave of the pandemic, the second peak to newly available vaccines, and the third peak to a significant COVID-19 wave in the province of Alberta along with COVID-19 booster vaccine availability. While other papers have reviewed the quality of online COVID-19 information, its use has not been well described. It can be inferred that the target audience of Albertans was successfully reached, with the majority of use coming from this geographic location, at a significantly higher proportion than for other webpages on the site. However, this study cannot determine the demographics of the end-user, how they interpreted the provided information, and whether it impacted how they proceeded during the pandemic. While further study is clearly needed, this study suggests that web-based information such as this is worth producing, as engagement was very good amongst the identified geographic target audience.
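As a simple illustration of how the traffic shares above follow from the raw counts, here is a minimal Python sketch; the variable names and dictionary layout are illustrative assumptions, not the actual Google Analytics export format.

# Reported visit counts from the text; everything else is illustrative.
covid_visits = {"vaccine page": 9303, "overview page": 6663, "Ask the Rheumatologist": 1003}
total_covid = sum(covid_visits.values())      # 16,969 COVID-related visits
total_site = 535_537                          # all website page views, 2020-2022

print(f"COVID share of site traffic: {total_covid / total_site:.2%}")   # ~3.17%
for page, n in covid_visits.items():
    print(f"{page}: {n / total_covid:.2%}")   # 54.82%, 39.27%, 5.91%

peak_visits = [2325, 6521, 1021]              # Mar-Apr 2020, Jan-Mar 2021, Sep 2021
print(f"Share of visits in the three peaks: {sum(peak_visits) / total_covid:.1%}")  # ~58.1%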
B cell receptor repertoire analysis from autopsy samples of COVID-19 patients
135467d1-b008-4ee5-9cc8-8acca24b9654
9996338
Forensic Medicine[mh]
Introduction Although the coronavirus disease (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has subsided, the pandemic is still raging in some countries owing to insufficient vaccine supply and the emergence of mutant SARS-CoV-2 strains. Randomized clinical trials of mRNA-based vaccines have reported efficacies of up to 95% in the prevention of COVID-19 ( ). Recombinant monoclonal antibodies generated from the B cell receptor (BCR) repertoire of patients have been useful for treating respiratory syncytial virus infection ( ). Rearrangement of the BCR-encoding genes recombines the variable (V), diversity (D), and joining (J) segments that form the third complementarity-determining region (CDR3), leading to considerable diversification. Understanding the diversity of BCRs, their response to SARS-CoV-2 infection, and the detection of specific BCRs may help in the development of therapeutic antibodies for patients with COVID-19. The life cycle of SARS-CoV-2 has been revealed ( ), and multiple studies have investigated the development of neutralizing monoclonal antibodies targeting the SARS-CoV-2 spike (S) protein ( – ). Candidate antibodies are usually obtained by analyzing BCRs in peripheral blood mononuclear cells (PBMCs) from patients with COVID-19 compared with those from healthy donors (HDs). Although the number of candidate antibodies against SARS-CoV-2 reported from multiple institutions is constantly being updated, the virus is constantly mutating; therefore, a combination of two or three neutralizing antibodies that can target different “weak” spots on the virus is more effective than a single neutralizing antibody. Casirivimab and Imdevimab (REGEN-COV) form a cocktail of two neutralizing antibodies that target the receptor-binding domain (RBD) of the S protein, preventing binding of the S antigen to its receptor, ACE2 ( , ). These antibodies have been rationally designed to bind distinct and non-overlapping regions of the RBD, resulting in simultaneous blockage ( ). We previously provided an overview of the RNA expression, immune cell populations, cytokine expression, and histopathological characteristics of formalin-fixed, paraffin-embedded (FFPE) lung lobes of a patient with COVID-19 ( , ), and this comprehensive analysis indicated the distribution of SARS-CoV-2 and the cellular and molecular differences among mildly to severely inflamed microenvironments in different lung lobes. We further found that the severely inflamed lung lobes highly express aquaporin-3 (AQP3)-positive basal-like cells and alveolar type II cells, which proliferate abnormally to fill the alveolar space and the stromal tissue that collapses upon SARS-CoV-2 infection ( ). B cells and plasma cells were present in the inflamed lung lobes; we therefore hypothesized that antibodies with neutralizing activity against SARS-CoV-2 would be found preferentially in the lung lobes. Here, BCR repertoire analysis was performed using an approach distinct from the conventional one, by analyzing BCRs in FFPE lung lobes from the same patient, and we developed several artificial antibodies using pairs of IgG heavy and light chains that were frequently detected in the inflamed lung lobes. To assess the significance of the detected BCR repertoires, single-cell BCR (scBCR) repertoires obtained from the PBMCs of HDs who had recovered from COVID-19, had received an mRNA vaccine, or had not received any vaccine were also analyzed.
Moreover, we evaluated the SARS-CoV-2 neutralizing activity of the artificial antibodies, which was higher when several artificial antibodies were mixed than when each was used alone. The results of this study shed light on the development of vaccines and neutralizing antibodies from FFPE samples against future unknown infectious diseases. Methods 2.1 Samples and findings Detailed information about the patient has been described previously ( ). A 79-year-old man was admitted to our intensive care unit with respiratory failure due to COVID-19. After a 16-day course of invasive mechanical ventilation, the patient died of multiple organ failure. Fixed lung lobes, retrieved from this patient, were embedded in paraffin to form formalin-fixed, paraffin-embedded (FFPE) tissue, according to standard methods. This study was approved by the Research Ethics Committee of Wakayama Medical University (approval no.: 2882), and verbal consent was obtained from relatives to use FFPE lung tissue for research. 2.2 Bulk BCR sequencing and analysis Five tissue sections (10 μm thick) from the FFPE block in the left upper lobe (LUL) or right lower lobe (RLL) were cut, and RNA was obtained using the NucleoSpin total RNA FFPE kit (Macherey-Nagel GmbH & Co. KG, Düren, Germany) following the manufacturer’s instructions. BCR repertoire analysis was performed using the Archer Immunoverse-HS BCR IGJ/K/L kit (Invitrogen, Carlsbad, CA, U.S.A.) following the manufacturer’s protocol. In brief, RNA was transferred to BCR-specific reverse transcription priming tubes and incubated for 5 min at 65°C. After first- and second-strand cDNA synthesis, dual-index sequence adaptors were attached to both ends of the strands, and sequencing libraries were obtained following amplification. The final fragment size of the library was 203–322 bp. Sequencing was performed for samples with 10% PhiX control (Illumina, CA, U.S.A.) using a NextSeq 500/550 Mid Output Kit v2.5 (Illumina; 300 cycles, 150/150 cycles, paired-end). The total sequence reads in the LUL and RLL were 88,897,047 and 45,823,220 reads, respectively. The sequencing data file was analyzed using the Archer Analysis software 6.2. All datasets were deposited under DDBJ DRA accession number DRA013492 and BioProject accession number PRJDB13054. 2.3 Immunohistochemistry Tissue sections (4 µm thick) were cut, dewaxed, and rehydrated using xylene and graded alcohol. The sections were then subjected to antigen retrieval with an antigen activator (pH 9.0) for 20 min at 95°C and probed with anti-CD19 (Leica Biosystems, Wetzlar, Germany, NCL-L-CD19-163, 1:100) overnight at 4°C. Thereafter, the sections were treated with anti-mouse IgG antibodies for 30 min at 20°C and visualized following treatment with 3,3-diaminobenzidine for 10 min at 20°C. Subsequently, the sections were counterstained with hematoxylin. All the stained sections were examined under a fluorescence microscope (BZ-X710; KEYENCE, Osaka, Japan). 2.4 Single-cell BCR repertoire sequencing and analysis High-throughput scBCR repertoire analysis was performed using BD Rhapsody TCR/BCR profiling assays (Becton, Dickinson and Company [BD], NJ, U.S.A.) with modifications ( ). This study was approved by the Research Ethics Committee of Wakayama Medical University (approval no.: 2961). Written informed consent for participation was obtained in accordance with the national legislation and the institutional requirements. Peripheral blood (8−10 mL) in BD Vacutainer CPT tubes (BD) was centrifuged at 1,500×g for 15 min to obtain PBMCs.
After hemolysis with ammonium chloride, the B cells were negatively selected using a Miltenyi magnetic-activated cell sorting bead isolation kit (Miltenyi Biotec Inc., Bergisch Gladbach, Germany) ( ). A total of 350,000 cells were loaded into Nx1-seq as previously described ( ), and the mRNA-captured barcoded beads were used to synthesize cDNA. To modify the VDJ library amplification protocol, cDNA was amplified using human B cell PCR primers. The average library size ranged from 795 bp to 1,220 bp. High-throughput sequencing was performed on samples with a 20% PhiX control using a NovaSeq 6000 S4 Reagent Kit v1.5 (Illumina; 300 cycles, 75/225 cycles; paired-end). The analysis pipelines were modified based on SevenBridges, which is an analysis pipeline for BD Rhapsody assays ( ). The fastq data were validated by identification methods in which the error correction was equivalent to that of SevenBridges, and the average proportion of filtered data was 87.3%. The data were subjected to a BLASTN (version 2.9.0+) search against the IMGT database ( https://www.imgt.org/ ) using the R2 sequence corresponding to each barcode in the R1 sequence as the query. The top five results per query (= 1 read) were tabulated along with the barcode of the R1 sequence. Data that exceeded a certain threshold were omitted, i.e., if the number of queries exceeded the number of cells loaded into the Nx1-seq. For each of the heavy (V-D-J-C) and light (V-J-C) chain patterns, the data were aggregated singly or as pairs in a frequency distribution. The percent ratio of total BCR immunoglobulin heavy (IgH) or light (IgK) chains was calculated; for example, in HD no. 1, who had recovered from COVID-19, a total of 23,565 BCR IgH chains were detected, and the top 100 most frequently detected BCR IgH chains accounted for 7,516 of them. Among all the sequenced BCR IgHs (20,565), those detected with only one count did not seem to be important IgHs; therefore, we set the threshold at the highly detected top 100 BCR IgH chains. In this case, the most frequently detected IgH had 818 counts, and the 100th had 25 counts. IGHV3-23-IGHD1-IGHJ4-IGHG2 , IGHV3-23-IGHD2-IGHJ4-IGHG2 , and IGHV3-23-IGHD1-IGHJ5-IGHG2 were detected in 818, 598, and 90 of the total 7,516 IgH chains, respectively, indicating that the total ratio (%) of selected IGHV3-23 among the top 100 IgH chains was 20.0%. Computations were partially performed on the NIG supercomputer at the ROIS National Institute of Genetics. All datasets were deposited under DDBJ DRA accession number DRA013491 and BioProject accession number PRJDB13013. 2.5 Plasmid construction and IgG production The pFUSE-Fc plasmid (400 mg) was supplied by the Department of Pathology, Sapporo Medical University School of Medicine. The retroviral reprogramming plasmids pFUSE-hIgG1-Fc2 (InvivoGen, CA, U.S.A.) have been previously described ( ). pFUSE-Fc2 (IL2ss) plasmids facilitate the secretion of Fc-fusion proteins from pFUSE-Fc-transfected cells. To obtain single-chain Fc fragments of IgH and IgK, we followed the principle of PCR assembly ( ), and information on the inserted synthesized representative cDNAs and the amplification primers is shown in . Briefly, the plasmid was digested with EcoRV and NcoI (Takara Bio, Shiga, Japan), and the targeted IgH and IgK domains were obtained using a two-step overlapping PCR.
The PCR product was subcloned into the digested plasmid using In-Fusion HD Cloning kits (Takara Bio); the In-Fusion reaction mixture was transformed into competent cells, and individual isolated colonies were picked from the culture plate. Plasmid DNA was isolated using a Plasmid DNA purification kit (Macherey-Nagel GmbH & Co.), and 2.5 μg of the DNA was transfected into 293T cells using Lipofectamine 3000 (Thermo Fisher Scientific, MS, U.S.A.). Subsequently, the 293T cells were grown in Dulbecco’s Modified Eagle Medium containing 10% fetal bovine serum and 1% penicillin-streptomycin solution with zeocin (300 μg/mL, InvivoGen), and the supernatant was collected from the culture well on day 2 and days 7–10. IgGs from the supernatant were collected from 3–5 culture dishes and purified using a spin column-based antibody purification kit (Protein G; Cosmo Bio, Tokyo, Japan). The density of IgG was measured using the NanoDrop One (Thermo Fisher Scientific). 2.6 Screening of scFv specifically reacting with the human SARS-CoV-2 spike protein using an scFv phage display library derived from naïve donors Isolation of single-chain Fv fragment (scFv) clones specifically reacting with the human SARS-CoV-2 spike protein was performed according to our previous report with some modifications ( ). Biotinylated human SARS-CoV-2 spike protein (#HAK-SPD _BIO-1, Hakarel Co., Ltd., Ibaraki, Japan) and biotinylated human ACE2 protein (#AC2-H82E6, AcroBiosystems, Inc., DE, U.S.A.) were used as antigens. First, the biotinylated ACE2 protein was mixed with the scFv phage display library constructed from naïve donors to remove scFv reacting non-specifically with the ACE2 protein (negative panning). Subsequently, the resultant library was mixed with the SARS-CoV-2 protein to enrich for specific scFv (positive panning). After two rounds of negative and positive panning, soluble scFv expression was induced in Escherichia coli infected with the phage. The resulting supernatant was immediately used for enzyme-linked immunosorbent assay (ELISA) screening. One scFv clone that reacted with SARS-CoV-2 but not with ACE2 was isolated. Sequences containing the IgH region, peptide linker, and IgK region are shown in . 2.7 Neutralization assay for SARS-CoV-2 To evaluate the neutralization effect against SARS-CoV-2, we utilized the SARS-CoV-2 Neutralization Antibody Detection Kit (MBL, Tokyo, Japan) and the SARS-CoV-2 Neutralization Ab ELISA Kit (Invitrogen) following the manufacturers’ protocols. Briefly, we extracted and purified IgG (100 μg/mL) from each sample and applied 10 μg to the RBD of an ACE2-coated reaction plate. The plate was then incubated for 2 h at 24°C. The IgG in the plasma specimens from seven HDs was purified using a spin column-based antibody purification kit (Protein G; Cosmo Bio), and the density of IgG was measured using a NanoDrop One (Thermo Fisher Scientific). A positive control (10 μg) was added to each kit. After washing the ACE2 reaction solution (His-tagged human ACE2 protein), 100 μL of horseradish peroxidase-conjugated anti-His-tag monoclonal antibody was added and incubated for 30 min at 24°C. The absorbance of each sample at 450 nm was measured using a microplate reader. A 600 nm wavelength filter was used as a reference. The measured values in the blank, positive control, and each sample were often indistinguishable, which could be due to the long-term storage of the kits, albeit at fridge temperature in lightproof containers, or the use of kits with different lot numbers.
Therefore, we duplicated the measurements of each artificial antibody and IgG with a positive or negative control for each experiment. To eliminate any experimental errors due to differences in the measurement conditions or product lots, we measured each sample at least three times. To verify the performance of the SARS-CoV-2 neutralization activity kits, we measured the activity using two different kits and calculated the mean ± standard error (S.E.) in each group. The inhibition rate (%) was calculated using the following equation [1]: Inhibition rate (%) = [(O.D. value of Blank − O.D. value of Sample) / (O.D. value of Blank − O.D. value of Positive)] × 100. The IgGs obtained from 3–5 different culture conditions were measured at least three times, using the neutralization kit (MBL) twice to confirm the variation in data due to differences in the product lot. The neutralization assay was then repeated using a different kit (Invitrogen). A total of 8–16 samples were measured for each condition, and the average inhibition rate was calculated. In some cases, the optical density (O.D.) value of the sample was lower than that of the positive control. A second assay was performed to evaluate the neutralizing antibodies using the plaque reduction neutralization test (PRNT) with live viruses ( , ). The SARS-CoV-2 S protein (319–541 aa) monoclonal antibody for neutralization (0.1–100 μg; Catalog #67758-1, Proteintech, IL, U.S.A.) was incubated with an equal volume of 100 plaque-forming units of SARS-CoV-2 (nCoV-19/JPN/TY/WK521/2020, National Institute of Infectious Diseases, Japan) at 37°C for 1 h. Half of the virus–antibody mixture was then used to infect VeroE6/TMPRSS2 cells (JCRB1819, JCRB Cell Bank, Japan) at 37°C for 1 h, and the cells were covered with an agarose overlay. After 48 h of incubation, the cells were fixed with 10% formalin and stained with crystal violet. Virus-infected cells are lysed and form crystal violet-negative holes (plaques); hence, the neutralizing activity was calculated by counting the number of plaques. The plaque count in the negative control groups was 49.5–65.0, and the amount of the positive control (SARS-CoV-2 S protein antibody) required for a 50% reduction in plaque number (number of plaques < 25–32.5) was more than 10 μg. Therefore, we applied the same amount (10 μg) of purified IgG from each sample to the assay and calculated the neutralization activity (%). SARS-CoV-2 infection experiments were conducted twice in at least 2–3 culture dishes to ensure reproducibility. 2.8 Statistical information We conducted a within-subjects analysis of variance (ANOVA) on the conditions, followed by Welch’s t-test, with p < 0.05 indicating a significant difference between the conditions in .
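As a minimal illustration of equation [1], the following Python sketch computes the inhibition rate from raw O.D. readings; the numerical values are invented for illustration only.

def inhibition_rate(od_blank, od_sample, od_positive):
    """Inhibition rate (%) per equation [1]: the sample's position within
    the O.D. window spanned by the blank and the positive control."""
    return (od_blank - od_sample) / (od_blank - od_positive) * 100

# Hypothetical O.D. values at 450 nm (600 nm reference already subtracted):
print(inhibition_rate(od_blank=1.80, od_sample=1.44, od_positive=0.20))  # 22.5

Note that rates above 100%, such as the 102.77% maximum reported in the Results, simply correspond to sample O.D. values falling below that of the positive control.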
Results 3.1 Identified specific B cell repertoire in lower lung lobes The representative mRNAs for B cells in the bulk RNA-seq data from each site of the lung lobes are shown in . The expression of CD19 mRNA, a biomarker for normal and neoplastic B cells ( ), varied among the sites of the lung lobes. The expression of CD20 mRNA, which encodes a protein expressed on the surface of normal and malignant B cells ( ), was increased in the lower lung lobes. In addition, the expression levels of the memory B cell marker CD27 mRNA and of the positive regulators of BCR signaling, CD79a and CD79b mRNA ( ), were relatively higher in the lower lung lobes compared with the left upper lobe (LUL). Immunostaining for CD19 showed that B cells remained relatively intact in the upper lobes, where the alveoli were preserved, but were present singly or in clusters with diverse shapes in the highly inflamed lower lobes ( ). A comprehensive bulk-based analysis of a diverse repertoire of BCRs from the LUL tissues was performed to identify the specific IgH and IgK genes expressed in response to SARS-CoV-2 infection ( ). The detection sensitivity of the BCR repertoire in FFPE samples was considerably low, and the total counts of IgH or IgK per 4,306,115 sequencing reads in the LUL specimens were recorded. Subsequently, 36 IgH genes and 28 IgK genes were identified in the LUL tissues ( ). IGHV1-69/IGHD4-23/IGHJ3/IGHG1 and IGHV3-23/IGHD6-19/IGHJ4/IGHG1 accounted for 9.2% and 6.9% of the total detected IgH chain genes in the LUL, respectively, whereas IGKV2-28/IGKJ1/IGKC and IGKV3-20/IGKJ3/IGKC accounted for 8.2% and 7.5% of the total detected IgK chain genes, respectively ( ).
Although the types of (D) and (J) segments of CDR3 were different, the heavy chains IGHV1-69 and IGHV3-23 were detected in 9.2% and 12.5% of the total, respectively, whereas the light chains IGKV2-28 and IGKV3-20 accounted for 31.3% and 23.7% of the total, respectively ( ). In 25%–30% of the LUL regions, we observed mildly thickened alveolar walls and hyaline membrane formation with mild inflammation and diffuse alveolar damage, whereas the lower lung lobes showed fibrous and thickened alveolar walls with severe inflammation ( ). Therefore, it was difficult to identify multiple IgH genes from the RLL samples, as the sequencing library was excessively fragmented ( ); consequently, only two IgH genes and 18 IgK chain genes were identified in the RLL tissues ( ). However, 40% of the total detected IgK chain genes in the RLL were IGKV2-28/IGKJ1/IGKC , and its expression levels were also the highest in the LUL. 3.2 Single-cell B cell receptor repertoire in PBMC from three categories of HDs Next, we performed scBCR sequencing (scBCR-seq) of two HDs who had recovered from COVID-19 at least six months prior (HD-COVID-19), two HDs who had been treated twice with an mRNA vaccine (mRNA-1273 or BNT162b2) at least one to two months prior (26 or 57 days after the 2nd vaccine dose; HDs with vaccine), and three HDs without vaccination (HDs without vaccine) ( ). The study was being conducted in early 2020, when the first vaccine doses were still awaited in Japan. To determine SARS-CoV-2 infection status, serum samples from the seven HDs were measured using a SARS-CoV-2 IgG antibody test kit and an iFlash 3000 chemiluminescence immunoassay analyzer ( – ). The detection values (AU/mL) of the N and S proteins for SARS-CoV-2 in the HD-COVID-19 group were above the threshold (5 AU/mL), and those of the S1 protein in the HDs with vaccine group were high, indicating that the sera utilized in this experiment reflected the state of recovery after natural infection with SARS-CoV-2 and the state after injection of the mRNA vaccine incorporating the S protein ( ). Although there was no significant difference in the expression levels of BCR IgH and IgK chains among the groups, the expression of IGHV3-23 in the HD-COVID-19 group and of IGHV3-69 or IGKV1D-39 in the HDs with vaccine group tended to be slightly higher ( ). The results for BCR pairs in each group are shown in , and paired identification analysis of BCR IgH and IgK chains showed that diversity existed among the donors. To investigate how many of the BCRs obtained from the FFPE lung lobes were expressed in the PBMCs of HDs, we calculated the percentage ratio for each detected chain in each group ( ). The expression of IGHV1-69 , which was the most highly expressed gene in the FFPE samples, was detected at low levels in BCRs from PBMCs. As for the light chains, IGKV4-1 tended to be slightly higher in the HDs without vaccine group; however, its expression was similar in all the groups. The sequencing results obtained by screening the scFv phage display library for clones specifically reacting with the human SARS-CoV-2 spike protein indicated that IGHV3-23 and IGKV2D-28 were the candidates ( ). The pairs of highly expressed IgH and IgK chains in the FFPE samples ( IGHV1-69 / IGKV2-28 , IGHV1-69 / IGKV3-20 , IGHV3-23 / IGKV2-28 , and IGHV3-23 / IGKV3-20 ) were not specifically expressed in the PBMCs of HDs ( ). These results suggest that the four BCR pairs, especially those with IGHV1-69 chosen from the FFPE lung lobes, may be novel antibody candidates.
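The chain-frequency percentages above follow the top-100 thresholding described in the Methods. A minimal Python sketch of that tabulation, seeded with the worked counts from HD no. 1, could look like this; the "OTHER-CHAINS" filler and helper names are illustrative assumptions, not part of the actual pipeline.

from collections import Counter

# Per-read IgH annotations (V-D-J-C strings); the counts for the three
# IGHV3-23 chains are taken from the Methods, and the filler is sized so
# that the top-100 pool sums to 7,516 as reported for HD no. 1.
reads = (["IGHV3-23-IGHD1-IGHJ4-IGHG2"] * 818
         + ["IGHV3-23-IGHD2-IGHJ4-IGHG2"] * 598
         + ["IGHV3-23-IGHD1-IGHJ5-IGHG2"] * 90
         + ["OTHER-CHAINS"] * 6010)

top100 = Counter(reads).most_common(100)   # threshold: keep the top 100 chains
pool = sum(n for _, n in top100)           # 7,516 reads in the top-100 pool
share = sum(n for chain, n in top100 if chain.startswith("IGHV3-23")) / pool
print(f"IGHV3-23 share of the top-100 IgH chains: {share:.1%}")  # 20.0%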
3.3 Neutralizing activity test against SARS-CoV-2 We performed a SARS-CoV-2 neutralization assay for the three artificial antibodies and for IgG samples obtained from the plasma specimens of HDs. The same amount of IgG (10 μg) from each sample was applied to two different SARS-CoV-2 neutralization kits. Compared to the control IgG, which was purified from the culture medium of empty plasmid-transfected HEK293 cells, the inhibitory effect of the three artificial antibodies was significantly enhanced ( p = 0.023, 0.0026, and 0.045) ( ). The highest average inhibition rate was 20.18 ± 8.35% (mean ± S.E.) for IGHV1-69/IGKV3-20 , with a maximum inhibition ratio of 102.77%. The lowest inhibition rate was 5.26% for the same artificial antibody, and the results varied even when the same concentration of IgG was collected from different cultures. Co-treatment with two different artificial antibodies enhanced the neutralization activity (37.35 ± 15.22%, mean ± S.E., p = 0.028). As for the IgG derived from the peripheral blood of each donor, SARS-CoV-2 suppression was significantly higher for IgG derived from the HD-COVID-19 and HDs with vaccine groups. The average inhibition rates in the three HDs without vaccination (no.1; #1, no.2; #2, and no.3; #3), the two HD-COVID-19 donors, and the two HDs with vaccine were 0.37 ± 0.88, 7.81 ± 1.32, and 35.16 ± 0.88%, respectively (mean ± S.E.). The maximum inhibition ratios for the HD-COVID-19 and HDs with vaccine groups were 24.18% and 96.90%, respectively. However, the inhibitory effect was not enhanced by the addition of IgG from various serum sources, possibly because the amount of IgG against the RBD in the neutralizing activity kit was not saturated, and the specific antibodies in each serum IgG were diluted or offset. A separate neutralization assay using live SARS-CoV-2 indicated that the neutralization ratios for the control (empty plasmid), IGHV3-23/IGKV2-28 , IGHV1-69/IGKV2-28 , IGHV1-69/IGKV3-20 , and a monoclonal SARS-CoV-2 neutralization antibody (positive control) were 16.00 ± 4.49, 20.75 ± 6.93, 34.91 ± 5.44, 32.05 ± 2.58, and 44.28 ± 6.28% (mean ± S.E.), respectively ( ). The maximum neutralizing ratio of IGHV1-69/IGKV2-28 against live SARS-CoV-2 was 55.5%, similar to that of the positive control. Two of the three artificial antibodies had the potential to block SARS-CoV-2 significantly ( p = 0.028 and 0.023). The dose-dependent neutralizing activity of the positive control and the IGHV1-69/IGKV2-28 antibody is shown in .
Discussion Recombinant monoclonal antibodies generated from the BCR repertoire of patients have been useful for treating respiratory syncytial viral infections. Understanding the diversity of BCRs, their response to SARS-CoV-2 infection, and detection of specific BCRs may aid in the development of therapeutic antibodies for patients with COVID-19. Researchers from various disciplines have identified specific BCRs related to COVID-19 ( – , , , ).
Antibody-mediated defense against viruses has mainly been characterized in serum, with IgG being the main immunoglobulin isotype. However, peripheral blood may not be the site where antibodies directly prevent SARS-CoV-2 infection. Instead, antibodies in the respiratory mucosa cooperate with antiviral substances released by the respiratory epithelium to capture viruses ( ). A multiparametric immunomorphological analysis of the lung tissue of patients with COVID-19 revealed a strong infiltration of B cells and plasma cells without T cell infiltration, suggesting substantial local production of antibodies ( ). In this study, artificial antibodies based on the IgH and IgK chains of BCRs identified in FFPE lung tissues were developed. We hypothesized that BCRs in the lung tissue of patients who died of pneumonia caused by COVID-19 would have more direct neutralizing activity against SARS-CoV-2 than antibodies in the serum. Several BCR IgH and IgK chains were detected in the upper lung lobe, which showed signs of mild inflammation and in which SARS-CoV-2 was detected ( ). IGHV1-69 , IGHV3-23 , IGKV2-28 , and IGKV3-20 , the most frequently detected IgH and IgK chains ( ), were compared with the BCR repertoires associated with COVID-19 reported in other studies. In a list of 294 SARS-CoV-2 RBD-targeting antibodies, IGHV3-53 is the most frequently detected gene in samples obtained from patients with COVID-19 ( , ). IGHV1-69 and IGHV3-23 were among the top 7 and top 10 genes detected, respectively, so some of the BCR heavy chains detected in the lung lobe were also frequently detected elsewhere ( , ). Bulk BCR heavy chain analysis of PBMCs from 19 patients showed that the expression levels of IGHV4 family genes were higher than those in HDs, whereas IGHV1-69 and IGHV3-23 were expressed at higher levels in the HD groups ( ). However, that study indicated that IGHV1-69 was highly reactive among RBD-sorted mAbs and IGHV3-23 among N-terminal domain (NTD)-sorted mAbs for SARS-CoV-2. A previous study comparing PBMCs from 12 COVID-19 recovery patients and six HDs indicated that IGHV3-23 was significantly increased in the COVID-19 group, while IGHV1-69 was expressed at similar levels in both groups ( ). Similar results have been reported using single-cell RNA-seq of PBMCs, which showed that IGHV3 family genes are expressed at high levels in patients with COVID-19 ( ). In contrast, Zhou et al. indicated that IGHV1-69 is the most frequently used IGHV gene among 54 monoclonal neutralizing antibodies established from the BCRs of PBMCs from three COVID-19 donors ( ). In addition, BCR repertoire analysis by single-cell RNA-seq of blood samples revealed that IGHV3-23 is highly represented in both patients and HDs ( ), indicating that the response to SARS-CoV-2 might be commonly elicited after vaccination. In this study, paired PBMC-derived BCR repertoires of five convalescent COVID-19 HDs showed that the expression of IGHV1-69 / IGKV2-28 and IGHV1-69 / IGKV3-20 was weak in both patients with COVID-19 and HDs. In contrast, IGHV3-23 / IGKV2-28 and IGHV3-23 / IGKV3-20 levels were higher in both groups. Thus, IGHV3-23 was equally expressed in B cells derived from the peripheral blood of patients with COVID-19 and HDs. It remains unclear to what extent IGHV1-69 , which was prominently expressed in B cells of the lung lobe, was also expressed in B cells in the peripheral blood of the same patient, since we did not perform BCR repertoire analysis on that patient's PBMCs.
Although reported SARS-CoV-2-related BCR repertoires vary with the degree of symptoms, the time since disease onset, and ancestry, data have steadily accumulated. The two representative IgH chains, IGHV1-69 and IGHV3-23 , identified by our BCR repertoire analysis of FFPE tissue conducted in early 2020, suggest that such early analyses can anticipate the findings of subsequent large-scale, well-powered studies. Similar to other reports, we also examined BCR repertoires in the peripheral blood of two HDs who had been infected with COVID-19 and had recovered at least six months prior (HD-COVID-19), two HDs who had been treated with an mRNA vaccine (mRNA-1273 or BNT162b2) at least one to two months prior (HDs with vaccine), and three HDs who had not received the vaccine (HDs without vaccine). There are five classes of immunoglobulins (IgA, IgD, IgE, IgG, and IgM); the expression of the IgG subclass genes IGHG1 and IGHG2 tended to be higher in the HD-COVID-19 group, whereas IGHM was higher in the HDs with vaccine group ( ). The HD-COVID-19 group tended to express the IgL constant-region gene IGLC2, whereas the HDs with vaccine group tended to express IGLC3; no markedly increased immunoglobulins were found in the HDs without vaccine group. Our results differed from a previously reported immunoglobulin subtype analysis of the BCR repertoire of PBMCs from 16 patients with COVID-19 and eight HDs and were more similar to the expression distribution in the HDs of that study ( ). However, IGHG1 tended to be highly expressed in the patients with COVID-19. In our analysis, IGHV3-23 , IGKV3D-20 , and IGKV2-29 expression levels were higher in the BCRs of PBMCs from the HD-COVID-19 group, while IGHV3-69 and IGKV1D-39 were detected in the HDs with vaccine group, and IGKV4-1 in the HDs without vaccine group ( ). Our finding of higher expression of IGHV3-23 in PBMCs from patients with COVID-19 is consistent with a previous report ( ); however, conflicting findings have been reported elsewhere ( , ). Wang et al. systematically performed BCR analysis of PBMCs from HDs with no history of SARS-CoV-2 infection 2 months after the third dose of the mRNA-1273 or BNT162b2 mRNA vaccines ( ). The IgG heavy chains IGHV3-53 and IGHV3-30 and the IgG light chains IGKV1-39 and IGKV1-33 were expressed at higher levels in their vaccinated groups. Here, IGHV3-53 , IGHV3-30 , IGKV1-39 , and IGKV1-33 expression levels in our HDs with vaccine group were similar to those in the other groups. The discrepancy may be due to the different vaccines utilized and the possibility of Japanese-specific BCR repertoires. To the best of our knowledge, there have been no reports of paired PBMC-derived BCR repertoires in HDs after SARS-CoV-2 mRNA vaccination. The major BCR pairs in each sample are presented in , and only one HD-COVID-19 donor and two HDs with vaccine tended to have a higher percentage of specific BCR pairs. Exactly matching BCR pairs were not detected within each group, making interpretation difficult. The increase in BCR pairs containing IGHV3-69-1 after vaccine treatment may be meaningful, but further studies are warranted. The artificial antibodies produced in this study showed a certain level of neutralizing activity against SARS-CoV-2, which was more effective when they were mixed with other antibodies. We selected BCR pairs with high detection frequencies of IgH and IgK chains because BCR analysis of FFPE-derived material could only be performed with bulk-based methods. It is speculated that not only the BCR pairs we selected and synthesized in this study but also other combinations of BCRs may have additional neutralizing activity against SARS-CoV-2.
An improved technique is needed to obtain high-quality RNA in adequate quantities from FFPE samples, as not all BCRs can be isolated effectively owing to the limited sample volume and sequencing depth. Although the SARS-CoV-2 neutralizing activity of the artificial antibodies synthesized from lung FFPE tissue was superior to that of vaccine-naïve plasma IgG and similar to that of the groups after the 2nd vaccine dose, the SARS-CoV-2 neutralizing effect of plasma specimens from peripheral blood increases with the number of mRNA vaccine doses and varies with the timing of the analysis after vaccination ( ). Enriched IgG from peripheral blood may contain a high proportion of antibodies to SARS-CoV-2 owing to vaccine injection, but it also contains antibodies to a variety of other antigens. We believe that BCR repertoire analysis of lung tissue from severe cases of COVID-19 may capture the highest proportion of antibodies to SARS-CoV-2. Further validation is needed to determine which method yields antibodies with the higher neutralizing effect against SARS-CoV-2. In conclusion, BCR repertoire analysis of lung tissue in which SARS-CoV-2 was present and B cells were enriched may contribute to producing effective artificial antibodies, or to narrowing down candidate specific BCRs against various pathogens from the diverse BCR repertoires in peripheral blood. Autopsies of fresh tissue tend to be avoided in outbreaks of infection in which secondary infection is initially suspected, as in the case of SARS-CoV-2. We believe that performing FFPE lung tissue-derived BCR analysis will provide an important clue for confronting unknown viral threats in the future. The datasets presented in this study can be found in online repositories. The names of the repositories and accession numbers can be found below: DDBJ DRA - accession number DRA013492; NCBI SRA - accession number PRJDB13054. The studies involving human participants were reviewed and approved by the Research Ethics Committee of Wakayama Medical University (approval no.: 2882, 2961). The patients/participants provided their written informed consent to participate in this study. SI, TTs, MKi, TI, KY, MKo, and SH conceived the experiment. KMi, SM, and TK obtained samples and prepared the FFPE blocks. HM and ST collected whole blood samples from healthy donors with informed consent. YS performed high-throughput sequencing, and SI, TO, MKi, SS, KI, and SH developed and performed the data analysis. SI, SU, TIt, MKo, KMa, TTo, HY, and SH interpreted the data. SI, TTs, MKi, and SH wrote the manuscript with contributions from all authors. All authors contributed to the article and approved the submitted version.
Prospective, early longitudinal assessment of lymphedema-related quality of life among patients with locally advanced breast cancer: The foundation for building a patient-centered screening program
f6234ff1-5ac4-46cc-8fdb-5f4a4d62ae20
9996356
Patient-Centered Care[mh]
Introduction Breast cancer-related lymphedema (BCRL) is a chronic, debilitating side effect of axillary lymph node dissection (ALND) and regional nodal irradiation (RNI) that negatively affects physical, social, and psychologic function as well as work productivity. BCRL can also pose significant financial costs to patients and the healthcare system.[ , , ] Because BCRL has broad effects on breast cancer survivors, the National Comprehensive Cancer Network recommends routine screening for BCRL at follow-up oncologic visits. Although some patients with locally advanced disease can be spared ALND to decrease the risk of BCRL, , for many, ALND and RNI remain a standard of care, and thus the risks associated with these interventions remain common. Despite decades of research, how BCRL affects patients and how best to communicate with them about risks and the need for preventive and therapeutic strategies remain poorly defined. The reported incidence of BCRL ranges from 20% to 60.3% among high-risk populations,[ , , , , ] which complicates informed decision-making. Even patients who are satisfied with their oncologic care are less satisfied with the information provided on the physical, psychological, and social sequelae of BCRL. The optimal strategy for lymphedema education regarding risks, precautions, signs and symptoms, and exercises, one that optimizes long-term compliance and durability, is still unknown, and patient worry is thought to play a role in the adoption of risk-reducing behaviors. Disappointingly, one randomized intervention that included education and a visit with a lymphedema specialist showed no difference in the development of lymphedema, which the authors hypothesized was due to poor adherence. An unmet need remains to better understand the reasons behind patients' lack of compliance and to better design patient support that ultimately leads to long-term use of preventive and therapeutic interventions for BCRL. This study sought to prospectively examine the incidence and sequelae of BCRL in a racially diverse cohort of patients with locally advanced breast cancer who underwent ALND as standard of care. We measured gross arm volume change and examined symptoms, loss of productivity, and compliance with recommended interventions via serial patient-reported outcomes. We also sought to evaluate whether patient-reported measures of lymphedema correlate better with patient-reported deficits in health-related quality of life (HRQOL) than do objective measures of lymphedema, in order to assess whether, as we hypothesized, patient perceptions and understanding of their care should guide screening programs. In seeking to design an optimal lymphedema screening program within an oncology clinic, understanding how lymphedema broadly affects patients and why they discontinue recommended therapy can be a first step toward improving compliance with preventive and therapeutic interventions and improving outcomes. Materials and methods 2.1 Study design This prospective study was part of a lymphedema screening initiative designed to screen breast cancer patients for lymphedema preoperatively and after ALND among patients who intended to continue postoperative follow-up visits at the Nellie B. Connally Breast Center at The University of Texas MD Anderson Cancer Center (NCT05056207). Part of this initiative included efforts to coordinate a referral to rehabilitative medicine after ALND so that a lymphedema-certified therapist could provide education on lymphedema prevention exercises.
Patients were consecutively recruited between 2018 and 2019. Patients received radiotherapy based on guidelines published by the National Comprehensive Cancer Network; when regional nodal irradiation was delivered, it included the undissected axilla, internal mammary nodes, and supraclavicular lymph nodes. Lymphedema screening was performed in coordination with other oncologic follow-up visits at 0–3 months, >3–6 months, >6–9 months, >9–12 months, and >12–16 months postoperatively. At each screening visit, arm volume was measured by a medical assistant using perometry, and patients were asked to complete questionnaires on aspects of BCRL. Patients were advised by lymphedema-certified therapists to wear sleeves/compressive garments if they showed early signs or symptoms of BCRL, as part of routine clinical practice. 2.2 Ethical considerations All participants voluntarily provided written informed consent to enroll in this institutional review board-approved study. All participants received care consistent with standard-of-care guidelines. Patient confidentiality and privacy were protected, as data were de-identified and stored on a secure server at all times. Only essential investigators had access to study data. There was no potential for harm in this survey-based study. 2.3 Patient assessments Arm volume was documented as the mean of three measurements per upper extremity at each visit, measured with a horizontal Perometer 400NT (Perosystem). Because not all patients had undergone perometry before surgery, changes in arm volume were defined as the ratio (volume of affected arm)/(volume of unaffected arm) at a given timepoint. Mild-moderate lymphedema was objectively defined as an increase of ≥5% by perometry and/or by physical examination findings by an oncologist. Patients were stratified according to having ever self-reported lymphedema and having ever been diagnosed with objective lymphedema during the study period. The intensity and distress of physical and psychosocial lymphedema symptoms were assessed with the Lymphedema Symptom Intensity and Distress-Arm (LSIDS-A), a validated tool for assessing self-reported arm lymphedema symptoms in breast cancer survivors. Items from the LSIDS-A are grouped into 7 symptom clusters: soft tissue sensation, neurological sensation, function, biobehavioral, resource, sexuality, and activity. The Work Productivity and Activity Impairment (WPAI) questionnaire was used to assess impairment with regard to work within and outside the home due to breast cancer. Responses to the WPAI questionnaire are used to calculate 4 outcomes: absenteeism, presenteeism, overall work impairment due to health, and percent activity impairment due to health. Finally, we used a Lymphedema Screening Initiative Questionnaire (LSIQ) to assess patients' understanding of lymphedema, their perception of and satisfaction with lymphedema screening and physical therapy interventions, and their adherence to lymphedema prevention and treatment-related activities. This questionnaire was developed with feedback from 2 breast cancer advocates with lymphedema and was evaluated for comprehensiveness and comprehensibility by 7 other patients with breast cancer. Cognitive interviewing techniques were also used to ask open-ended questions to determine whether other pertinent lymphedema-related items were addressed in the questionnaire and whether its wording was appropriate, prior to its use in this cohort.
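As a worked illustration of the arm-volume definition above, the following minimal sketch flags objective mild-moderate lymphedema from perometry readings. The ≥5% threshold and the affected/unaffected ratio follow the definition in the text; the readings themselves are hypothetical.

```python
# Mean of three perometry measurements per arm (mL); values are hypothetical.
affected = [2310.0, 2325.0, 2298.0]
unaffected = [2150.0, 2160.0, 2145.0]

vol_affected = sum(affected) / len(affected)
vol_unaffected = sum(unaffected) / len(unaffected)

# Per the text: change = affected/unaffected at a given timepoint, with
# objective mild-moderate lymphedema defined as an increase of >= 5%.
ratio = vol_affected / vol_unaffected
relative_increase_pct = (ratio - 1) * 100

objective_lymphedema = relative_increase_pct >= 5
print(f"ratio={ratio:.3f}, increase={relative_increase_pct:.1f}% "
      f"-> objective lymphedema: {objective_lymphedema}")
```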
2.4 Statistical analysis The distributions of clinical, demographic, and therapeutic characteristics were calculated for all enrolled patients and for patient groups stratified by self-reported or objective lymphedema status. Continuous variables are reported as mean (standard deviation [SD]) or median (interquartile range), as appropriate. Categorical variables are reported as counts and percentages. Comparisons between stratified groups were made with a two-sample t -test or Mann-Whitney U test for continuous variables and Chi-square tests or Fisher's exact tests for categorical variables. Trends in patient-reported outcomes over time were assessed with a linear mixed-effects model with random intercepts, accounting for associations between repeated measurements in the same subject. Time was treated as a fixed effect and the subject as a random effect. Associations between WPAI and LSIDS-A scores and lymphedema status were also tested with a linear mixed-effects model but with lymphedema status as the fixed effect. An "LSIQ ever" variable was created for each LSIQ question to assess the percentage of patients who ever agreed with a particular question across the study window. Associations between the LSIQ-ever variable and ever having self-reported or objective lymphedema were evaluated with Chi-square or Fisher's exact tests. A two-tailed p value of <0.05 was considered statistically significant in all analyses. Statistical analyses were done with R (version 4.1.1).
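To sketch the random-intercept model described above (time as the fixed effect, subject as the random effect), one possible fit is shown below. The paper's analyses were done in R 4.1.1, so this Python/statsmodels version, its toy data, and its column names are illustrative assumptions, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per screening window
# (a real analysis would use many more subjects and timepoints).
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3],
    "months_post_op": [3, 6, 3, 6, 3, 6],
    "lsids_soft_tissue": [2.1, 1.6, 3.0, 2.4, 1.2, 1.1],
})

# Linear mixed-effects model with a random intercept per subject:
# time enters as a fixed effect, and grouping by subject accounts for
# the correlation between repeated measurements in the same patient.
model = smf.mixedlm(
    "lsids_soft_tissue ~ months_post_op",
    data=df,
    groups=df["subject"],
)
result = model.fit()
print(result.summary())
```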
Results A total of 247 patients were enrolled, with a mean of 1.55 follow-up lymphedema assessments per patient (including, at a minimum, either perometry or a patient-reported outcome tool; median 1, range 1–4). These enrolled patients were identified from a total of 635 patients screened; of the remainder, 71 were non-English speakers, 82 had not undergone ALND, 62 declined enrollment, and 173 could not be contacted in coordination with their medical visit. The mean follow-up time was 8.57 months (median 8 months, range 1–16 months). The majority of patients (91%, n = 244) saw a lymphedema-certified therapist during the study period. Baseline, pre-operative perometry measurements were not available for 92 patients (37%). Almost all patients (236 of 247, 96%) underwent perometry measurements. All patients filled out the WPAI and LSIDS-A at least once, and all but one patient responded to the LSIQ. In addition, 96% of patients (236) responded in the LSIQ to a question regarding self-identification of lymphedema. 3.1 Patient characteristics Patient demographic, clinical, and pathologic characteristics are summarized in . Most participants (162 [66%]) were overweight or obese, 27 (11%) identified as Black, 18 (7%) identified as Asian, and 36 (15%) identified as Hispanic. All participants underwent ALND as part of cancer treatment, and the mean number of lymph nodes removed was 22.3. Seventy-five patients (30%) had clinical N2–N3 disease, 204 (83%) underwent mastectomy, and 151 (61%) had some form of breast reconstructive surgery during the study period. Fifteen patients (6%) underwent lymphovenous bypass surgery, of whom 8 underwent this procedure as a preventive intervention and 7 for treatment of lymphedema. Also, 199 patients (81%) received neoadjuvant chemotherapy and 224 (91%) received radiation therapy ( ), of whom 96% (214 patients) were treated with standard fractionation radiotherapy. 3.2 Univariate analysis of factors associated with BCRL status Patients who self-reported lymphedema were more likely to have had a higher number of lymph nodes removed ( p = 0.02). Both self-reported and objective lymphedema were associated with a higher number of involved lymph nodes ( p = 0.02 and p = 0.004, respectively), with having undergone a surgical procedure to treat lymphedema (both p = 0.026), and with an increased likelihood of reporting use of an intermittent pneumatic compression device to treat lymphedema ( p = 0.03 and p = 0.02, respectively) ( ). Segmental mastectomy was associated with a lower rate of objective lymphedema ( p = 0.02). 3.3 Lymphedema diagnosis and knowledge A total of 113 patients (46%) self-reported having lymphedema at some timepoint ( ). Forty-four percent (50) of patients with self-reported lymphedema had objective lymphedema based on perometry measurements. Of patients ever found to have lymphedema based on objective measures, 31 (38%) never self-reported lymphedema. Among patients who were not found to have lymphedema by objective measures, 59 (41%) still self-reported having lymphedema. The incidence of patient self-reported lymphedema increased with follow-up time after surgery, from 19% during 0–3 months after surgery up to 45% after 12 months ( p < 0.001) ( ).
Overall, 60% of patients (146 of 245 respondents) reported ever having known about lymphedema before meeting with their medical team, and 92% (226 of 245 respondents) reported that they had ever received education regarding their lifetime risk of lymphedema because of their breast cancer ( ). Fear of lymphedema was reported by 178 study respondents (73%) during the follow-up period ( ), a proportion that remained relatively stable over time ( ). Most patients (232 [95%]) responded that their health concerns about lymphedema were understood ( ). Over time, a higher percentage of patients reported being aware of how to identify signs of lymphedema (from 76% to 91%, p = 0.014) ( ). 3.4 Lymphedema treatment recommendations and patient-reported compliance More than half of patients reported knowing about lymphedema prior to meeting their breast cancer team (146 [60% of 245 respondents]). Most patients (208 [89% of 234 respondents]) reported having been provided with education at some point about exercises to prevent lymphedema ( ). Most patients reported performing lymphedema-prevention exercises, although the proportion of patients performing none of these exercises increased over time from 7% to 18%, and the proportion reporting performing lymphedema-prevention exercises daily decreased from 39% to 21% ( p = 0.002) ( ). No association was found between either patient-reported lymphedema ( p = 0.751) or objective, mild-moderate lymphedema ( p = 0.293) and self-reported performance of exercises to prevent lymphedema ( ). Fear of lymphedema was associated with exercise compliance: 161 patients (92% of 175) who ever expressed fear of lymphedema performed preventive exercises vs. 48 patients (80% of 60) among those who did not ( p = 0.02). Both patient-reported lymphedema and objective, mild-moderate lymphedema were associated with having been advised to use a compressive garment and with ever wearing one (both p < 0.001). Most patients (95 [83% of 114]) who recalled ever having been advised to wear a compressive sleeve/garment reported having worn one at some point during the study period ( ). Fear of lymphedema was more prevalent among patients who reported ever having worn a compressive sleeve/garment than among those who did not (51% vs 28%, p = 0.002). 3.5 Symptoms and productivity impairment associations with lymphedema Upper-extremity symptoms related to soft tissue sensation intensity, neurological sensation, function, and activity were reported to be worst within 3 months of surgery and improved thereafter during the study period ( ). Reports of biobehavioral, resource, and sexuality concerns were stable during the study period. Patient-reported lymphedema was associated with a higher incidence of patient-reported soft tissue sensation concerns ( p < 0.001), biobehavioral concerns ( p = 0.006), and resource/insurance concerns ( p = 0.001) but not with patient-reported problems regarding physical function, sexuality, or general activity ( ). The intensity ( p = 0.042) and distress ( p = 0.001) associated with changes in neurological sensation were worse among those with self-reported lymphedema. Objective, mild-moderate lymphedema was also associated with soft tissue sensation concerns ( p = 0.025), including related intensity ( p = 0.017) and distress ( p = 0.008), but not with any other symptoms or with measures of work and activity impairment ( ).
Work productivity factors, including absenteeism, presenteeism, overall work impairment, and activity impairment, all improved after the immediate postoperative period, as did patient-reported loss of productivity ( ). At 12–16 months after ALND, breast cancer-associated side effects had not fully resolved with regard to regular daily activities, work productivity, or hours missed from work. We found statistically significant positive associations between patient self-reported lymphedema and absenteeism ( p = 0.039), work impairment ( p = 0.043), and activity impairment ( p = 0.006) ( ). Patients requiring more intensive lymphedema treatments such as intermittent pneumatic compression (IPC) at 9–12 months after surgery had greater work impairment than those using compressive garments or bandaging ( p = 0.026), whereas no such association was found in the immediate (0–3 months) postoperative period ( p = 0.747). Worse impairment in productivity measured by the WPAI and worse soft tissue, neurological, biobehavioral, and resource/insurance concerns measured by the LSIDS-A were seen among those with self-reported lymphedema, whereas fewer such correlations were noted among those with an objective diagnosis (perometer- and physical examination-based) ( ). There was no association between either patient-reported or objective lymphedema and fear of lymphedema ( ). 3.6 Patient impressions of their care team Patients reported a high level of satisfaction with their overall cancer care (96% [236 of 246 patients]) and physical/occupational therapy care (94% [191 of 204 patients who saw a therapist]), neither of which varied by lymphedema status ( p = 0.338 and p = 0.606, respectively, ). Most patients (227 [93%]) stated that their medical team answered their questions about lymphedema ( ); 199 patients (81%) reported that the lymphedema screening they received helped alleviate their fears of lymphedema, and 179 (73%) reported that the medical team did ( ).
Discussion Although screening for lymphedema is a first step toward appropriate referral for treatment interventions, how best to perform screening in a way that supports preventive and therapeutic interventions and meets patients' broader lymphedema-related concerns has been under-studied. We sought to define patient-reported lymphedema symptoms in a cohort at high risk of developing lymphedema. Interestingly, in our study, although the number of lymph nodes removed was not correlated with the presence of objective lymphedema, the number of positive lymph nodes was strongly correlated with it ( ), suggesting that the location and pathology of affected nodes may play a larger role in the development of objective lymphedema.
Additionally, this study was unusual in identifying a lack of association between obesity and the presence of lymphedema ( ), given that lymphedema has consistently been shown to be associated with obesity [ , , , ]. The lack of significant associations between patient-reported or objective lymphedema status and most variables ( ) suggests that it is difficult to predict who will suffer from lymphedema among patients undergoing ALND. Thus, it is more important than ever to understand potential motivators for patient compliance with preventive and therapeutic interventions. Our study included 247 patients, a sample size similar to or greater than that of comparable studies investigating associations with BCRL , , . Though estimates of the incidence of BCRL vary widely [ , , , , ], the results from this study suggest that BCRL is more common than previous conservative estimates, which may be explained in part by the variety of definitions of BCRL used across studies. Notably, this study found that the incidence of BCRL was high as defined both by patient report and by objective measurement, especially in this racially diverse cohort. We provide a contemporary assessment of the impact of lymphedema on patient-reported outcomes within the first year after axillary lymph node dissection. We identified that changes in work impairment, soft tissue and neurological sensations, and financial concerns were all associated with patient-reported lymphedema; these items should be particularly targeted in lymphedema screening efforts. This was the first study to examine the role of patient-reported fear of lymphedema and its impact on noncompliance with BCRL interventions. Knowledge of preventive exercises was not associated with increased utilization of these preventive interventions, highlighting the importance of not only evaluating alternative prevention interventions but also determining means to increase patient adherence. Based on our findings, we propose that greater psychosocial support for breast cancer patients following ALND during their oncology follow-up visits will be important for improving lymphedema outcomes. Lymphedema may cause or exacerbate emotional distress, , and those with greater lymphedema-related distress have been found to have worse physical and mental health outcomes.[ , , , ] Guidelines suggest that clinical monitoring of breast cancer survivors should include psychosocial assessment, and our study identified several key areas in which a lymphedema screening program should target psychosocial support for patients, including addressing high and persistent fear of lymphedema. The associations we found between symptom domains and objective versus patient-reported diagnoses of lymphedema also highlight the importance of including patient symptomatology as a key component of lymphedema screening. Facilitating patient compliance with both preventive and therapeutic interventions targeting BCRL is critical to improving long-term outcomes, as better adherence to BCRL self-care modalities such as wearing compression garments and performing therapeutic exercises has been shown to be associated with prolonged maintenance of arm volume and consequently decreased BCRL progression . However, preventive exercise compliance decreased with time since ALND in this study.
The CALGB70305 trial reported no difference in lymphedema outcomes between education only and a focused education and prevention intervention; while the authors posited that poor adherence may have been the cause of this lack of benefit, a gold standard among prevention interventions has yet to be defined. Others have also found that a greater understanding of BCRL risk management may be associated with better adherence to recommended therapeutic and preventive strategies, as has providing refresher information in the clinic months after surgery. A better understanding of the factors affecting patient compliance over time with recommended therapies will be a key component of future trials examining preventive interventions. We found that fear of BCRL was associated with a higher incidence of patient-reported compliance with exercises and therapies to prevent or treat lymphedema. However, preventive exercise compliance still decreased with time since ALND. While fear and psychosocial factors likely play a major role, these results show that fear alone may not be driving adherence to exercises and therapeutic garment use. Complex interactions between fear, satisfaction with lymphedema-related care, and understanding of personalized risk and the sequelae of developing BCRL may shape how patients perceive the burden of BCRL; this may in turn affect their motivation and adherence to interventions that have been shown to be effective . A recent qualitative study examining patient experiences with BCRL highlighted the importance of healthcare professionals providing appropriate support for self-management of lymphedema, and our study lends further credence to the need for additional support for patients in the clinic. In our study, patient-reported lymphedema appeared to correlate more strongly with physical and emotional impairment than did objective measures of lymphedema, which highlights the need for improved methods of diagnosing BCRL. This brings into question the role of objectively measured lymphedema in the absence of patient-reported symptoms as part of screening programs, as a sizable percentage of participants with objective lymphedema did not self-report lymphedema. Coupled with the fact that patient-reported lymphedema was more strongly associated with several outcome measures, including symptoms and activity impairment, this suggests that patient report of lymphedema should play a bigger role in lymphedema screening going forward. Published data on lymphedema incidence and patient-reported outcomes present patients and providers with a broad range of expectations. In the Alliance Z1071 study of 488 patients who underwent ALND and were evaluated for lymphedema, the 12-month cumulative incidence of lymphedema (quantified as a ≥10% increase in limb volume) was 30.7% (95% CI 26.4–35.8), while 12.0% (95% CI 9.1–15.8) and 13.6% (95% CI 10.5–17.6) of patients reported symptoms of heaviness or swelling at that time, respectively. A group of 263 patients followed prospectively at Massachusetts General Hospital after ALND and RNI reported a 5-year cumulative incidence of moderate, objective lymphedema, defined as a relative limb volume change of ≥10% compared with the preoperative baseline, of 30.1%, with a 12-month value of approximately 10%.
Our 12-month values (using a mild-moderate limb volume difference of ≥5% comparing the affected with the unaffected limb) were higher for objectively measured lymphedema (36%) and could reflect that our cohort was at higher risk of developing lymphedema based on their stage of disease and the oncologic interventions required. Patients with even minimal limb volume changes have still been found to have impaired HRQOL, and others have found that upper extremity limb volumetric changes and grossly observed lymphatic changes do not sufficiently capture the burden faced from the patient perspective with regard to HRQOL. Newer patient-reported outcome tools that measure arm lymphedema outcomes have emerged and may help to better capture how BCRL fully affects patients. In our study, lymphedema symptoms were worst during the earliest time period ( ). Though some of these observed effects may be due to peri-operative inflammation, the early timepoint was not excluded from analysis because of the importance of understanding the holistic patient experience with regard to symptoms and how patients experience lymphedema after ALND. Our findings may also underscore the need for more sensitive means of diagnosing early-onset BCRL, including both external and internal physiological assessments. , Further study is needed to manage the subjective symptoms of those without objective findings. Our study had multiple limitations. First, pre-operative perometry measurements were not available for 37.2% of patients, so comparisons were made between the affected and unaffected arms rather than to a pre-operative baseline. This may have affected the diagnosis of objective lymphedema, as a non-trivial proportion of patients have been found to have asymmetry in upper extremity volume at the time of diagnosis, prior to any local-regional intervention. Patients were followed for approximately 1 year after ALND, and thus the chronic effects of BCRL, which likely peak after the first year, , , , were not measured; consequently, we could not determine whether earlier-onset patient-reported BCRL was ultimately associated with grossly observed BCRL later after surgery, as has been reported in the literature. Chronic lymphedema is quite likely to affect work productivity , and overall symptoms, and we could not discern those effects. Additionally, numerous other diagnosis- and treatment-related factors outside of lymphedema can affect productivity and work impairment ( ), and thus any observed effects likely cannot be attributed to lymphedema alone. Many patients also had only one assessment, which makes it challenging to interpret patterns in outcomes over time post-operatively. Second, we did not assess lymphedema (either by self-report or objective measures) or patient-reported outcomes before any oncologic intervention had begun, making it impossible to conclude whether patients returned to their baseline status. Third, there is the potential for sampling bias because our population included high numbers of patients at high risk of developing lymphedema because of extensive ALND and receipt of RNI; this complicates commenting on outcomes in a more varied patient population. Fourth, it is also possible that an intervention involving patient-reported outcome measures inquiring about lymphedema-related symptoms may in and of itself have influenced patients' thoughts and fears around lymphedema.
Additionally, the structure of the questions used when assessing patient-reported outcomes (agree/disagree) could have introduced acquiescence bias, thus potentially overestimating the effect size of various outcome variables. Conclusions In our high-risk cohort of patients with locally advanced breast cancer, nearly half reported having lymphedema after ALND, and almost three-quarters reported fear of lymphedema. Patient-reported BCRL appeared to be more strongly associated with impairment in work productivity and decreased HRQOL than were objective measures of lymphedema. Fear of BCRL appears to drive patient compliance with preventive and therapeutic interventions, and adherence wanes over time; both factors need to be incorporated into screening and preventive interventions. Additional strategies are needed to develop effective lymphedema patient screening, including asking patients to report their lymphedema status, as objectively measured lymphedema alone may miss many patients with impairment, distress, and other symptoms. Screening programs must also focus on better supporting patients' psychological and functional needs to improve long-term compliance with at-home interventions and to improve HRQOL outcomes. This study was supported by a Hearst Clinical Innovator Award from the University of Texas MD Anderson Cancer Center and the Center for Radiation Oncology Research, by support from the AIM Shared Resource, and by an Early Career Oncologist Award from the 104th Annual ARS Meeting. SFS has funding from the Emerson Collective Foundation and contracted research agreements with Alpha Tau, Exact Sciences, TAE Life Sciences, and Artios Pharmaceuticals. WAW receives personal fees from Exact Sciences and Epic Sciences. BDS has grant funding from Varian Medical Systems and royalty and equity interest in Oncora Medical. RL serves as a consultant for Monte Rosa Therapeutics.
Mycorrhiza-mediated recruitment of complete denitrifying Pseudomonas reduces N2O emissions from soil
f1f6c286-486b-4532-9ea9-8c7e66ef7fe5
9996866
Microbiology[mh]
Nitrous oxide (N2O) is a very powerful and long-lived greenhouse gas with 273 times the global warming potential of CO2 and is the most important ozone-depleting substance present in the atmosphere . However, constraining the global atmospheric N2O budget remains challenging, as N2O fluxes at the soil-atmosphere interface are highly dynamic and variable, characterized by "hot spots" and "hot moments" at microscales that are often <1 cm3 in volume and associated with crop residue patches in agriculture . Estimates of the N2O emission factors of crop residues vary widely, ranging from 0.17 to 2.9% , depending on residue properties and multiple environmental factors such as C/N ratio, soil type, water-filled pore space (WFPS) and temperature . The high spatiotemporal dynamics of N2O fluxes are due to the complex microbial processes underlying N2O production and consumption, and to how these are affected by other biotic and abiotic factors . As such, uncovering the microbial interactions at the microscale that mediate episodic N2O emissions is critical for the development of mitigation strategies. The production of N2O in soils is driven mainly by microbial processes such as nitrification and denitrification . Denitrification is regarded as the predominant N2O source from agricultural soils, including soils to which crop residues are returned, as the provision of degradable organic matter stimulates microbial respiration, resulting in oxygen depletion and soil anaerobiosis . Denitrification is a facultative process that enables the maintenance of microbial respiration. It involves a multistep reaction catalyzed by multiple enzymes, and the corresponding functional genes are used to characterize denitrifiers, which are highly diverse and complex. Denitrifiers can produce N2O using two types of dissimilatory nitrite reductase, encoded by the nirS and nirK genes, that catalyze the reduction of soluble NO2− to gaseous NO, followed by rapid conversion to N2O as a detoxification mechanism . Complete denitrifiers also synthesize the N2O reductase (NosZ), encoded by the nosZ gene, and yield N2 as the end product of denitrification, which is an important biotic sink for N2O . The NosZ protein phylogeny has two distinct groups, clade I and the newly described clade II. Clade I nosZ -possessing microorganisms are more likely to be complete denitrifiers, as 83% of genomes with clade I nosZ also possess nirS and/or nirK genes . In contrast, the majority of microorganisms possessing clade II nosZ appear to be non-denitrifying N2O reducers, which represent another important N2O sink without contributing to N2O production [ – ]. Hence, soil N2O emissions at the soil-atmosphere interface are highly dynamic, resulting from simultaneously occurring production and consumption processes. An in-depth understanding of the mechanisms by which soil microbial guilds govern the balance of these key processes is important for the development of effective N2O mitigation strategies. Microbial N2O production and consumption in crop residue patches and the surrounding soil are part of a complex suite of processes carried out by a consortium of microbiomes, including plant-associated microbes such as arbuscular mycorrhizal fungi (AMF). AMF are key organisms with a dual niche in host roots and in the bulk soil beyond the rhizosphere .
The extraradical fungal hyphae represent an important component and can proliferate into micropores inaccessible to plant roots and increase carbon flow into the soil , generating a unique microhabitat, the hyphosphere, an extension of the rhizosphere where hyphae and other microbes interact intensively in a manner similar to rhizosphere hotspots . This was shown by the positive feedback between AMF and hyphosphere phosphate-solubilizing bacteria in enhancing the mineralization of organic phosphorus . Hyphal exploration of residue patches may prime decomposition and increase nitrogen acquisition from plant residues . In addition, AMF hyphae reduce N2O emission from residue-affected soil [ – ], which is attributable to AMF-mediated substrate changes and/or alteration of the hyphosphere microbiome, for instance ammonium-oxidizing microbes or denitrifiers [ , , ]. Previous studies have shown that AMF indirectly affect denitrifying microorganisms by promoting water absorption or by promoting soil aggregation . However, direct evidence that AMF interact with the hyphosphere microbiome, especially with complete denitrifiers, remains elusive. Given that AMF receive 4–20% of total photosynthetic C from plants and that hyphae form a network redistributing C into unexplored nonrhizosphere zones , this knowledge gap has important implications for the potential exploitation of the soil microbiome in developing management practices that increase nutrient use efficiency while mitigating N2O emission. This is especially important in sustainable agriculture because current intensive agricultural practices result in a substantial decline in AMF diversity and abundance and hence hamper their potential to mitigate N2O emission. Here, we tested the underlying mechanisms responsible for the AMF hyphae-mediated reduction of N2O emission, with special emphasis on the microbial taxa capable of complete denitrification in the hyphosphere. We first identified the major players and main pathways by integrating quantitative real-time PCR of the functional genes with amplicon sequencing based on DNA and RNA analysis. We then isolated the most responsive bacterial genus ( Pseudomonas in this case) and tested the chemotaxis, growth and N2O consumption of the isolated strains in response to hyphal exudates using in vitro cultures. Subsequently, the target strain was reinoculated into sterilized residue patch soils to validate the results of the in vitro cultures. Finally, we tested whether a positive correlation between AMF abundance and nosZ gene copies occurred in agricultural fields. We hypothesized that bacteria colonizing hyphae, e.g. nosZ -type complete denitrifiers, were the major players responsible for the reduced N2O emissions. Specifically, we hypothesized that hyphal exudates, in particular carboxylates, elicited the recruitment of complete denitrifiers by AMF hyphae and stimulated their functions in the hyphosphere. We envisage that a mixture of hyphosphere microbes in conjunction with hyphal metabolites has great potential to reduce N2O emission. Part 1: N2O emissions and denitrifying communities in response to AMF hyphae Two pot experiments (pot expts 1 and 2) were conducted to examine whether N2O production in faba bean ( Vicia faba L.) residue patches declined in the presence of AMF hyphae. We also analyzed the abundance and structure of N2O producers and N2O reducers in all patches with and without AMF hyphae.
Pot expt 1: N2O emissions as affected by the presence of AMF

Microcosm setup
Microcosms with two chambers, one root chamber for plant growth (3 × 10 × 15 cm3) and one hyphal chamber for hyphal growth (7 × 10 × 15 cm3), were constructed (Fig. ). The two chambers were separated by a 30-μm or a 0.45-μm mesh that either allowed or prevented AMF hyphal access to the hyphal chamber; in all cases, roots could not access the hyphal chamber. In each hyphal chamber, we introduced a patch enclosed in a 30-μm pore nylon mesh bag (4 × 7.5 cm2, 5 cm high) that could be filled with residues. A gas probe was inserted into the patch to collect gas samples for measuring the N2O concentration as an indicator of N2O production in the patch (Fig. ).

Plant growth substrate and AMF inoculum
The soil was collected from bare arable land at Quzhou Experimental Station (36° 52′ N, 114° 01′ E, 40 m a.s.l.) in Quzhou County, Hebei Province, North China. The soil was air-dried, sieved (< 2 mm) and mixed 1:1 (w/w) with sand to serve as the growth substrate in the pot experiments. The substrate was γ-irradiated at a maximum dose of 32 kGy to eliminate indigenous AMF. The root chamber was inoculated with the AMF Funneliformis mosseae (HK01). Details are given in the supplementary information.

Gas probe and residue patches
Gas probe: A stainless-steel tube (15 cm high, 15 cm3 volume) was sealed gas-tight at its end. Two opposing windows (2 × 6 cm2) were opened 0.5 cm from the end of the tube and covered with a polyvinylidene difluoride (PVDF) membrane (0.22 μm) that was air-permeable but water-impermeable (Fig. ).
Residue patches: Patch materials consisted of 13 g DW (dry weight) substrate mixed with either 2 g DW milled residues (sterilized or unsterilized) or, as control, 2 g substrate. The base of each gas probe was wrapped within each patch bag. Faba bean stubble was used as residue (total carbon (TC) 36.91%, total nitrogen (TN) 3.19%, C:N ratio 11.6). The stubble was oven-dried at 40 °C to preserve the root microbiome (unsterilized residue, NS) and ground in a ball mill. Portions of the milled stubble were autoclaved at 121 °C for 30 min for use as sterilized residue (S). Details are given in the supplementary information.

Experimental design
Pot expt 1 comprised two factors: (1) patch type, i.e., sterilized or unsterilized faba bean residue patches (13 g substrate mixed with 2 g sterilized or unsterilized residue) or a soil-only control (15 g substrate); and (2) AMF, i.e., presence or absence of AMF, with hyphal access to the hyphal chamber either allowed or denied. Each treatment had 8 replicates. The root chamber contained 450 g sterilized substrate and 50 g AMF inoculum. Nutrients were supplied to the root chamber to ensure sufficient nutrients for plant growth by adding 100 mg kg−1 N (Ca(NO3)2·4H2O), 20 mg kg−1 P (KH2PO4), and 100 mg kg−1 K (K2SO4). The hyphal chamber contained 1500 g sterilized substrate only. A sterile 50-mL centrifuge tube was placed in the center of the hyphal chamber to reserve space for the subsequent addition of the patch. Two maize (Zea mays L.) seeds were sown in the root chamber and thinned to one seedling after germination. The centrifuge tube was replaced with a patch 30 days after maize planting: each patch, enclosed in a 30-μm mesh bag and with a gas probe attached, was placed in the spot reserved by the centrifuge tube (Fig. ). Then, 5 mL of a microbial filtrate derived from the substrate soil was added to each patch to equalize the microbial communities other than AMF, and the patch was covered with 20 g substrate. Soil moisture was maintained at 60% WFPS with deionized water by weighing the pots daily, following Li et al. The microcosm experiment was conducted in a greenhouse at China Agricultural University, Beijing, at 25–30 °C (day)/18–22 °C (night) and 60–80% relative humidity.
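For concreteness, the full factorial layout described above can be written out as a small design table. The following R sketch simply enumerates the 3 patch types × 2 AMF levels × 8 replicates; the treatment labels are ours, not the authors':

```r
## Full factorial layout of pot expt 1 (labels are illustrative).
design <- expand.grid(
  patch = c("control", "Sfaba", "NSfaba"),  # soil-only, sterilized, unsterilized residue
  AMF   = c("-AMF", "+AMF"),                # 0.45-um vs 30-um mesh separator
  rep   = 1:8
)
nrow(design)  # 48 microcosms in total
```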
Addition of inorganic nitrogen fertilizers
Thirty-six days after patch addition (66 days after maize planting), 7 mL of 15 mM (NH4)2SO4 (NH4+-N treatment) or 30 mM KNO3 (NO3−-N treatment) solution was injected into each patch. Either solution supplies 0.21 mmol N, i.e., 2.94 mg N per 15 g DW patch, corresponding to 0.196 mg N g−1 DW patch. The solution was injected as two 3.5-mL portions with a 1-h interval between injections to minimize diffusion into the surrounding substrate. This resulted in four replicates of each nitrogen addition treatment. Gas collection details are given in the supplementary information.

Pot expt 2: gene and transcript analysis of denitrifiers
In pot expt 2, with a duration of 55 days, we investigated whether AMF affected the abundance and expression of the nosZ gene in residue patches. Two factors were analyzed: (1) presence or absence of AMF and (2) harvest time, corresponding to days 24 (T1) and 34 (T2) after patch placement. Each treatment had 5 replicates. The microcosm setup, plant growth substrate, AMF inoculum, and patches were similar to those of pot expt 1 with the following modifications. The patch effect was enlarged by increasing the pot size (Fig. ); details are given in the supplementary information. Sufficient nutrients were supplied to both chambers and to the patches by adding 200 mg kg−1 N (Ca(NO3)2·4H2O), 20 mg kg−1 P (KH2PO4), and 100 mg kg−1 K (K2SO4). Here, NO3−-N was added to all chambers, including the patch chambers, to minimize N diffusion from the patch into the surrounding soil. The experimental procedure was otherwise similar to that of pot expt 1. A mixture of 200 g DW substrate and 2 g DW milled unsterilized residues was placed in each patch 21 days after maize planting, and microbial filtrates were added to each patch. The N2O concentrations in the headspace of the patch chamber were monitored from day 4 after patch placement at 2-day intervals until day 32 by withdrawing 10 mL of headspace gas with a syringe at 0 and 3 h after the chamber was closed. The sampling times were thus 09.00 am and 12.00 noon; this 3-h interval was selected based on the linearity of N2O accumulation (R2 = 0.96) observed in a preliminary experiment (Table S ). Fluxes and cumulative N2O emissions were calculated using formulae described previously.
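The cited formulae are not reproduced in the text; the calculation can nevertheless be sketched with the standard static-chamber approach (ideal gas conversion of the mixing-ratio increase, then trapezoidal integration of fluxes over sampling days). In the R sketch below, the headspace volume, soil mass, temperature, and all measured values are hypothetical, not taken from the study:

```r
## Two-point chamber flux calculation, a minimal sketch with assumed values.
c0_ppm <- 0.33      # N2O mixing ratio at chamber closure (ppm), hypothetical
c3_ppm <- 1.25      # mixing ratio after 3 h (ppm), hypothetical
dt_h   <- 3         # closure interval (h)
V_L    <- 0.12      # chamber headspace volume (L), assumed
m_soil <- 0.015     # soil dry weight in the patch (kg), assumed
temp_K <- 298.15    # chamber temperature (K), assumed

## Ideal gas: mol of gas in the headspace, then ug N2O-N per ppm
n_gas       <- 101.325 * V_L / (8.314 * temp_K)  # mol (P in kPa, V in L)
ugN_per_ppm <- n_gas * 1e-6 * 28 * 1e6           # 28 g N per mol N2O

flux <- (c3_ppm - c0_ppm) * ugN_per_ppm / dt_h / m_soil
flux  # ug N2O-N kg-1 soil h-1

## Cumulative emission: trapezoidal integration of the flux time series
days   <- c(4, 6, 8, 10)            # sampling days
fluxes <- c(2.1, 5.4, 3.8, 1.9)     # ug N2O-N kg-1 h-1, hypothetical
cum    <- sum(diff(days) * 24 * (head(fluxes, -1) + tail(fluxes, -1)) / 2)
cum    # ug N2O-N kg-1 soil over the sampling period
```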
Plant harvest and determination of soil physicochemical properties
Pot expt 1 was harvested 6 days after the addition of inorganic nitrogen. Pot expt 2 was harvested twice, on day 24 (i.e., 45 days after maize planting) and day 34 (i.e., 55 days after maize planting) after patch placement. Details of the harvest procedure and of the determination of soil water content, dissolved organic carbon (DOC), total dissolved nitrogen (TDN), mineral N concentrations, hyphal length density (HLD), TC, TN, ammonium (NH4+-N), and nitrate (NO3−-N) concentrations are given in the supplementary information.

DNA and RNA extraction, cDNA synthesis, real-time PCR, high-throughput sequencing, and shotgun metagenomic sequencing
In the two pot experiments, soil DNA and RNA were extracted from 0.50 g and 2 g fresh soil using the FastDNA SPIN Kit (MP Biomedicals, Santa Ana, CA) and the RNA PowerSoil Total RNA Isolation Kit (Mo Bio, Carlsbad, CA), respectively, according to the manufacturers' instructions. Complementary DNA (cDNA) was synthesized from 1 μg RNA using a PrimeScript RT Reagent Kit with gDNA Eraser, which includes a genomic DNA elimination step. Real-time quantitative PCR (qPCR) of the nirK, nirS, and nosZ (clade I and II) genes was conducted on a QuantStudio 6 Flex instrument (Applied Biosystems, Waltham, MA) using the primer pairs F1aCu/R3Cu, Cd3aF/R3cd, nosZ2F/nosZ2R, and nosZ-II-F/nosZ-II-R; primer sequences and thermal conditions are shown in Table S . The microbial communities harboring the marker genes nirK, nirS, and clade I nosZ were characterized by paired-end (2 × 300 bp) amplicon sequencing on the Illumina MiSeq platform. To further explore the potential microbial functions responding to AMF, DNA samples from the second harvest of pot expt 2 were subjected to shotgun metagenomic sequencing on the Illumina NovaSeq platform with a paired-end protocol. Details of DNA and RNA extraction, cDNA synthesis, qPCR, high-throughput sequencing, and metagenomic sequencing are given in the supplementary information.
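Absolute gene abundances from qPCR are conventionally derived from a plasmid standard curve. A minimal R sketch of that step, and of the nosZ I/(nirK + nirS) ratio reported later in the text, is given below; the slope, intercept, Ct values, volumes, and copy numbers are all hypothetical, not the study's calibration:

```r
## Absolute qPCR quantification from a standard curve (assumed parameters).
slope     <- -3.36   # standard-curve slope (~98% amplification efficiency)
intercept <- 38.5    # Ct corresponding to 1 copy per reaction

ct_to_copies <- function(ct) 10^((ct - intercept) / slope)

ct_nosZ      <- 24.8   # measured Ct for one sample, hypothetical
elution_uL   <- 100    # DNA eluate volume (uL), assumed
template_uL  <- 2      # template added per reaction (uL), assumed
soil_g       <- 0.5    # soil extracted (g)
copies_gsoil <- ct_to_copies(ct_nosZ) * (elution_uL / template_uL) / soil_g
copies_gsoil           # clade I nosZ copies g-1 soil

## Ratio used in the text: nosZ I / (nirK + nirS)
nosZ1 <- 3.2e7; nirK <- 4.1e7; nirS <- 2.6e7  # copies g-1, hypothetical
nosZ1 / (nirK + nirS)
```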
Part 2: in vitro experiments: chemotaxis, growth, and N2O production by isolated denitrifiers in response to hyphal exudates

Isolation, identification, and genome sequencing analysis
Denitrifier strains were isolated from patches in the presence/absence of AMF at the second harvest of pot expt 2 to examine the denitrifier community enriched in the hyphosphere. Fresh soil was vortexed and suspended in ddH2O, and 10^5-fold dilutions of the soil suspension were spread on bromothymol blue (BTB) agar plates to isolate denitrifiers. Each sample was plated in triplicate. The plates were incubated at 30 °C for 1–3 days, and separate blue colonies were isolated and purified by repeated streaking on BTB plates. Total bacterial DNA of each isolate was extracted from 1 mL of culture suspension with a genomic DNA extraction kit (Tiangen Biotech, Beijing, China). The bacterial primers 27F/1492R were used for 16S rDNA amplification, and sequencing was performed by Tsingke Biotech, Beijing, China; PCR thermal conditions are shown in Table S . Following dereplication at a cutoff of 99% sequence similarity, the sequences were aligned with reference sequences in the National Center for Biotechnology Information (NCBI) GenBank database, and a phylogenetic tree was constructed by the neighbor-joining method with bootstrap analysis of 1000 replicates in MEGA version 5. The primers nosZ1527F/nosZ1773R were used to examine whether the Pseudomonas isolates possessed the nosZ gene (Table S ). The target band was detected, sequenced, and identified by a BLAST search against the NCBI GenBank database. Three Pseudomonas fluorescens isolates (JL1, JL2, and JL3) possessing the nosZ gene were obtained, and their draft genomes were sequenced. Details are given in the supplementary information.

Collection and analysis of hyphal exudates
An in vitro two-chamber culture was established to collect hyphal exudates for examining the response of P. fluorescens JL1 to hyphal exudates (Fig. ). The AMF strain used, Rhizophagus irregularis MUCL 43194, was grown on axenically produced transformed carrot (Daucus carota L.) roots. Growth and hyphal exudate harvesting followed a previously described protocol. The collection of hyphal exudates and the analysis of sugars, carboxylates, and amino acids in the exudates are described in the supplementary information. Analysis revealed concentrations of 7.16 mM TC and 2.35 mM TN in the exudate solutions; these values were used as references for the subsequent experiments.

Serum bottle assay
A sealed serum bottle assay was conducted to examine the effects of hyphal exudates and their major compounds on net N2O production by P. fluorescens JL1. Hyphal exudate was applied as one treatment. Fructose, trehalose, citrate, malate, glutamine, and glutamic acid were each applied as a carbon source treatment, because these compounds were detected at high concentrations in the hyphal exudates; glucose was used as the control, giving 8 treatments in total. The same liquid MSR medium as that used for the collection of hyphal exudates (see supplementary information) was used to dissolve each specific carbon source, and the carbon and nitrogen contents of the medium were adjusted to the levels in the hyphal exudate solutions (7.16 mM C and 2.35 mM N). The hyphal exudate medium and the specific compound media were supplemented with 10% FeNaEDTA (relative to MSR medium) to ensure denitrification. The medium pH was adjusted to 7.2, and the medium was filtered through an Acrodisc syringe filter (0.22-μm Supor membrane, Pall Corporation, Port Washington, NY) to obtain the carbon-based medium (CB medium). The CB medium was supplemented with 92.84 mM glucose-C to reach an initial carbon concentration of 100 mM C, and NO3−-N was supplemented to 10 mM to ensure denitrification. The pellet obtained by centrifugation of 1 mL of P. fluorescens JL1 suspension was re-suspended in 10 mL of modified CB medium and transferred to a 120-mL anaerobic serum bottle. All serum bottles were shaken at 180 rpm and maintained at 30 °C, and the gas was measured after 0.5, 1, 2, 3, 6, 8, 10, and 12 h. Each treatment was set up in triplicate. Details are given in the supplementary information.

Assay of gene expression of denitrifiers
The gene expression of the complete denitrifier P. fluorescens JL1 in response to hyphal exudates was determined. Citrate and malate were selected as carbon sources based on the results of the serum bottle assay, with glucose as the control; the experimental design was otherwise the same as in the serum bottle assay. At 0.5, 1, 2, 3, and 6 h, total RNA was extracted and the relative changes in nirS and nosZ expression were calculated by the 2^−ΔΔCt method. Details are given in the supplementary information.
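The 2^−ΔΔCt normalization reduces to a few lines of arithmetic. The R sketch below shows the calculation for nosZ relative to a reference gene and to the glucose control; the choice of reference gene and all Ct values are hypothetical, not the study's data:

```r
## 2^-ddCt relative expression, a minimal sketch with assumed Ct values.
ct <- data.frame(
  treatment = c("glucose", "glucose", "citrate", "citrate"),
  ct_nosZ   = c(26.1, 26.3, 23.9, 24.2),  # target gene Ct
  ct_ref    = c(15.0, 15.1, 15.0, 14.9)   # reference gene Ct
)
ct$dct <- ct$ct_nosZ - ct$ct_ref                 # dCt per sample
ddct   <- with(ct, tapply(dct, treatment, mean)) # mean dCt per treatment
fold   <- 2^-(ddct - ddct["glucose"])            # 2^-ddCt vs the control
fold   # fold change in nosZ expression, citrate relative to glucose
```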
Chemotaxis assay
M8 basal medium solidified with 0.3% agar was used to assay the chemotaxis of P. fluorescens JL1 towards hyphal exudate and its main compounds. The carbon source was supplied as CB medium, as in the serum bottle assay; carbon-free CB medium was used as the control (CK), here and in the growth assay below. M8 basal medium was autoclaved and cooled to ~50 °C, and CB medium was added prior to plate pouring to a final carbon concentration of 716 μM (corresponding to 10% of the C concentration of the hyphal exudates). After thorough mixing, the medium was dispensed into culture plates. One microliter of P. fluorescens JL1 suspension (OD600 0.20, see supplementary information) was spotted onto the center of the agar layer, and the plates were incubated at 28 °C. The area covered by each strain, i.e., the swimming motility zone (as depicted by radial growth), was monitored and photographed after 48 h.

Growth assay
CB medium (see the serum bottle assay) was used to assay the growth of P. fluorescens JL1 in response to hyphal exudates or their main compounds. The medium was supplemented with 300 mg L−1 NH4+-N (NH4Cl) and 10% vitamins (relative to MSR medium) to ensure sufficient nutrients for bacterial assimilation. P. fluorescens JL1 suspension was inoculated into 250 μL CB medium and cultured in a 10 × 10-well honeycomb microplate (initial OD600 0.05). The OD600 was measured every 2 h at 30 °C for 24 h using a Bioscreen C automated microbiology growth curve analysis system (Oy Growth Curves Ab, Turku, Finland), with 4 replicates per treatment.
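Growth parameters can be extracted from such OD600 series by fitting a logistic model; this is our suggestion for summarizing the curves, not a step the authors describe. A base-R sketch with hypothetical readings (not Bioscreen output) follows:

```r
## Logistic fit to an OD600 time series (hypothetical data).
time <- seq(0, 24, by = 2)  # h
od   <- c(0.05, 0.10, 0.18, 0.30, 0.46, 0.63, 0.77,
          0.86, 0.92, 0.95, 0.96, 0.97, 0.97)

fit <- nls(od ~ K / (1 + ((K - od0) / od0) * exp(-r * time)),
           start = list(K = 1, od0 = 0.05, r = 0.3))
coef(fit)  # K: carrying capacity; r: maximum specific growth rate (h-1)
```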
Part 3: inoculation experiment

The effectiveness of P. fluorescens JL1 in reducing N2O emissions was validated by inoculating the strain into patches amended with different carbon sources in the −AMF treatment, to compare their effects with those of in situ hyphal exudates. The design of the microcosm, growth substrate, nutrient supplements, and patch composition was the same as in pot expt 2. After pre-incubation for 7 days at 25 °C and 60% WFPS, the patch materials were sterilized to eliminate indigenous microorganisms. Each patch was inoculated with P. fluorescens JL1 suspension at a final concentration of 10^8 CFU g−1 soil. The patches were placed 21 days after maize planting. Ten days after patch placement, 2 mL of carbon source solution in sterile H2O (pH 7.5) was injected slowly into the center of each patch at 18:00 on the day before the onset of gas measurements. There were four patch treatments: (1) absence of AMF (−AMF) with H2O; (2) −AMF with 7.16 mmol glucose-C kg−1 soil; (3) −AMF with 7.16 mmol citrate-C kg−1 soil; and (4) presence of AMF (+AMF), each with 4 replicates. Gas was monitored every 2 days from day 2 to day 24 after patch placement. Eight milliliters of headspace gas was collected from the patch chamber with a syringe at 0, 1.5, and 3 h after the chamber was closed, and 8 mL of N2 was replenished immediately after every gas sampling to balance the air pressure in the patch. The sampling time was 9.00 am to 12.00 noon. The soil moisture content was maintained at 60% WFPS by adjusting the weight of each pot with sterile H2O. RNA extraction, cDNA synthesis, and the relative changes in nirS and nosZ expression were conducted and assessed as described above. Bacterial numbers in the patches were counted as total colony-forming units (CFU g−1 soil).

Part 4: measurements from a long-term intercropping field experiment

Samples were collected from a long-term intercropping experiment to test whether a positive correlation between AMF abundance and nosZ gene copies occurs in agricultural ecosystems. An intercropping experiment was selected because intercropping has been shown to increase AMF abundance compared with monocultures. The long-term experiment started in 2010 at Baiyun Experimental Station, Gansu Province, Ningxia Hui Autonomous Region, Northwest China. The experiment was a split-plot completely randomized block design: two planting patterns, faba bean monoculture and faba bean intercropped with maize, at two P application rates (zero P or 40 kg P ha−1 year−1), with each treatment in triplicate. Details of the field management scheme have been published by Li et al. Soil samples were collected at the full-bloom stage of faba bean. Samples close to the faba bean plants were taken from the top 20 cm of the soil profile with a 35-mm-diameter auger; five cores were collected randomly from each plot and combined into one composite sample per plot. The composite samples were sieved through a 2-mm mesh; one portion was stored at −80 °C for molecular analysis, and the remainder was air-dried for the determination of HLD. Soil DNA extraction, real-time PCR of the nosZ gene, and HLD were conducted and assessed as described above.
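The field test reduces to a correlation between hyphal length density and nosZ abundance across plots. A minimal R sketch with hypothetical plot-level values (12 plots: 2 cropping patterns × 2 P rates × 3 replicates) is:

```r
## HLD vs nosZ association across field plots (hypothetical data).
hld  <- c(1.2, 1.8, 2.4, 2.9, 3.6, 4.1, 4.8, 5.3,
          2.1, 2.6, 3.2, 3.9)                      # m g-1 soil
nosZ <- c(1.1, 1.9, 2.2, 3.0, 3.4, 4.2, 4.6, 5.5,
          2.0, 2.4, 3.1, 4.0) * 1e7                # copies g-1 soil

cor.test(hld, log10(nosZ))      # Pearson r and P for the association
summary(lm(log10(nosZ) ~ hld))  # slope of the fitted relationship
```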
Statistical analysis

Statistical analyses were conducted in R 4.0.3 or SPSS version 22.0, and figures were produced with the ggplot2 R package or Origin 2021. Details of the statistical analyses are given in the supplementary information.

Pot experiments: AMF reduced N2O emissions in residue patches

In pot expt 1 (Fig. ), AMF hyphae grew into all patches: the average HLD in patches was 5.29 ± 0.42 m g−1 soil in the +AMF treatment (Fig. S A), approximately 1.9 times higher than that in the −AMF treatment (1.84 ± 0.17 m g−1 soil). High N2O concentrations occurred only in the unsterilized faba bean (NSfaba) patches of the −AMF treatment, 24 h after NO3− application but not subsequently. In contrast, in NSfaba patches the N2O concentration in the +AMF treatment had declined significantly relative to the −AMF treatment by 24 h after NO3− application and remained low, at near-atmospheric concentrations comparable to those in the control and sterilized faba bean (Sfaba) patches (Fig. A and Fig. S B). The N2O concentration in NSfaba patches amended with NH4+-N was low, with no significant differences between AMF treatments (Fig. A and Fig. S B). In pot expt 2, the temporal dynamics of N2O emissions from residue patches amended only with NO3−-N were monitored over 1 month. The average HLD in patches was 5.72 ± 0.15 m g−1 soil in the +AMF treatment, 3.8 times higher than that in the −AMF treatment (1.18 ± 0.05 m g−1 soil) (Fig. S D). The presence of AMF hyphae significantly reduced N2O emission from residue patches from day 8 until the end of the experiment, with fluxes declining by up to 70% and cumulative emissions by 63% compared with the −AMF treatment (Fig. B).

Pot experiments: AMF promoted the abundance and expression of the nosZ gene and enriched N2O-reducing Pseudomonas in residue patches

The abundances of the key genes involved in N2O production (nirK and nirS) and consumption (clade I and II nosZ) in the residue patches were determined. In pot expt 1, AMF significantly increased nirS and clade I nosZ gene copies and the ratio of nosZ I/(nirK + nirS) only in the NSfaba patches, not in the Sfaba patches or the soil-only control (Fig. A). Clade I nosZ gene copies were negatively correlated with N2O concentrations in the NSfaba patches under the NO3−-N treatment (r = −0.78, P = 0.021) but not under the NH4+-N treatment (r = −0.23, P = 0.59) (Fig. S A). Moreover, across all patches, clade I nosZ gene copies were positively correlated with HLD (Fig. S B). In pot expt 2, AMF significantly increased the nirK transcript copies at the first harvest, and the clade I nosZ gene and transcript copies and the transcript ratio of nosZ I/(nirK + nirS) at the second harvest (Fig. B, C). The abundance and expression of the nirS and clade II nosZ genes were not significantly affected by AMF (Fig. B, C).
Multiple stepwise regression indicated that the variation in N 2 O emission was best explained by nirK gene expression at the first harvest and by clade I nosZ gene expression at the second harvest (Table S ). Moreover, the clade I nosZ gene and transcript copies were positively correlated with HLD and DOC concentrations, both of which were significantly increased by AMF at the second harvest (Figs. S C and S C, D). Based on these results, we focused on the clade I nosZ community in the subsequent experiments. Amplicon sequencing analysis at the gene level in the two pot experiments, and also at the transcript level in pot expt 2, was conducted to identify the N 2 O-reducing community (targeting the clade I nosZ community) in the residue patches. At the genus level, Pseudomonas, Achromobacter, Shinella, and Sinorhizobium were detected. Pseudomonas was the most abundant genus, accounting for 32% in the NSfaba patches at the gene level in pot expt 1 (Fig. A), and for 24 and 58% at the gene and transcript levels, respectively, in pot expt 2 (Fig. S A, B). At the OTU level, AMF significantly altered the structure of the clade I nosZ community based on both gene (pot expts 1 and 2) and transcript (pot expt 2) analyses (Tables S and S ). For the clade I nosZ community, linear discriminant analysis (LDA) effect size (LEfSe) showed that Pseudomonas was remarkably enriched in the presence of AMF within each patch type in pot expt 1 (Fig. B). Similarly, in pot expt 2, AMF significantly increased the relative abundance of Pseudomonas within the clade I nosZ community by 40% at the gene level at the first harvest (Fig. S A) and by 27% at the transcript level at the second harvest (Fig. S B). Moreover, cumulative N 2 O emissions were negatively correlated with the relative abundance of Pseudomonas at both the gene and transcript levels (r = −0.45, P < 0.05; r = −0.57, P < 0.01; Fig. C). Shotgun metagenomics of the microbiomes in the patches in the −AMF and +AMF treatments (pot expt 2) at the second harvest was carried out. Sequences of predicted nosZ genes from the KEGG database were assigned against the NCBI NR database to assess the taxonomic composition of the N 2 O-reducing community. Within the N 2 O-reducing community, Pseudomonas fluorescens was the most abundant species, accounting for 4.35% on average, and only the relative abundance of P. fluorescens increased significantly in the +AMF treatment (Fig. A). Carbon metabolism and microbial taxonomic composition were also analyzed. The relative abundances of key genes involved in the microbial citrate cycle (tricarboxylic acid [TCA] cycle), especially in P. fluorescens, and in 2-oxocarboxylic acid metabolism and glycine, serine, and threonine metabolism increased significantly in the +AMF treatment (Fig. B, C). Together, the altered carbon metabolism, in combination with the increased DOC content in the +AMF treatment, implies that the enrichment of P. fluorescens and the stimulation of N 2 O reductase might be associated with hyphal exudates.

In vitro experiment: cultivation of Pseudomonas

A total of 40 isolates taxonomically affiliated with Pseudomonas were obtained from patch samples collected at the second harvest in pot expt 2. The nosZ gene of the 40 isolates was amplified by PCR and sequenced, and nosZ gene sequences were detected in 27 isolates. The majority of the 27 nosZ-possessing isolates clustered with the same strain, Pseudomonas JL1, and were affiliated with P. fluorescens based on the phylogenetic tree constructed from 16S rRNA gene sequences (Fig. S A).
Three isolates (P. fluorescens JL1, JL2, and JL3) were then selected from the above 27 isolates for draft-genome sequencing. All three isolates possessed the full set of genes for complete denitrification, converting nitrate into N 2 . Using multiple sequence alignment, nos operon cluster analysis, and the associated signal peptide (twin-arginine translocation, TAT) approaches, the selected P. fluorescens strain JL1 was confirmed to possess a clade I TAT-dependent nosZ gene (100% identity to the Pseudomonas sequence WP_047225819.1). Subsequent assays in the in vitro and inoculation experiments were conducted using P. fluorescens strain JL1. Close attachment of P. fluorescens to AMF hyphae was observed microscopically in the in vitro cultures stained with 4′,6-diamidino-2-phenylindole (Fig. S A).

In vitro experiments: chemotaxis, growth, and N 2 O production by P. fluorescens

Glucose, fructose, trehalose, glutamine, glutamic acid, citrate, and malate were abundant in hyphal exudates (Table S ). P. fluorescens JL1 displayed very little chemotaxis or growth in the carbon-free medium, but its chemotaxis and growth increased quickly upon the addition of hyphal exudates (Fig. A and Fig. S B). The areas of swimming motility (indicating chemotactic ability) of P. fluorescens JL1 in the media supplemented with amino acids (glutamine and glutamic acid) or carboxylates (citrate and malate) were comparable to those obtained with hyphal exudates, and on average three times higher than those obtained with sugars (glucose, fructose, and trehalose) (Fig. A). However, the optical densities (ODs) of P. fluorescens JL1 in the media supplemented with amino acids and citrate were higher than those obtained with hyphal exudates and sugars (Fig. S B). P. fluorescens JL1 was cultured anaerobically to study the effects of hyphal exudates and their major compounds on N 2 O emission and the expression of the nirS and nosZ genes. Indeed, the N 2 O concentrations in P. fluorescens JL1 cultures receiving hyphal exudates or carboxylates (citrate, malate) were significantly lower than those in cultures receiving amino acids or sugars over the incubation period (Fig. B). Furthermore, nosZ gene expression and the transcript ratio of nosZ/nirS (except at 1 h) were highest in cultures receiving hyphal exudates, followed by the citrate and malate addition treatments, and were lowest in the glucose addition treatment (Fig. C and Fig. S C).

Inoculation experiment: validation that AMF exudates stimulated nosZ gene expression and reduced N 2 O production by P. fluorescens

An experiment with re-inoculation of sterilized residue patches with P. fluorescens strain JL1 was conducted to determine how AMF colonization and/or AMF exudates stimulated nosZ gene expression and hampered N 2 O production (Fig. ). Here, the bacterial numbers were > 10 7 CFU g −1 soil in patches inoculated with P. fluorescens JL1. The bacterial numbers in the +AMF and −AMF + citrate/glucose treatments were significantly higher than those in the −AMF + H 2 O treatment (Fig. A). Twelve days after patch addition and 2 days after carbon addition, the N 2 O fluxes were significantly lower in the +AMF and −AMF + citrate treatments than in the −AMF and −AMF + glucose treatments (Fig. B). Cumulative N 2 O emissions in the +AMF and −AMF + citrate treatments were 50 and 40% lower, respectively, than in the −AMF + H 2 O treatment, and approximately 80% lower than in the −AMF + glucose treatment (Fig. C).
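Cumulative emissions of the kind reported here are typically obtained by linear interpolation between sampling dates, i.e., trapezoidal integration of the flux series. A brief R sketch with invented fluxes:

```r
# Trapezoidal integration of a flux time series into a cumulative emission.
# Days and fluxes are illustrative, not study data.
day  <- seq(2, 24, by = 2)                        # sampling days
flux <- c(5, 9, 12, 10, 8, 6, 5, 4, 3, 3, 2, 2)   # ug N2O kg^-1 soil d^-1

# Sum of trapezoids between consecutive sampling dates.
cumulative <- sum(diff(day) * (head(flux, -1) + tail(flux, -1)) / 2)
cumulative   # ug N2O kg^-1 soil over the monitoring period
```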
Compared to the −AMF + H 2 O and −AMF + glucose treatments, nosZ gene expression was upregulated in the +AMF treatment, and the transcript ratio of nosZ/nirS increased in the +AMF and −AMF + citrate treatments (Fig. D).

Field experiment: correlation between AMF and the abundance of the clade I nosZ gene

We took samples from an 11-year-long intercropping field experiment. HLD and the abundance of the clade I nosZ gene in the maize/faba bean intercropping treatment were significantly higher than in the faba bean monoculture under zero P application (Fig. A, B). Furthermore, the abundance of the clade I nosZ gene was significantly positively correlated with HLD (Fig. C).
Returning crop residues to the field is an effective measure to increase carbon sequestration in agricultural ecosystems, but this gain can be offset by high N 2 O emission, especially when residues of N 2 -fixing legumes are returned. Crop residues in soils create unique micro-environmental conditions that are conducive to denitrification, by absorbing water from the surrounding soil and by stimulating microbial respiration through the dissolved organic carbon released during decomposition. The current study clearly demonstrates that (i) interactions between AMF and N 2 O reducers mitigate N 2 O emissions in residue patches, as evidenced by the alteration in N 2 O flux and the changes in the abundance and community composition of the hyphosphere microbiota in the two pot experiments; and (ii) carboxylates exuded by hyphae recruited a complete denitrifier (P. fluorescens) and triggered the expression of its nosZ gene (encoding N 2 O reductase), as evidenced by the chemotaxis, growth, and N 2 O production in the in vitro cultures and the inoculation experiment.

Interactions between AMF and N 2 O reducers mitigate N 2 O emission in patches

In pot expt 1, the presence of AMF hyphae suppressed N 2 O concentrations in the unsterilized faba bean (NSfaba) patches after NO 3 − application but not after NH 4 + application (Fig. A and Fig. S B). In pot expt 2, the size of the patches was enlarged to 202 g, and NO 3 − was supplied as basal fertilizer to all chambers, including patch chambers, to minimize N diffusion. The residue rate (10 g kg −1 ) was comparable to the crop residue rates used in previous studies under field and controlled conditions [ – ]. Here again, AMF hyphae consistently and significantly reduced the N 2 O flux from residue patches from day 8 after patch placement until the end of the experiment (Fig. B). The consistent results of the two experiments provide compelling evidence that AMF hyphae reduced N 2 O emissions in the residue patches, primarily by mediating the denitrification pathway, although the relative importance of this pathway among other processes may merit further exploration. Our results are in line with previous studies showing AMF-mediated reduction of N 2 O emission from soil, with or without residue amendment, under high soil moisture conducive to denitrification [ , , ]. The diversity and activity of the N 2 O-producing (nirK or nirS type) and N 2 O-reducing (nosZ type) microbial communities ultimately determine net N 2 O emissions. The relative abundance of bacteria possessing the nosZ gene is a good proxy for the N 2 O/(N 2 + N 2 O) ratio. In pot expt 1, AMF hyphae significantly increased the abundance of the clade I nosZ and nirS genes and the nosZ I/(nirK + nirS) ratio in the NSfaba patches (Fig. A). Given the high frequency of co-occurrence of nosZ with nirS, these results indicate that AMF hyphae may promote the growth and activity of clade I N 2 O reducers in residue patches. This was further supported by pot expt 2, in which AMF significantly increased the abundance and expression of clade I nosZ and the transcript ratio of nosZ I/(nirK + nirS) at the second but not at the first harvest (Fig. B, C).
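As background to the gene and transcript ratios discussed above, absolute qPCR quantification converts Ct values to copy numbers against a plasmid standard curve, after which a ratio such as nosZ I/(nirK + nirS) can be computed per sample. The R sketch below uses invented Ct values and an idealized standard curve purely for illustration.

```r
# Absolute quantification from a qPCR standard curve, then the
# nosZ I / (nirK + nirS) ratio. All Ct values below are invented.
std_log10 <- 3:8                                   # log10 copies of plasmid standard
std_ct    <- c(33.1, 29.8, 26.4, 23.0, 19.7, 16.3) # Ct of the standard dilutions
curve     <- lm(std_ct ~ std_log10)                # slope near -3.3 at ~100% efficiency

ct_to_copies <- function(ct) {
  b <- coef(curve)
  10^((ct - b[[1]]) / b[[2]])                      # invert the standard curve
}

samples <- data.frame(
  nosZ = ct_to_copies(c(24.1, 22.9)),
  nirK = ct_to_copies(c(25.6, 25.1)),
  nirS = ct_to_copies(c(26.2, 25.8))
)
with(samples, nosZ / (nirK + nirS))                # per-sample ratio
```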
Synergies between AMF and N 2 O reducers may therefore explain the decline in N 2 O production in residue patches. In pot expt 2 at the first harvest, the increase in the expression of the nirK gene (Fig. C) might be a response to the imposed anaerobiosis, which primes an initial pulse of emission. Hence, research on the dynamic changes of the N 2 O reducer/producer community is required in the future. In our experiment, no significant difference in the abundance and expression of clade II nosZ was observed between the −AMF and +AMF treatments (Fig. B, C), suggesting that these bacteria may be of relatively minor importance compared to the clade I type. Previous studies showed that clade I nosZ was dominant in the rhizosphere, whereas clade II dominated in bulk soils. It is likely that, in a similar fashion to the (mycor-)rhizosphere, the hyphosphere generated by the proliferation of AMF into the residues is favorable for the clade I nosZ community. The N rate applied to the patches (approximately 200 mg kg −1 ) was equivalent to the amount of fertilizer N typically used for cereal crops. High concentrations of NO 3 − in soil almost completely inhibit N 2 O reduction to N 2 , as NO 3 − reductase outcompetes N 2 O reductase for electrons supplied by labile organic carbon, including AMF exudates. A recent study showed that the reduction in the rate of N 2 O emissions in the presence of AMF under normal N inputs was greater than that under high N inputs in conventionally managed soil, but the opposite trend occurred in organically managed soil. Aside from the well-documented substrate control of denitrification, the interactions of AMF and hyphosphere microbes have also been shown to be regulated by nitrogen availability, yet this remains largely unexplored. It is therefore particularly desirable to investigate AMF-mediated denitrification mechanisms in the context of environmental controls in order to maximize the N 2 O mitigation potential of AMF.

Exudation of carboxylates by AMF hyphae recruits P. fluorescens and triggers nosZ gene expression in P. fluorescens

Soils contain diverse denitrifying bacteria such as Citrobacter, Pseudomonas, Ochrobactrum, and Burkholderia. A previous study reported that only a few members of the bacterial community (~10%) in residue patches responded to AMF colonization according to 16S rRNA gene microarray analysis. The results obtained from amplicon and metagenomic sequencing in the pot experiments and from isolation in the in vitro cultures support the conclusion that AMF hyphae consistently increased the relative abundance of N 2 O-reducing Pseudomonas, which was predominant in residue patches (Figs. A and A and Fig. S ). Moreover, cumulative N 2 O emissions were negatively correlated with the relative abundance and activity of Pseudomonas (Fig. C). This is the first report of N 2 O-reducing Pseudomonas responding directly and positively to AMF hyphal proliferation and thereby being responsible for low N 2 O emissions in residue patches. Pseudomonas spp. are fast-growing r-strategists enriched in nutrient-rich environments such as the rhizosphere and hyphosphere. In a similar fashion to the rhizosphere, the hyphosphere provides a unique niche in which microbial communities differ from those in the bulk soil because of hyphal exudates, as supported by the increased patch DOC concentrations in the +AMF treatment (Fig. S C). Most Pseudomonas isolates cultivated in vitro that possessed the nosZ gene belonged to P. fluorescens (Fig. S A). The three isolates (P. fluorescens JL1, JL2, and JL3) selected for draft-genome sequencing possessed all denitrifying genes and were complete denitrifiers; P. fluorescens F113 was previously reported as a typical “true denitrifier”. P. fluorescens attached effectively to AMF hyphae (Fig. S A), as was also observed in a previous study. Taken together, these results imply that the enrichment and stimulation of the complete denitrifier P. fluorescens in the hyphosphere can be attributed to AMF hyphal exudates. AMF hyphae exude organic carbon, mainly in the form of sugars, carboxylates, and amino acids. Previous studies showed that AMF hyphal exudates promoted the growth of phosphate-solubilizing bacteria and that fructose exuded by AMF stimulated the expression of phosphatase genes in Rahnella aquatilis. Here, we found that glucose, fructose, trehalose, glutamine, glutamic acid, citrate, and malate were abundant in hyphal exudates (Table S ), corroborating previous studies. AMF hyphal exudates significantly promoted the chemotaxis and growth of P. fluorescens (Fig. A and Fig. S B), reduced N 2 O emissions, and upregulated the expression of the nosZ but not the nirS gene (Fig. B, C and Fig. S C). Moreover, the effects of carboxylates on bacterial chemotaxis, N 2 O emissions, and gene expression were similar to those of hyphal exudates (Fig. ). Together, these results demonstrate that carboxylates exuded by hyphae act as attractants recruiting P. fluorescens and as stimulants triggering nosZ gene expression, resulting in a significant decline in N 2 O emissions. This was further validated in the inoculation experiment, in which cumulative N 2 O emission and nosZ gene expression in the citrate addition treatment were similar to those in the +AMF treatment, and in which N 2 O emission was lower and nosZ gene expression higher than in the glucose or H 2 O addition treatments (Fig. C, D). Thus, an N 2 O-reducing microbiome in residue patches was fostered by carboxylates exuded by AMF hyphae. A similar situation, with reduced N 2 O emissions after the addition of carboxylates such as citrate, succinate, and acetate, but not glucose, to soils or to pure cultures of Pseudomonas, was previously observed. The N 2 O reductase encoded by the nosZ gene is a weak competitor for electrons compared with other denitrifying reductases. NADH, the usual direct electron donor, is mainly produced in the citrate cycle and is more conducive to electron transfer to N 2 O reductase. Carboxylates such as citrate and malate in hyphal exudates enter the citrate cycle directly, whereas the metabolic use of glucose requires additional enzymatic conversions and consumes extra energy. Moreover, AMF increased the relative abundances of key genes involved in the bacterial citrate cycle, especially in P. fluorescens, in residue patches (pot expt 2, Fig. B, C). Taken together, these results imply that hyphal exudates (with carboxylates as major components) promote the citrate cycle, trigger complete denitrification, and subsequently reduce N 2 O emissions by P. fluorescens. The results of the current study may be relevant for diverse ecosystems. The values of HLD in the present study fall within the range of 200–600 cm cm −3 (approximately 1.5–5.0 m g −1 ) reported for farmland soils, but are lower than those in woody and non-woody systems (2400 and 2700 cm cm −3 on average, respectively). The global decline in the abundance and diversity of AMF due to increasing land-use intensity is potentially alarming.
This decline may disrupt the extensive connections between AMF and their associated microbiomes, with cascading negative effects on ecosystem functioning, specifically with respect to the underappreciated role of co-colonization by AMF and Pseudomonas in the mitigation of N 2 O emissions. To counter this adverse development, the restoration of AMF diversity in agricultural ecosystems may be achieved through sustainable management practices such as diversified cropping, organic farming, or conservation agriculture. To verify that sustainable agricultural practices may indeed stimulate co-colonization by AMF and N 2 O reducers, we analyzed soil samples taken from an 11-year-long intercropping field experiment. In the maize/faba bean intercropping soils, the HLD and the gene abundance of clade I nosZ were significantly higher than those in the faba bean monoculture, and the clade I nosZ gene abundance was significantly positively correlated with HLD (Fig. C). Similar conditions, i.e., low mineral N and high organic C availability, may also occur in grassland and forest soils, where uptake of atmospheric N 2 O has been observed. We speculate that the mechanisms we describe in the present study may explain this phenomenon, as AMF are abundant in these ecosystems. Our study demonstrates that reinforcing synergies between AMF and the hyphosphere microbiome may have far-reaching implications for both sustainable agriculture and the mitigation of N 2 O emissions from cropping systems and, thus, for the mitigation of climate change. We envisage that the indiscernible and variable N 2 O fluxes occurring in soil microenvironments can be substantially reduced by AMF and the hyphosphere microbiome. Our study therefore also advances our understanding of the multiple functions delivered by AMF beyond promoting the uptake of soil nutrients.
Our study provides novel insights into the importance of AMF in mediating nitrogen transformation processes conducted mainly by denitrifiers, with cascading effects on soil N 2 O emission. We demonstrate that AMF enriched N 2 O-reducing Pseudomonas in the hyphosphere, which was responsible for the decline in N 2 O emissions in the residue patches. Notably, carboxylates exuded by hyphae acted as attractants recruiting P. fluorescens JL1 and as stimulants triggering the expression of its nosZ gene. These insights provide a novel mechanistic understanding of the intriguing interactions between AMF and microbial guilds in the hyphosphere, and collectively indicate how these trophic microbial interactions substantially affect the denitrification process at microsites. This knowledge opens novel avenues to exploit cross-kingdom microbial interactions for sustainable agriculture and climate change mitigation.

Additional file 1: Fig. S1. Hyphal length density (pot expt 1), patch N 2 O concentrations (pot expt 1), dissolved organic carbon content, and total carbon and nitrogen contents (pot expt 2) in patches in the absence or presence of AMF. A, pot expt 1. Hyphal length density in different patches under the −AMF and +AMF treatments (n = 8). B, pot expt 1.
Dynamic N 2 O concentrations in different patches under the −AMF and +AMF treatments after the addition of NO 3 − -N or NH 4 + -N (n = 4). Control, soil patch; NSfaba and Sfaba, patches with unsterilized (NS) or sterilized (S) faba bean residues, respectively. C-F, pot expt 2. Dissolved organic carbon (C), hyphal length density (D), and total carbon (E) and total nitrogen (F) contents under the −AMF and +AMF treatments at both harvests (n = 5). T1 and T2, the first (day 24) and second (day 34) harvests, respectively; asterisks, significant differences between the −AMF and +AMF treatments in each patch type (pot expt 1) or at each harvest (pot expt 2) according to two-tailed unpaired t-tests (*, P < 0.05; **, P < 0.01; ***, P < 0.001). Additional file 2: Fig. S2. Correlation between N 2 O emission, hyphal length density, and nosZ gene or transcript copies. A, pot expt 1. Correlation between N 2 O concentration and nosZ gene copies in different patch types 24 h after the addition of ammonium or nitrate. B, pot expt 1. Correlation between nosZ gene copies and hyphal length density in different patch types. Control, soil patch; NSfaba and Sfaba, patches with unsterilized (NS) or sterilized (S) faba bean residues, respectively. C, D, pot expt 2. Correlations of nosZ gene and transcript copies with hyphal length density (C) and dissolved organic carbon (D) contents at the first and second harvests. Correlation analysis is based on the Pearson correlation coefficient. Gray shading denotes the 95% confidence intervals, and only significant correlations are shown. Additional file 3: Fig. S3. Structure of microbial communities harbouring nirK, nirS, and clade I nosZ in pot expt 2. A, B, Relative abundance of the major taxonomic groups of the nirK, nirS, and clade I nosZ communities in the absence or presence of AMF at both harvests based on gene (A) and transcript (B) levels (n = 5). T1 and T2, the first (day 24) and second (day 34) harvests, respectively; asterisks, significant differences between the −AMF and +AMF treatments at each harvest according to the Wilcoxon rank-sum test (*, P < 0.05; **, P < 0.01; ***, P < 0.001). Additional file 4: Fig. S4. Phylogeny and community structure of culturable denitrifying bacteria in response to AMF hyphae in the in vitro experiment. A, Phylogenetic tree of culturable denitrifying bacteria from patches of faba bean residue, constructed by the neighbor-joining method based on 16S rRNA gene sequences. Names of strains obtained in this study are shown in bold. B, Relative abundances of the major culturable denitrifying bacterial communities in the absence or presence of AMF (n = 5). Asterisks, significant differences between the −AMF and +AMF treatments according to the Wilcoxon rank-sum test (*, P < 0.05; **, P < 0.01; ***, P < 0.001). C, Nonmetric multidimensional scaling (NMDS) pattern of culturable denitrifying bacterial communities between the −AMF and +AMF treatments based on Bray–Curtis dissimilarity. Ellipses in the plots indicate 95% confidence intervals for microbial communities under the −AMF and +AMF treatments (n = 5). Additional file 5: Fig. S5. Response of Pseudomonas fluorescens to AMF hyphal exudates and their major compounds in the in vitro experiment. A, AMF hyphae with attached P. fluorescens stained with 4′,6-diamidino-2-phenylindole (DAPI); scale bar, 10 μm. B, Bacterial optical densities (OD 600 ) of P. fluorescens in response to AMF hyphal exudates and major compounds (n = 3).
C, Expression of the nirS gene and the nosZ/nirS ratio of P. fluorescens in response to hyphal exudates and major compounds (n = 3). Different lowercase letters indicate significant differences among treatments by the least significant difference (LSD) test at the 5% level. D, Dynamic N 2 O concentrations in the headspace of serum bottles emitted by three strains of P. fluorescens in response to glucose, citrate, and hyphal exudates (n = 3). Asterisks, significant differences between the hyphal exudate or citrate treatment and the glucose treatment at 3 h within each strain according to two-tailed unpaired t-tests (*, P < 0.05; ***, P < 0.001). Additional file 6: Fig. S6. Soil water content, total carbon and nitrogen contents, and mineral nitrogen and dissolved total nitrogen contents in patches in the absence or presence of AMF. A-C, pot expt 1. Soil water content (A) and total carbon (B) and nitrogen (C) contents under the −AMF and +AMF treatments in different patches (n = 8). Control, soil patch; NSfaba and Sfaba, patches with unsterilized (NS) or sterilized (S) faba bean residues, respectively. D-G, pot expt 2. Soil water content (D) and ammonium (E), nitrate (F), and dissolved total nitrogen (G) contents under the −AMF and +AMF treatments at both harvests (n = 5). T1 and T2, the first (day 24) and second (day 34) harvests, respectively; asterisks, significant differences between the −AMF and +AMF treatments at each harvest (pot expt 2) according to two-tailed unpaired t-tests (*, P < 0.05; **, P < 0.01; ***, P < 0.001). Additional file 7: Fig. S7. Structure of the nirK and nirS communities in the absence or presence of AMF. Pot expt 1. Relative abundance of the major taxonomic groups of the nirK and nirS communities under the −AMF and +AMF treatments in different patches (n = 8). Control, soil patch; NSfaba and Sfaba, patches with unsterilized (NS) or sterilized (S) faba bean residues, respectively; asterisks, significant differences between the −AMF and +AMF treatments in each patch type according to the Wilcoxon rank-sum test (*, P < 0.05; **, P < 0.01; ***, P < 0.001). Additional file 8: Materials and Methods. Supplementary Text. Table S1. Temporal N 2 O concentrations (μL L −1 ) in the headspace in the preliminary experiment. Table S2. Primers and PCR conditions used for the PCR assays. Table S3. Stepwise multiple regression identifying the abundance and expression of key N-cycling genes with the strongest statistical contributions to variation in cumulative N 2 O emission in pot expt 2. Independent variables are the abundances and expression levels of the nirK, nirS, and clade I and II nosZ genes; the dependent variable is the cumulative N 2 O emission. Table S4. Permutational multivariate analysis of variance (PERMANOVA) of the effects of patch type (PT; pot expt 1) or harvest time (HT; pot expt 2) and AMF treatment on microbial communities harbouring nirK, nirS, and clade I nosZ based on gene and transcript sequencing. Table S5. Permutational multivariate analysis of variance (PERMANOVA) of the effect of AMF treatment on the clade I nosZ community in different patches (pot expt 1) or at different harvest times (pot expt 2) based on gene and transcript sequencing. Table S6. In vitro experiment: metabolite concentrations in the hyphal exudates of Rhizophagus irregularis. Table S7. Effects of patch type and AMF treatment on the biomass, N concentration, and N content of maize in pot expt 1. Table S8. Effects of harvest time and AMF treatment on the biomass, N concentration, and N content of maize in pot expt 2.
Protocol for the 2ND-STEP study, Japan Clinical Oncology Group study JCOG1802: a randomized phase II trial of second-line treatment for advanced soft tissue sarcoma comparing trabectedin, eribulin and pazopanib
Soft tissue sarcomas (STS) are rare malignancies that comprise a variety of histological diagnoses. The standard treatment for STS is based mainly on the clinical stage and the resectability of the tumor. The standard treatment for resectable cases is surgical resection, whereas that for unresectable or metastatic advanced STS is chemotherapy. Doxorubicin, alone or in combination with ifosfamide or dacarbazine, is widely accepted as first-line chemotherapy for advanced STS. Doxorubicin induces irreversible, cumulative cardiotoxicity, and second-line chemotherapy is required when the total dose approaches the upper limit or when tumors are refractory to doxorubicin. Trabectedin, eribulin, pazopanib, and gemcitabine plus docetaxel (GD) form the standard treatment options for second-line chemotherapy for advanced STS; however, there are no clear recommendations regarding the choice of regimen. The U.S. National Comprehensive Cancer Network guidelines list pazopanib, trabectedin, and eribulin as preferred regimens; other recommended regimens include dacarbazine, ifosfamide, temozolomide, vinorelbine, and regorafenib, and pembrolizumab is also listed as a useful agent in specific cases. The ESMO-EURACAN-GENTURIS guidelines recommend that second-line chemotherapy should be based on histopathology: trabectedin is a treatment option for all histological types, pazopanib should be used for non-liposarcoma, eribulin for liposarcoma only, and the GD and gemcitabine-dacarbazine combinations are preferred in patients who underwent prior treatment with doxorubicin-containing agents. The results of clinical trials conducted to date (Table ) indicate that trabectedin is highly effective for translocation-related sarcomas and for liposarcoma and leiomyosarcoma (L-sarcoma), eribulin for L-sarcoma, especially liposarcoma, pazopanib for histological types other than liposarcoma, and GD for leiomyosarcoma, based on the histological inclusion criteria of each trial. These results have influenced the approval of each drug. While trabectedin is approved for L-sarcoma in the U.S.A., eribulin is approved exclusively for liposarcoma in the U.S.A. and Europe, and pazopanib is approved for STS excluding liposarcoma in the U.S.A. and Europe. All three agents are approved for all histologic types of STS in Japan. Recent research, including studies using real-world data, has shown that all of these agents are effective against a wider range of histological types. However, there is no clear evidence demonstrating the superiority of any one of these agents as second-line chemotherapy for advanced STS, since no randomized controlled trials have compared them. Therefore, the Bone and Soft Tissue Tumor Study Group (BSTTSG) of the Japan Clinical Oncology Group (JCOG) planned to establish a standard second-line treatment for advanced STS from among widely used regimens such as trabectedin, eribulin, pazopanib, and GD. First, we plan to conduct a selection design, randomized, phase II study of trabectedin, eribulin, and pazopanib, which are relatively new drugs for STS, approved since the 2010s. A randomized phase III study will then be planned, comparing the best second-line agent determined in this JCOG1802 study with GD, which has been in use since the 2000s and is currently deemed standard therapy for STS.
Study design

The objective of this clinical trial (JCOG1802, 2ND-STEP) is to determine the most promising regimen among trabectedin, eribulin, and pazopanib, which will be designated as the test arm in a future phase III trial of second-line treatment for patients with metastatic or unresectable STS that has progressed after first-line chemotherapy with doxorubicin. This multicenter, randomized, open-label, parallel-arm, selection design phase II trial aims to examine the efficacy and safety of trabectedin, eribulin, and pazopanib, which are widely used anticancer agents for advanced STS. At commencement, this study will be conducted across 37 institutions in Japan, all of which participate in the BSTTSG of the JCOG. Patients with advanced STS are treated at all participating institutions, and potentially eligible patients will be registered by investigators. The study protocol was approved by the National Cancer Center Hospital Certified Review Board for Clinical Trials (CRB Certification No. CRB3180008) prior to the initiation of patient accrual. The inclusion and exclusion criteria for this study are summarized in Table . After confirming participant eligibility, patients will be randomly assigned to the treatment arms at the JCOG Data Center. Random allocation to trabectedin, eribulin, and pazopanib will be performed using a minimization method in a ratio of 1:1:1. The institution, histology (liposarcoma vs. leiomyosarcoma vs. translocation-related sarcoma vs. other), and distant metastases (N1 and/or M1 vs. other) will serve as factors for allocation adjustment (Fig. ).

Interventions

Patients will be randomized to arm A, B, or C (Fig. ) as follows: arm A, intravenous drip infusion of trabectedin 1.2 mg/m 2 (body surface area) on day 1 over 24 h every 3 weeks; arm B, intravenous drip infusion of eribulin 1.4 mg/m 2 on days 1 and 8 over 2–5 min every 3 weeks; and arm C, oral administration of pazopanib 800 mg/day, more than 1 h before or more than 2 h after meals, every day. The criteria for dose reduction in each arm are set as follows. There are three dose levels of trabectedin in arm A: level 0 (full dose), 1.2 mg/m 2 ; level 1, 1.0 mg/m 2 ; and level 2, 0.8 mg/m 2 . There are three dose levels of eribulin in arm B: level 0 (full dose), 1.4 mg/m 2 ; level 1, 1.1 mg/m 2 ; and level 2, 0.7 mg/m 2 . There are four dose levels of pazopanib in arm C: level 0 (full dose), 800 mg/body; level 1, 600 mg/body; level 2, 400 mg/body; and level 3, 200 mg/body. The dose will be lowered by one level in the next course in the event of severe myelosuppression, liver dysfunction, or cardiac dysfunction, and the treatment protocol will be terminated if such toxicity occurs even at the lowest dose level. The concomitant use of any of the following therapies is prohibited during the treatment protocol: (1) anticancer drugs other than the protocol regimen, (2) radiation therapy (including particle therapy) to the target lesion, and (3) immunotherapy.

Participant timeline

The participant timeline is summarized in Table . Candidates who consent to participate will be checked for eligibility based on the inclusion/exclusion criteria, and enrollment will be completed after confirmation of participant eligibility.
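The minimization method referred to under Study design balances the allocation-adjustment factors across arms as each patient is enrolled. The JCOG Data Center's exact algorithm is not reproduced here; the following R sketch is a generic Pocock–Simon-style illustration in which the next patient goes to the arm minimizing the summed imbalance over the matching factor levels, with ties broken at random.

```r
# Generic minimization sketch (Pocock-Simon style), for illustration only;
# this is NOT the JCOG Data Center's actual allocation algorithm.
arms    <- c("A_trabectedin", "B_eribulin", "C_pazopanib")
factors <- c("institution", "histology", "metastasis")

minimize <- function(allocated, new_patient) {
  imbalance <- sapply(arms, function(a) {
    trial <- rbind(allocated, cbind(new_patient, arm = a))  # tentative assignment
    # For each factor, count arm sizes among patients sharing the new
    # patient's level, and sum the spread (max - min) over factors.
    sum(sapply(factors, function(f) {
      counts <- table(factor(trial$arm[trial[[f]] == new_patient[[f]]],
                             levels = arms))
      max(counts) - min(counts)
    }))
  })
  best <- arms[imbalance == min(imbalance)]
  sample(best, 1)                                           # break ties at random
}

# Hypothetical usage:
allocated <- data.frame(
  institution = c("H1", "H1", "H2"),
  histology   = c("liposarcoma", "leiomyosarcoma", "other"),
  metastasis  = c("M1", "other", "M1"),
  arm         = c("A_trabectedin", "B_eribulin", "C_pazopanib")
)
new_patient <- data.frame(institution = "H1", histology = "liposarcoma",
                          metastasis = "M1")
minimize(allocated, new_patient)
```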
Physical examination, Eastern Cooperative Oncology Group performance status (PS), body weight, body height, complete blood count (CBC), serum biochemistry, activated partial thromboplastin time, prothrombin time-international normalized ratio, thyroid stimulating hormone, free T4, and urine analyses are performed within 14 days before registration. Echocardiography, electrocardiography, contrast-enhanced computed tomography (CT) of the chest, abdomen, and pelvis, and additional CT/magnetic resonance imaging (MRI) of the lesions will be performed within 28 days before registration. Plain CT is acceptable when contrast agents cannot be administered because of allergies, asthma, etc. Participants will commence the treatment protocol within 14 days of registration. The treatment protocol is continued until one of the following termination criteria is met: (1) exacerbation of disease (treatment judged ineffective); (2) the treatment protocol cannot be continued because of adverse events, including grade 4 non-hematologic toxicity or a delay of more than 56 days before the start of the next course; (3) the patient requests termination of treatment for reasons related to adverse events; (4) the patient requests termination of treatment for reasons unrelated to adverse events; (5) death during treatment; or (6) post-enrollment exacerbation prior to the initiation of treatment (inability to start the treatment protocol owing to rapid progression), discovery of a protocol violation, change of treatment owing to a change in the pathological diagnosis after enrollment, or other reasons that make the patient ineligible for treatment. Treatment efficacy will be determined by contrast-enhanced chest-abdomen-pelvis CT and additional CT/MRI of the lesions every 4 weeks on the first four occasions after initiation of the treatment protocol, and every 6 weeks thereafter. After termination of the treatment protocol, physical examination, PS, body weight, CBC, serum biochemistry, and adverse events will be assessed every 6 months. If treatment is terminated for reasons other than disease progression, 6-weekly CT examinations will be continued until disease progression.

Primary endpoint

The primary endpoint is progression-free survival (PFS). PFS is defined as the period from the date of registration until the date of progression or the date of death from any cause, whichever is earlier. Here, progression includes both progressive disease as determined by imaging based on the revised Response Evaluation Criteria in Solid Tumours (RECIST) guidelines and clinical progression without confirmation by imaging studies.

Secondary endpoints

The secondary endpoints are overall survival, disease-control rate, response rate, and the proportion of adverse events (adverse reactions). Overall survival is defined as the period from the date of registration to the date of death from any cause. The disease-control rate is defined as the proportion of patients with complete response (CR), partial response (PR), or stable disease according to the RECIST criteria among all eligible patients with measurable disease. The response rate is defined as the proportion of patients whose best overall response is either CR or PR among all eligible patients with measurable disease. The proportion of adverse events (adverse reactions) is defined as the frequency of adverse events (toxicities) of the worst severity according to the Common Terminology Criteria for Adverse Events v5.0-JCOG over the entire course.
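Given the PFS definition above (time from registration to progression or death from any cause, whichever comes first), the endpoint is conventionally summarized with Kaplan-Meier estimates. A minimal sketch using the survival package in R, with invented data:

```r
# Kaplan-Meier summary of PFS (time from registration to progression/death).
# Data below are invented for illustration.
library(survival)

pfs_months <- c(1.8, 2.4, 3.1, 4.0, 5.2, 6.0, 7.5, 9.0)
event      <- c(1, 1, 1, 1, 0, 1, 0, 1)   # 1 = progression or death, 0 = censored
arm        <- factor(rep(c("A", "B"), each = 4))

fit <- survfit(Surv(pfs_months, event) ~ arm)
summary(fit)$table[, "median"]             # median PFS per arm
```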
Sample size calculation
The sample size required to correctly identify the most favorable treatment based on the point estimate of the hazard ratio (HR) was calculated by Monte Carlo simulation, assuming an exponential distribution for PFS, following Liu's selection design . Based on previous studies , we set an accrual period of 3 years and a follow-up period of 6 months, and assumed the median PFS for the three arms to be 2, 2, and 4 months in condition 1; 2, 3, and 4 months in condition 2; and 3, 3, and 4 months in condition 3. Under these conditions, the sample size required to correctly select the most favorable treatment arm with a probability of 80% was 7 for condition 1, 20 for condition 2, and 34 for condition 3. To maintain a probability of correct selection of at least 80% under every condition, the planned sample size was set at 40 per arm, for a total of 120 patients; the corresponding probabilities of correct selection are 99.7% for condition 1, 88.8% for condition 2, and 83.0% for condition 3 (a minimal simulation sketch illustrating this type of calculation is given after the subgroup factors below).
Data collection, management, monitoring, and auditing
Data entry into the electronic case report form is performed by investigators using an electronic data capture (EDC) system via the JCOG Web Entry System. Adverse events are closely monitored by investigators. Investigators will report severe adverse events to the institutional administrator and principal investigator, and then to the CRB of the National Cancer Center Hospital, as appropriate. Clinical data entry, data management, and central monitoring will be performed using the EDC system E-DMS Online (EPS Corporation, Tokyo, Japan). All statistical analyses will be conducted at the JCOG Data Center. No interim analysis is planned for this study because the follow-up period for the primary analysis is short and any safety issues can be ascertained through periodic monitoring. In-house monitoring will be performed every 6 months by the JCOG Data Center to evaluate study progress and improve data quality.
Statistical analysis
Statistical analysis will be performed at the JCOG Data Center. The primary analysis will be performed 6 months after the end of accrual, when collection of the primary endpoint data for all enrolled patients is expected to be complete. The treatment with the best point estimate of the HR for PFS, the primary endpoint of this study, will become the test arm in a subsequent phase III trial. However, the treatment arm for the phase III trial will be decided comprehensively, taking into consideration endpoints other than PFS, if any of the following is observed: first, the observed PFS is substantially lower than expected (insufficient results for a promising therapy); second, the overall survival results differ markedly from those for PFS; third, the frequency of adverse events among the arms differs markedly from the expected frequency. For the primary analysis of PFS, the HRs of arm A versus arm B and of arm A versus arm C will be calculated using an unstratified Cox proportional-hazards model for all enrolled patients, and the treatment with the best HR will be judged the most promising regimen. Because this study does not make judgments based on hypothesis testing, no significance level is set a priori and no adjustment will be made for multiplicity. Subgroup analyses based on the factors listed below are to be conducted as necessary.
The factors for which subgroup analyses are planned include age group 1 (< 40/ ≥ 40 years), age group 2 (< 70/ ≥ 70 years), sex (male/female), PS (0/1 and 2), histological type (liposarcoma/leiomyosarcoma/translocation-related sarcoma/other), distant metastasis 1 [(M1 and/or N1)/other], distant metastasis 2 [M1/(N1 and M0)/other], and doxorubicin (perioperative chemotherapy/palliative chemotherapy).
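As a rough illustration of the sample size calculation described above, the sketch below runs a pick-the-winner simulation assuming exponentially distributed PFS and no censoring, selecting the arm with the lowest estimated hazard (equivalent to the best pairwise HR under the exponential assumption). It is a simplified stand-in for Liu's selection design as used in the protocol, not a reproduction of it.

import numpy as np

# Pick-the-winner simulation under exponential PFS (simplified; no accrual
# pattern, no censoring). Correct selection = choosing the arm with the
# longest true median PFS.
rng = np.random.default_rng(0)

def prob_correct_selection(median_pfs_months, n_per_arm, n_sim=10_000):
    hazards = np.log(2) / np.asarray(median_pfs_months, dtype=float)
    true_best = int(np.argmin(hazards))  # longest median PFS
    wins = 0
    for _ in range(n_sim):
        est = [n_per_arm / rng.exponential(1.0 / h, n_per_arm).sum()
               for h in hazards]  # estimated hazard: events / total time
        wins += int(np.argmin(est) == true_best)
    return wins / n_sim

# e.g., condition 2 in the text: median PFS of 2, 3 and 4 months, 40 per arm
print(prob_correct_selection([2, 3, 4], n_per_arm=40))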
Most guidelines state that first-line chemotherapy for advanced STS should include doxorubicin alone or in combination with ifosfamide or dacarbazine; however, there are no clear guidelines for standard second-line therapy. One of the chief factors preventing the establishment of clear drug selection criteria for second-line chemotherapy for advanced STS is the lack of clinical trials that directly compare the efficacy and safety of drugs used for this purpose. Therefore, data from clinical trials with different types of participants are used as a reference for drug selection based on indirect comparison. We intend to overcome this problem by conducting this multicenter, randomized, open-label, parallel-arm, selection design phase II trial of the three major treatment options for second-line chemotherapy for advanced STS, viz. trabectedin, eribulin, and pazopanib. The JCOG 1802 trial (a randomized phase II trial of trabectedin, eribulin, and pazopanib as second-line treatment for advanced soft-tissue sarcoma after doxorubicin, which is also known as the 2ND-STEP trial) will be the first randomized trial using trabectedin, eribulin, and pazopanib for STS worldwide. This trial is expected to determine the most promising regimen from amongst trabectedin, eribulin, and pazopanib as the test arm regimen in a future phase III trial of second-line treatment for advanced STS patients.
Microbial diversity and abundance in loamy sandy soil under renaturalization of former arable land
3bced9a6-c144-42c9-8c31-88138316580a
9997190
Microbiology[mh]
Farming on low-productivity soils is often unprofitable, and the establishment of new forests and grasslands can be one of the most efficient ways of using such land rather than leaving it abandoned. Such renaturalization increases the biodiversity of ecosystems through low-intensity agriculture and afforestation and reduces gas emissions, and can therefore be seen as a positive factor from an environmental point of view ( ; ). Over the past 50 years, afforestation of abandoned, usually completely bare, land has become more common, especially in the United States and the United Kingdom. Meadows and pastures across Europe are currently being turned back into forests, and China, India and the countries of North and Central Africa, the Middle East and Australia are implementing afforestation projects ( ; ; ). Renaturalization processes are currently taking place quite rapidly in some parts of Lithuania, and this trend is likely to continue in the future. It is predicted that, with the development of non-agricultural activities, forested areas will gradually establish themselves in place of the agrarian landscape that has prevailed for many centuries. However, there is a lack of detailed research on this topic. Summarizing the data of the first research decade and the results of subsequent years ( ; ), the transformation of field crop rotation soils into various phytocenoses can be described as a complex of factors with a significant effect on the accumulation of energy and nutrients, in which soil microorganisms play an important role. By adding some components and/or suppressing other existing components, the desired result can be expected to be achieved or improved. In this regard, it is therefore important to know the composition of these microbial communities, as well as the abundance of individual taxonomic groups. In both the soil rhizosphere and the rhizoplane, bacteria and fungi interact closely with each other. Bacteria also play a very important role in promoting plant growth by increasing the nutrients available to plants, producing phytohormones, and inhibiting the development of soil pathogens ( ; ). In addition, the population structure of microorganisms changes in space and time and is affected by the availability of C and N resources, diurnal temperature, porosity, moisture, electrolyte concentration, pH changes and oxygen availability ( ). The intensity of microbial activity is not necessarily related to taxonomic diversity, as biogeochemical processes are determined by the activity of active microorganisms. However, despite the importance of active microbes, most research methods are designed to estimate total microbial biomass without estimating its active fraction. Active microorganisms account for about 0.1–0.2% of the total microbial biomass and very rarely exceed 5% in soils without readily available substrates. Potentially active microorganisms, ready to absorb available substrates within a few hours, account for 10 to 40%, and sometimes up to 60%, of the total microbial biomass. The share of microbes in a dormant state, depending on the agroeco-biochemical characteristics of the soil, can be from 42 to 66% of the total microbial biomass. The transition from a potentially active state to an active one occurs within a few minutes, but the transition from a dormant state to an active state can take from several hours to several days ( ; ; ; ).
One of the simpler methods, the plate-count technique, allows the assessment of most active/potentially active groups of microorganisms by functional-trophic specialization using selective nutrient media ( ; ). Soil microorganisms in various Lithuanian soils have already been studied to some extent, but only in a fragmented way and without delving into taxonomic diversity ( ; ; ; ; ; ). The main gaps in all these studies are the absence of detailed analyses of both the taxonomic and the functional diversity of microbes, and the lack of clarity about how the structure of soil communities depends on the agrochemical properties of the soil. With detailed information of this kind, additional measures can be envisaged to help speed up the renaturalization of soils; preparations of mycorrhizal fungi, for example, are already often used in afforestation. Since there are no detailed microbiological studies of the soil in our region, we set out to analyze soil microorganism communities comprehensively, i.e., to determine their structure and composition. We hypothesized that renaturalization of former arable soil changes the abundance and diversity of microbes that may determine soil agrochemical properties, and that full use of information from soil microbial communities can improve soil productivity and maintain its sustainability. The aim of the research was to determine the qualitative and quantitative parameters of the microorganism groups in low-productivity agro-ecosystem soils under different land use systems.
Study site and soil sampling
The study area (∼54°34′N, 25°05′E) is situated in the eastern part of Lithuania, Eastern Europe, in the northern part of the temperate climate zone ( ). The study was conducted as part of a long-term experiment started in 1995. The experiment was arranged as a land-use change of a former arable field into fertilized/unfertilized managed grassland (MGf and MGu), a soiled field (SA), a pine-afforested field (PA), and a retained cropland field (fertilized/non-fertilized; Cf and Cu). During the long-term experiment, various biological and agroecological properties were analyzed separately ( ), excluding soil microorganisms. Soil samples for microbiological analysis were collected as previously described in .
Quantification of cultivable bacteria and fungi
Cultivable microbial quantification was performed by plate-count techniques using different selective media: Meat Peptone Agar (MPA) (Liofilchem, Italy) for organotrophic bacteria, Starch Ammonia Agar (SAA) for bacteria using a mineral source of nitrogen ( ), Ashby's Mannitol Agar for nonsymbiotic diazotrophic bacteria ( ), and Sabouraud CAF agar (Liofilchem, Italy) for filamentous fungi and yeasts/yeast-like fungi. The number of bacterial and fungal colony-forming units (CFU) was calculated per gram of dry soil ( ).
Soil DNA extraction and microbiomic analysis
Pooled soil samples for metagenomic analysis were taken from the topsoil layer at 10–20 cm depth in summer 2020. Total genomic soil DNA from six soil samples was extracted using the ZR Soil Microbe DNA MiniPrep™ (50) (Zymo Research, Irvine, CA, USA) DNA extraction kit according to the manufacturer's instructions. NGS analysis was performed by the BaseClear BV (Leiden, the Netherlands) service using the Illumina NovaSeq 6000 or MiSeq system. The sequences generated with the MiSeq system were produced under accreditation according to the scope of BaseClear B.V. (L457; NEN-EN-ISO/IEC 17025), based on 16S rDNA for bacteria and 5.8S-ITS2 for fungi.
Paired-end sequence reads were collapsed into so-called pseudoreads using sequence overlap with USEARCH version 9.2 ( ). Classification of these pseudoreads was performed based on the results of alignment with SNAP version 1.0.23 ( ) against the RDP database ( ) for bacterial organisms, while fungal organisms were classified using the UNITE ITS gene database ( ).
Climate conditions
Lithuania is in the northern part of the temperate climate zone. The meteorological conditions of the research years differed markedly. The average temperature in 2017 was close to the multi-annual average, but the year was very wet throughout. Meanwhile, 2018 was dry and warm, and 2019–2020 were the hottest years in the almost 240-year (1778–2020) observation period, with a significant lack of moisture. According to LHMT, 2020 surpassed the previously warmest years, 2019 (average annual air temperature of 8.8 °C) and 2015 (8.3 °C). Annual precipitation in 2020 was 646 mm, which is only 7% less than the multi-annual rate (694 mm) ( http://www.meteo.lt/en ). Graphic images of the meteorological conditions are shown in , where a comparison with the multi-annual rate (1991–2020) was used.
Statistical analysis
Microbial abundance data are reported as mean ± standard error of the mean and were analyzed using ANOVA. Mean separations were made for significant effects with the F-test at 0.0000 < p < 0.022. The taxonomic diversity of microbes was assessed based on the number of OTUs. Alpha diversity metrics (Chao1 and Shannon) were used to express soil microbial community structure: the Chao1 index describes the richness of species, while the Shannon index describes the diversity of species in a given community. Statistical computations were performed using the STATISTICA 16.0 software package (StatSoft, Inc., Tulsa, OK, USA).
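Two of the calculations named in the methods above are simple enough to show directly. First, the standard plate-count conversion behind the CFU-per-gram-of-dry-soil values; the dilution scheme and numbers below are illustrative, not data from this study:

# Standard plate-count calculation: CFU per gram of dry soil.
def cfu_per_g_dry_soil(colonies, soil_g_per_ml, plated_volume_ml,
                       moisture_fraction):
    """colonies: count on the plate; soil_g_per_ml: grams of fresh soil per
    ml of the plated dilution; moisture_fraction: gravimetric water content."""
    dry_g_plated = soil_g_per_ml * plated_volume_ml * (1.0 - moisture_fraction)
    return colonies / dry_g_plated

# 10 g soil in 90 ml (0.1 g/ml), diluted 1:1000 -> 1e-4 g/ml; 0.1 ml plated:
print(f"{cfu_per_g_dry_soil(152, 1e-4, 0.1, 0.15):.2e} CFU per g dry soil")

Second, the two alpha-diversity indices, computed from a vector of OTU read counts for one sample; these are the standard definitions (the hypothetical counts are for illustration only):

import numpy as np

def shannon(counts):
    c = np.asarray([x for x in counts if x > 0], dtype=float)
    p = c / c.sum()
    return float(-(p * np.log(p)).sum())  # H' = -sum(p_i * ln p_i)

def chao1(counts):
    s_obs = sum(1 for x in counts if x > 0)
    f1 = sum(1 for x in counts if x == 1)  # singletons
    f2 = sum(1 for x in counts if x == 2)  # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected variant
    return s_obs + f1 ** 2 / (2.0 * f2)

otu_counts = [120, 43, 7, 5, 2, 1, 1, 1]
print(round(shannon(otu_counts), 2), round(chao1(otu_counts), 1))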
Soil agrochemical features
During the long research period (23 years), the agrochemical indicators changed only slightly. Soil pH changed the most in the unfertilized cropland (increased) and in the fertilized cropland (decreased), while the concentration of organic carbon decreased the most in the unfertilized crop rotation field ( ).
Abundance of cultivable soil microorganisms
In 2017, the highest abundance of diazotrophic bacteria in the summer period was observed in the cropland and in the cultivated grassland MG ( ). Compared with the following year, fungi and yeasts were highly abundant during the summer and autumn periods of this year ( ). In 2018, diazotrophs and nitrifiers were again more abundant than other groups in the cropland and in the cultivated grassland ( ). This may be related to the crops being grown and their fertilization: barley with red clover undersowing, fertilized with N₆₀P₆₀K₁₀₀, was grown in the cropland at that time. In 2019, an increase in some physiological groups of bacteria was observed in the autumn in a cropland where red clover was grown without fertilization; increasing amounts of diazotrophs and nitrifiers from the summer period towards autumn were also found here ( ). The amounts of fungi and yeasts were exceptionally high in the fertilized part of the cultivated grassland during the summer and autumn ( ).
In the spring period of 2020, significantly higher amounts (not statistically different from each other) of diazotrophs and nitrifiers were detected in the fertilized areas of the cropland and cultivated grassland ( ).
Soil microbiomic analysis
Next generation sequencing (NGS) was used to determine the taxonomic composition of bacteria and microscopic fungi in the summer 2020 soil samples. A total of 295,390 valid reads of the bacterial 16S rRNA fragment were obtained and clustered into 4,458 OTUs at the species level, and a total of 302,190 valid fungal rRNA ITS1 spacer fragments were clustered into 707 OTUs at the species level ( ). On average, about 2,307 bacterial taxonomic units and 365 fungal taxonomic units were determined for each sample tested. The highest number of reads for both bacteria and fungi was in the unfertilized cropland, whereas the highest bacterial OTU count was in the unfertilized grassland sample and the highest fungal OTU count in the fertilized cultivated grassland ( ). Heat maps ( ) were built using the NG-CHM Heat Map Builder 2.20.2 ( ). After calculating alpha diversity parameters, it was found that the species richness of bacteria was highest in the fertilized cropland (Cf) and planted plots (1,704.4), while the species richness (Chao1) and diversity (Shannon index) of fungi were highest in the soiled field (Sf) (981.9 and 7.7, respectively). The distribution of fungi varied considerably, and the lowest values were in the unfertilized managed grassland (MGu) plot (Chao1 = 572.8 and Shannon = 5.4) ( ). Analyzing the taxonomic diversity of soil microbes, it was observed that the bacterial communities were dominated by two phyla: Actinobacteria and Proteobacteria. The highest proportion of Actinobacteria was in the managed grassland (MGf) (45.88%) and the lowest in the afforested area PA (26.68%). The proportion of Proteobacteria varied from 25.28% in the MGf area to 30.95% in the afforested area. The proportion of other bacteria important for agroecosystems, belonging to the phylum Firmicutes, varied from 3.95% in the managed grassland (MGu) to 7.01% in the fertilized cropland (Cf) ( ). The distribution of fungi was somewhat different. The main taxonomic part of all fungi was occupied by representatives of the phylum Ascomycota (59.61–70.02%), but the area planted with pine trees stood out: there, Ascomycota occupied only 45.28%, and 29.5% of the space was taken over by Basidiomycota. Meanwhile, Basidiomycota were scarce in the other plots, at only 3.09 to 6.71%. Another taxonomic group of fungi accounting for an appreciable proportion was Mortierellomycota (3.24–12.48%) ( ).
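Phylum-level percentages such as those above are relative abundances derived from the OTU table: counts are summed per phylum and normalized per sample. A minimal pandas sketch, with a hypothetical table layout and hypothetical numbers:

import pandas as pd

# Aggregate OTU counts to phylum level and convert to relative abundance (%)
otus = pd.DataFrame(
    {"phylum": ["Actinobacteria", "Proteobacteria", "Actinobacteria",
                "Firmicutes"],
     "MGf": [300, 150, 160, 40],
     "PA": [120, 200, 140, 60]})
rel = otus.groupby("phylum").sum()
rel = 100 * rel / rel.sum(axis=0)
print(rel.round(2))  # relative abundance (%) per plot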
Changing the use of agricultural land to forestry or other land uses means that the annual crop-and-harvest cycle is replaced by other, e.g., significantly longer, forest cycles. As a result, the physico-chemical properties of the soil change, which has a decisive influence on the dynamics of nitrogen and carbon fluxes ( ; ; ).
During the land use change process, changes in aboveground vegetation lead to changes in the underground communities of microorganisms ( ). Various agrochemical and botanical studies in the areas of the long-term renaturalization experiment have been carried out since 1995 ( ; ). While the agrochemical indicators were checked periodically from the beginning of the experiment, the microbiological analysis was conducted only in 2017–2020; therefore, the data can only be compared among the individual areas within the present period of investigation. As the research years were characterized by very different meteorological conditions ( ), it would be impossible to determine reliable trends in the dynamics of microbial abundance, so we analyze the data for each year separately ( , and ). Summarizing the dynamics of bacterial abundance during the study period, it should be noted that the afforested area showed the smallest amounts and the smallest fluctuations in abundance, whereas for fungi and yeasts this area was characterized by higher counts. Such a tendency is confirmed by the studies of other authors ( ; ; ; ). Comparing the results of the analysis of cultivable bacterial abundance, we notice that the largest fluctuations occurred in the areas subject to anthropogenic activity. Particularly sharp jumps in the abundance of diazotrophs and nitrifiers were caused by fertilization and the cultivation of certain (leguminous) plants. As organic fertilizers were not used in the studied areas, organotrophic bacteria were not especially abundant ( and ). In some cases, such as in 2020, the amount of organotrophs in summer samples was significantly lower than at the beginning or end of vegetation ( ); this was most likely due to the high temperature and lack of moisture. The highest abundance of organotrophic bacteria was recorded in the summer–autumn of 2019 (2.15 ± 0.06 and 2.39 ± 0.03 × 10⁵ CFU g⁻¹). The highest abundance of non-symbiotic diazotrophs was found in all areas undergoing anthropogenic activity in the summer of 2017 (from 2.92 ± 0.02 to 3.13 ± 0.06 × 10⁵ CFU g⁻¹); in some cases, their number was higher in the cropland (fertilized and unfertilized) in autumn 2019 and spring 2020 ( ). A statistically higher abundance of nitrifiers was found in the croplands in autumn 2019 (3.67 ± 0.67 × 10⁵ CFU g⁻¹) and spring 2020 (3.72 ± 0.46 × 10⁵ CFU g⁻¹). High levels of fungi were detected in the fertilized part of the cultivated grassland in the summer–autumn period of 2019–2020 (7.46 ± 0.38 to 9.6 ± 0.11 × 10³ CFU g⁻¹) and in the afforested area in summer–autumn 2017 (8.7 ± 0.12 to 7.03 ± 0.09 × 10³ CFU g⁻¹) ( ). The lowest and most stable bacterial amounts were in the afforested area, which, considering the changes in agrochemical parameters during renaturalization, accumulated the highest amount of soil organic carbon (up to 12.2 ± 0.1 g kg⁻¹) and had the highest humification rate, reaching 21.3% ( ). These processes were significantly influenced by the higher amount, and the different taxonomic structure, of fungi compared with the other samples. In other nearby experiments with intensive and organic farming, the cultivable bacterial abundance in low-yield sandy loam (Haplic Luvisols) soil was 10⁵–10⁶ CFU and the fungal abundance 10³–10⁵ CFU ( ; ), while in loamy Cambisol bacteria reached up to 10⁶ and fungi 10⁵ ( ).
In fertile carbonate Chernozem soils in Kazakhstan, researchers counted organotrophic and nitrifying bacteria at up to 10⁷ CFU and fungi at up to 10⁵ CFU ( ). Thus, the amounts of microorganisms can vary by tens of times depending on the type of soil. Alpha diversity indexes, Chao1 and Shannon, were calculated to assess species diversity. The Chao1 index estimates the total richness, i.e., the number of existing species. The Shannon diversity index is a mathematical measure of the diversity of species in each community; it provides more information about community composition because it also considers the relative abundance of the different species. The highest richness of bacterial species was observed in the fertilized part of the crop rotation field, and of fungi in the soiled area ( ). Most of the DNA sequences were assigned to 13 major bacterial phyla ( ). The taxonomic groups of Actino- and Proteobacteria were the most numerous. The content of Actinobacteria ranged from 43% in the cultivated grassland to 34% in the fallow (SF); Proteobacteria were most abundant (29%) in the afforested area and least abundant (24%) in the cultivated grassland ( ). It has been stated ( ) that Proteobacteria come to predominate in place of the former Actinobacteria in the soil of afforested areas. In our study this was the case relative to the cropland: the relative amount of Proteobacteria increased and the number of Actinobacteria decreased in the afforested soil ( and ). The third taxonomic category in terms of quantity in our case was Firmicutes (5.29%), not Acidobacteria as stated in . According to the data of other authors ( ), when former grassland is planted with trees, the indicators of bacterial abundance shift in the opposite direction, from Proteobacteria to Actinobacteria ( and ). According to the fungal metagenomic data, all read DNA fragments were organized into five main large taxonomic categories ( and ) plus a remaining large category of unclassified units. Ascomycota had the largest number of taxonomic units (from 37% to 42% of all fungi). Comparing the taxonomic structure of all the studied samples, the structure of the afforested area differs: an increase (up to 24%) in the taxonomic group of Basidiomycota is observed here at the expense of Ascomycota ( and ). In all the remaining samples, Mortierellomycota was second in abundance (ranging from 1% in the unfertilized grassland to 7.92% in the fallow); the most common members of this taxonomic group belonged to the genus Mortierella. The increase of Basidiomycota in the pine-planted area is not surprising, as the pine root system is characterized by mycorrhiza and most mycorrhizal fungi belong to Basidiomycota; other researchers have confirmed this in their work ( ; ; ). Representatives of the following genera of fungi appeared in the afforested area: Inocybe, Russula, Tomentella, Pseudotomentella, Tricholoma, Tylospora and others. Larger substantial changes are observed when analyzing the structure of the lower taxonomic ranks of Ascomycota. The samples from the non-anthropogenic fields, i.e., the fallow and afforested areas, stand out most of all. The number of taxonomic units belonging to the orders Eurotiales (genera Penicillium, Aspergillus, Talaromyces) and Hypocreales (Acremonium, Metarhizium, Lecanicillium, Trichoderma, Fusarium) was significantly higher in the pine-planted and fallow areas.
This was especially evident in the sample from the afforested field, where the increase in the representatives of these orders came at the expense of the order Pleosporales (genera Coniothyrium, Pyrenochaeta, Pleotrichocladium), which was more numerous in the cropland. In the afforested area there were several representatives of the order Helotiales: Meliniomyces (mycorrhizal fungi), Tetracladium, Cadophora (mycorrhizal fungi) and Phialocephala (mycorrhizal fungi). In the area planted with pines, there was an increase in the taxonomic rank of Basidiomycota fungi, including many mycorrhizal fungi belonging to the genera Inocybe, Tricholoma, Tylospora, Russula, Pseudotomentella, Tomentella and Naganishia. A distinctive feature of the afforested area was the appearance of a basidiomycetous yeast identified at the species level, Slooffia cresolica, covering 2,429 reads. Thus, afforestation, as a form of renaturalization, had the greatest impact on the soil microorganism communities: the largest structural changes occurred here, especially among the fungi ( and ). However, it is this area that suffered from pests, which destroyed almost all the trees in the last few years. Therefore, when planning future afforestation, the phytosanitary condition needs to be closely monitored and the necessary measures taken in a timely manner. The analysis of the abundance of bacteria and microscopic fungi showed that their abundance depends on the applied agrotechnical measures and the specifics of the cultivated plants, as well as on the meteorological conditions. The abundance of both cultivable bacteria and fungi was not high compared with other types of soils: bacteria were counted at up to 10⁵, and fungi at up to 10³, CFU per 1 g of dry soil. In the renaturalized areas, where no economic activity took place, the abundance of microorganisms was statistically lower, and less variable during the vegetation period, than in the cultivated land areas. Summarizing the dynamics of bacterial abundance during the study period, it should be noted that the area planted with pines showed the smallest amounts and fluctuations in abundance, while in the case of fungi and yeasts this area was more abundant. The taxonomic groups of Actinobacteria and Proteobacteria had the highest OTU counts. The relative amount of Proteobacteria increased and the number of Actinobacteria decreased in the area planted with pines compared with the others. The highest number of fungal OTUs belonged to the division Ascomycota. Of all the studied samples, the taxonomic structure of the afforested area differs: at the expense of Ascomycota, the number of Basidiomycota (especially mycorrhizal taxa) increased significantly. To maintain a stable structure of soil microorganism communities, moderate fertilization with both mineral and organic fertilizers should be applied, together with an appropriate crop rotation, especially including legume crops. If afforestation is chosen, regular monitoring of the phytosanitary status, preventive measures against diseases and pests, and timely protection measures are required. Further studies should try to determine what period is needed for the reorganization of soil microbial communities from the initial phase to the present state.
Supplemental Information 1 (DOI: 10.7717/peerj.14761/supp-1): Meteorological conditions during the experimental years (2017–2020).
Community differentiation of rhizosphere microorganisms and their responses to environmental factors at different development stages of medicinal plant
9fce8ff9-b9d8-448b-8810-73ab0e5914e7
9997192
Microbiology[mh]
The plant root system, rhizosphere microorganisms and rhizosphere soil constitute the plant rhizosphere microecosystem. In this microecosystem, biotic factors (plant genotype, plant developmental stage, invasive pathogenic microorganisms) and abiotic factors (soil composition, soil management and climatic conditions) influence the composition, diversity, structure and function of rhizosphere microbial communities ( ; ; ; ). In turn, rhizosphere microorganisms influence plant root exudates and even the plant metabolome ( ), improving plant physiology and resistance to pathogens through various mechanisms ( ; ) and thereby affecting plant health and growth ( ; ). Rhizosphere microorganisms also affect soil evolution and play a vital role in the conversion of poor, low-quality soil into cultivable soil ( ). In summary, multitrophic interactions between plants, microorganisms and environmental factors lead to the formation of complex symbiotic networks in the rhizosphere microecosystem. This symbiotic network dynamically affects rhizosphere communities and alters plant phenotypes ( ). Beneficial rhizosphere microorganisms can enhance the vitality of plant roots, promote plant growth, increase plant yield and improve resistance to phytopathogens ( ). Conversely, harmful microorganisms can lead to plant disease, inhibition of root growth, suppression of plant growth, and crop failure ( ). The study of the symbiosis and interactions between medicinal plants, environmental factors and microbes can help regulate the microbiota of medicinal plants and their surroundings and contribute to the goal of high quality and high yield of medicinal plants ( ; ; ; ; ). Therefore, it is of great significance to study the dynamics of the rhizosphere microbiota in the cultivation of medicinal herbs. The traditional Chinese medicinal herb "Beishashen" is the swollen root of the perennial herb Glehnia littoralis Fr. Schmidt ex Miq ( ). It contains a variety of coumarin compounds and alkaloids and has a variety of activities, such as antibacterial and anti-inflammatory effects ( ; ), so it is widely used in China and Southeast Asia. Due to the endangered status of wild G. littoralis resources ( ), the medicinal herb "Beishashen" on the market mainly comes from cultivation. In China, G. littoralis has been cultivated for more than 600 years. Because of its special habitat requirements, it can grow only in sandy soil, which makes its planting area relatively fixed. Long-term planting in fixed areas results in typical negative plant-soil feedback (NPSF), leading to a decline in the yield and quality of the medicinal herb ( ; ). Interactions between the rhizosphere microbiota and plants affect plant health and productivity, and this process is very important in medicinal plant cultivation ( ; ). Given the importance of rhizosphere microbes, the study of the rhizosphere microbiota may provide a way to improve the yield and quality of the medicinal herb. Relevant microbial research has attracted the attention of researchers, but so far only the bacteriostatic activity of endophytic fungi and related topics have been reported ( ). To provide a basis for the interpretation and utilization of beneficial rhizosphere microbial resources, the rhizosphere microbes of G. littoralis in its genuine producing areas were analyzed by high-throughput sequencing. The composition, diversity, function, and dynamics of rhizosphere microorganisms at different development stages of G.
littoralis, as well as the correlations between rhizosphere microorganisms and environmental factors, were investigated, with the hope of providing reference data for optimizing the cultivation of the Chinese medicinal herb "Beishashen".
Sampling
All the samples were collected from Haiyang City (37°01′27.04″N, 120°44′53.71″E), Shandong Province, China, one of the main cultivation areas of G. littoralis. Although a perennial plant, G. littoralis is usually harvested within a year when used as a Chinese medicinal herb. Therefore, samples from plants grown for only one year were collected in three growing seasons (spring, summer and autumn) according to the phenological stages, representing the seedling stage, the vigorous growth stage and the harvesting stage, respectively. In a 2 m × 2 m quadrat, three samples were collected on the diagonal of the quadrat as triplicates for each development stage. Since the root system at the seedling stage is small, the rhizosphere soil of 2–3 G. littoralis seedlings on the diagonal of each quadrat was pooled as one replicate in order to collect enough rhizosphere soil. The collected rhizosphere soil samples were quickly transported to the laboratory at low temperature. Some soil was kept in the shade for testing soil physiochemical properties, and the rest was stored at −80 °C for high-throughput sequencing. Field experiments were approved by Ludong University (project number 20210301).
Measurement of soil physiochemical properties
Soil samples, with roots and debris removed using a 2 mm sieve, were air-dried and stored at 4 °C until use. Fourteen common environmental factors in soil were measured according to previous studies, including available phosphorus (AP), available potassium (AK), ammonium nitrogen (AN), soil organic matter (SOM), total organic carbon (TOC) ( ), nitrate nitrogen (NN) ( ), saccharase (SC), urease (UE) and alkaline phosphatase (AKP) ( ); total nitrogen (TN), total hydrogen (TH), total carbon (TC) and total sulfur (TS) were determined with an elemental analyzer (Elementar vario EL cube; Elementar, Langenselbold, Germany), and the pH of the soil samples was determined in a 1:2.5 soil-water suspension using a pH meter (Sartorius PB-10). Saccharase activity was expressed as mg glucose·d⁻¹·g⁻¹ soil, urease activity as mg NH₃-N·d⁻¹·g⁻¹ soil, and alkaline phosphatase activity as mg P₂O₅·2h⁻¹·g⁻¹ soil.
DNA extraction and high-throughput sequencing
The total DNA of rhizosphere microorganisms was extracted using HiPure Soil DNA Kits (Tiangen, Beijing, China) according to the manufacturer's protocols. The extracted DNA was used for PCR amplification. For bacteria, the 16S V3–V4 region of the ribosomal RNA gene was amplified using primers 338F (5′-ACTCCTACGGGAGGCAGCA-3′) ( ) and 806R (5′-GGACTACHVGGGTATCTAAT-3′) ( ). The PCR program was as follows: 95 °C for 5 min, followed by 25 cycles at 95 °C for 30 s, 50 °C for 30 s, and 72 °C for 40 s, and a final extension at 72 °C for 7 min. For fungi, the ITS (internal transcribed spacer) region of the ribosomal RNA gene was amplified using primers ITS-F (5′-CTTGGTCATTTAGAGGAAGTAA-3′) ( ) and ITS-R (5′-GCTGCGTTCTTCATCGATGC-3′) ( ) by PCR (95 °C for 5 min, followed by 25 cycles at 95 °C for 1 min, 50 °C for 30 s, and 72 °C for 1 min, and a final extension at 72 °C for 7 min). PCR products were confirmed by 1.8% agarose gel electrophoresis and purified using VAHTS™ DNA Clean Beads (Vazyme, Nanjing, China).
For library construction, the Solexa PCR products were purified, quantified and homogenized to obtain the sequencing library. The DNA was purified with the Monarch DNA Gel Extraction Kit (Hongyue, Beijing, China) and then subjected to high-throughput sequencing on an Illumina NovaSeq 6000 (Biomarker Technologies, Beijing, China). The raw sequencing data have been uploaded to the NCBI Sequence Read Archive (SRA) database (BioProject: PRJNA903756 ). Three biological replicates of each stage were sequenced.
Data analysis
Raw data were first filtered by Trimmomatic (v0.33) ( ). Primer sequences were then identified and removed by Cutadapt 1.9.1 ( ), resulting in high-quality reads without primer sequences. Based on overlapping sequences, high-quality reads were assembled by FLASH (v1.2.7) ( ), which generated clean reads. Chimeric sequences were identified and removed by UCHIME (v4.2) ( ), generating effective reads. The effective reads were then clustered with Usearch software (v10) at a similarity level of 97.0% to obtain operational taxonomic unit (OTU) tables ( ). QIIME2 software (V2020.6) was used to evaluate the alpha diversity and beta diversity of the samples ( ). Taxonomic annotation was carried out based on the SILVA database (release 138) ( ), and the community composition of each sample was counted at six levels (phylum, class, order, family, genus, species). PICRUSt2 ( ) was used to perform bacterial function prediction analysis, and FUNGuild ( ) was used to predict the nutritional and functional groups of the fungal communities. Biomarkers were extracted using the RandomForest package ( ) in R (V4.2) ( ). Network analysis and redundancy analysis (RDA) were performed to assess the interaction between rhizosphere microorganisms and environmental factors: Gephi software (V0.9.7) ( ) was used for network analysis based on Pearson's correlation coefficient, and RStudio (V2022.07.1) was used for the RDA analysis and the heatmap ( ). The STAMP (V2.1.3) software ( ) was used to analyze the functional differences between rhizosphere microbiota. Box plots, abundance plots and PCoA analysis were produced using the online mapping tool imageGP ( http://www.ehbio.com/Cloud_Platform/front/#/ ) ( ). Venn diagrams were produced using the online mapping tool jvenn ( http://www.bioinformatics.com.cn/static/others/jvenn/example.html ) ( ).
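The beta-diversity step mentioned above typically means computing Bray-Curtis distances between samples and ordinating them by principal coordinates analysis (PCoA). The sketch below shows that computation on a hypothetical samples-by-OTUs count matrix; dedicated pipelines such as QIIME2 perform the same calculation with more options.

import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical OTU count matrix: rows = samples, columns = OTUs
counts = np.array([[120, 30, 0, 5],
                   [100, 45, 2, 8],
                   [10, 80, 60, 0]], dtype=float)

d = squareform(pdist(counts, metric="braycurtis"))

# Classical PCoA: double-center the squared distance matrix, eigendecompose
n = d.shape[0]
j = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * j @ (d ** 2) @ j
vals, vecs = np.linalg.eigh(b)
order = np.argsort(vals)[::-1]
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
print(coords[:, :2])  # first two principal coordinates per sample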
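Likewise, the per-stage summaries of the fourteen soil factors measured in the Methods can be tabulated with a few lines of base R. The sketch below is hypothetical: the file name, the `stage` column and the one-way ANOVA are illustrative assumptions rather than the study's own analysis script.

```r
# Hypothetical sketch: per-stage summary and a one-way ANOVA for each measured
# soil factor; 'soil_properties.csv' (one row per replicate, a 'stage' column
# plus one numeric column per factor) is an assumed layout.
props <- read.csv("soil_properties.csv")
props$stage <- factor(props$stage)

stage_means <- aggregate(. ~ stage, data = props, FUN = mean)  # per-stage means
stage_sds   <- aggregate(. ~ stage, data = props, FUN = sd)    # per-stage SDs
print(stage_means)

# does each factor differ among the three development stages?
for (f in setdiff(names(props), "stage")) {
  p <- summary(aov(props[[f]] ~ props$stage))[[1]][["Pr(>F)"]][1]
  cat(sprintf("%s: ANOVA p = %.3f\n", f, p))
}
```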
Microbial community composition, diversity and structure Feature of operational taxonomic units (OTUs) The operational taxonomic units (OTUs) of rhizosphere bacteria and fungi were obtained from all the samples at the three growth stages ( ). A total of 1,885 bacterial OTUs were identified in the rhizosphere soil across the three growth stages, including 1,660 OTUs at the seedling stage, 1,267 OTUs at the vigorous growth stage and 1,409 OTUs at the harvesting stage, accounting for 88.06%, 67.21% and 74.75% of the total bacterial OTUs, respectively. A total of 966 OTUs were shared by the three growth stages of G. littoralis , with 244 OTUs specific to the seedling stage, 61 OTUs specific to the vigorous growth stage and 95 OTUs specific to the harvesting stage ( ). A total of 903 fungal OTUs were found in the soil across all growth stages, including 474 OTUs at the seedling stage (52.49%), 411 OTUs at the vigorous growth stage (45.51%) and 615 OTUs at the harvesting stage (68.11%). A total of 192 fungal OTUs were shared by the three growth stages, with 158 OTUs specific to the seedling stage, 85 OTUs specific to the vigorous growth stage and 255 OTUs specific to the harvesting stage ( ). The number of bacterial OTUs was largest at the seedling stage, whereas the number of fungal OTUs was largest at the harvesting stage ( and ); likewise, the numbers of stage-specific OTUs were largest at these two stages ( and ).
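The shared and stage-specific OTU counts just reported follow from simple set operations on a presence/absence table; the R sketch below illustrates the Venn-style partitioning (the jvenn web tool was used in the paper). The matrix `pa` and its toy values are assumptions for demonstration, e.g., obtained by collapsing the three replicates of each stage.

```r
# Sketch of the Venn-style OTU partitioning reported above. 'pa' is an assumed
# presence/absence matrix (rows = OTUs, columns = stages); toy values shown.
pa <- matrix(c(1, 1, 1,
               1, 0, 0,
               0, 1, 1),
             nrow = 3, byrow = TRUE,
             dimnames = list(c("OTU1", "OTU2", "OTU3"), c("Ss", "Vs", "Hs")))

ss <- rownames(pa)[pa[, "Ss"] == 1]   # OTUs present at the seedling stage
vs <- rownames(pa)[pa[, "Vs"] == 1]   # vigorous growth stage
hs <- rownames(pa)[pa[, "Hs"] == 1]   # harvesting stage

shared_all <- Reduce(intersect, list(ss, vs, hs))  # shared by all three stages
only_ss    <- setdiff(ss, union(vs, hs))           # stage-specific OTUs
only_vs    <- setdiff(vs, union(ss, hs))
only_hs    <- setdiff(hs, union(ss, vs))
lengths(list(shared = shared_all, Ss = only_ss, Vs = only_vs, Hs = only_hs))
```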
Variation in the abundance and diversity of rhizosphere bacteria and fungi The bacterial OTUs were assigned to 25 phyla and 479 genera. At the phylum level, the relative abundance of bacteria showed no obvious difference between the communities at the vigorous growth and harvesting stages, but both differed clearly from the community at the seedling stage ( ). Proteobacteria was the major component of each bacterial community, representing 31.94%, 41.36% and 41.15% of the total at the three development stages, respectively; Acidobacteriota was the second most abundant phylum, accounting for 22.30%, 19.61% and 17.94%, respectively; apart from Proteobacteria and Acidobacteriota, Actinobacteria was the only other phylum with a relative abundance above 10%, accounting for 15.02%, 11.84% and 13.66% at the three stages, respectively ( ). The fungal OTUs were assigned to 10 phyla and 312 genera. Similar to the bacterial community, the abundances of the dominant fungal phyla showed no obvious difference between the vigorous growth and harvesting stages, but both differed clearly from the seedling stage ( ). Ascomycota was the most enriched phylum, representing 70.64%, 78.98% and 80.14% of the total at the three growth stages, and Basidiomycota was the second most abundant phylum, accounting for 11.93%, 15.2% and 9.88%, respectively ( ). Diversity of rhizosphere microbiota The alpha diversity of the bacterial community of G. littoralis differed significantly between the seedling stage and the vigorous growth and harvesting stages, while there was no significant difference between the vigorous growth stage and the harvesting stage ( , ). The bacterial community’s richness (ACE and Chao1) and diversity (Shannon index) were highest at the seedling stage, followed by the vigorous growth stage and the harvesting stage ( ); there was no significant difference in the ACE, Chao1 and Shannon indices between the vigorous growth and harvesting stages ( ). Thus, the richness and diversity of the bacterial community showed a gradually decreasing trend across the development stages of G. littoralis ( ). The alpha diversity results for the fungal community are also listed in ; fungal richness (ACE and Chao1) was highest at the harvesting stage, followed by the seedling stage and the vigorous growth stage ( ), a pattern different from that of the bacterial community. The Shannon index of the fungal community was highest at the seedling stage and significantly higher than at the vigorous growth stage; the Shannon index at the harvesting stage was only slightly, and not significantly, higher than at the vigorous growth stage. PCoA based on Bray–Curtis distances was performed to examine changes in rhizosphere microbial community structure across development stages. Both the rhizosphere bacteria and the rhizosphere fungi separated into two groups ( ): one group included only the microbiota at the seedling stage, and the other included the microbiota at the vigorous growth and harvesting stages. This was consistent with the alpha diversity results, indicating that the rhizosphere microorganisms at the seedling stage differ from those at the other development stages. Characteristics of rhizosphere microbial function at different development stages There was no significant difference in the abundance of the major functions at class 2 KEGG pathways among the microbiota of the three development stages ( ). The 10 dominant bacterial functions were “Global and overview maps”, “Carbohydrate metabolism”, “Amino acid metabolism”, “Energy metabolism”, “Metabolism of cofactors and vitamins”, “Membrane transport”, “Nucleotide metabolism”, “Translation”, “Signal transduction” and “Replication and repair” ( ). “Global and overview maps” was the main component, accounting for 42.13–42.32% ( ).
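The Bray–Curtis PCoA described above (produced with the ImageGP web tool in the paper) can be sketched in R with vegan and base `cmdscale`; the object `otu` is again an assumed samples-by-OTUs abundance matrix.

```r
# Sketch of the Bray-Curtis PCoA described above; 'otu' is an assumed
# samples-by-OTUs abundance matrix.
library(vegan)

bray <- vegdist(otu, method = "bray")      # Bray-Curtis dissimilarities
pcoa <- cmdscale(bray, k = 2, eig = TRUE)  # classical MDS is equivalent to PCoA

# proportion of variation captured by the first two axes (positive eigenvalues)
expl <- pcoa$eig[1:2] / sum(pcoa$eig[pcoa$eig > 0])
plot(pcoa$points,
     xlab = sprintf("PCoA1 (%.1f%%)", 100 * expl[1]),
     ylab = sprintf("PCoA2 (%.1f%%)", 100 * expl[2]))
```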
Comparative analysis of the functions of the rhizosphere bacteria of G. littoralis at the three development stages showed that six, three and seven functional categories differed significantly between the seedling and vigorous growth stages, between the vigorous growth and harvesting stages, and between the seedling and harvesting stages, respectively ( – ). From the seedling stage to the vigorous growth stage, the functions “Circulatory system” and “Endocrine and metabolic diseases” increased significantly, while “Glycan biosynthesis and metabolism”, “Transport and catabolism”, “Nervous system” and “Substance dependence” decreased significantly ( ). Comparison of functions between the vigorous growth and harvesting stages ( ) showed a marked increase in “Replication and repair”, while “Drug resistance” and “Cancers” declined markedly. From the seedling stage to the harvesting stage ( ), the functions “Cell growth and death”, “Carbohydrate metabolism”, “Infectious diseases: parasitic” and “Substance dependence” decreased significantly, and the functions “Circulatory system”, “Immune system” and “Environmental adaptation” increased significantly. Eight trophic modes were found among the fungi at the three development stages ( ): symbiotroph, saprotroph, pathotroph, saprotroph-symbiotroph, pathotroph-symbiotroph, pathotroph-saprotroph, pathotroph-saprotroph-symbiotroph and pathogen-saprotroph-symbiotroph. Saprotroph was the main trophic mode, accounting for 39.30–60.59% of the total fungi at each stage, and its relative abundance increased gradually with the development of G. littoralis ( , ). Pathotroph-saprotroph-symbiotroph was the second most abundant trophic mode, accounting for 7.11–16.36% of fungi at each stage ( ). Functional difference analysis showed that only the abundance of pathotroph-saprotroph-symbiotroph differed significantly among development stages, first increasing and then decreasing with the development of G. littoralis ( and ). Finally, PCA of the functions at the three development stages showed a distinct differentiation between the functions at the seedling stage and those at the vigorous growth and harvesting stages ( ). Microbial biomarkers in the rhizosphere at different development stages The random forest method was used to identify microbial biomarkers in the rhizosphere of G. littoralis . The optimal models for rhizosphere bacteria and fungi ( and ) were established based on cross-validation. The biomarkers identified in the samples were ranked by their importance to the classification (mean decrease accuracy, MDA) from large to small ( and ), and heatmaps of the rhizosphere microbes at the different development stages were also produced ( and ). At the bacterial order level, 47 important biomarkers were identified ( and ); these biomarkers accurately distinguished samples from the different stages ( ). The five orders with the greatest impact on the bacterial community were Bdellovibrionales, Actinomycetales, Erysipelotrichales, Enterobacterales and Lactobacillales ( ). At the fungal order level, 22 important biomarkers were identified ( and ), and these also accurately distinguished samples from the different development stages. The top five fungal orders were Mucorales, Cystobasidiales, Ustilaginales, GS11 and Wallemiales ( ).
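Since the biomarker screening above was performed with the randomForest package in R, a minimal sketch of that step is shown below. The objects `taxa` (a samples-by-orders abundance data frame) and `stage` (a factor of stage labels), as well as the tree number and fold count, are illustrative assumptions.

```r
# Sketch of the random forest biomarker screening (the study used the
# randomForest R package; object names, ntree and cv.fold are assumptions).
library(randomForest)

# 'taxa': data frame of order-level relative abundances (samples x orders)
# 'stage': factor of stage labels ("Ss", "Vs", "Hs"), one per sample
set.seed(42)
rf <- randomForest(x = taxa, y = stage, importance = TRUE, ntree = 1000)

# permutation importance (type = 1 gives mean decrease accuracy, MDA),
# sorted from large to small as in the histograms described above
mda <- importance(rf, type = 1)
mda[order(mda[, 1], decreasing = TRUE), , drop = FALSE]

# cross-validation over nested, decreasing numbers of predictors,
# as used to choose the optimal model size
cv <- rfcv(trainx = taxa, trainy = stage, cv.fold = 3)
cv$error.cv
```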
The clustering results showed that both the bacterial and fungal biomarkers at the three development stages clustered into two branches ( and ): one comprised the microbiota at the seedling stage, and the other comprised the microbiota at the vigorous growth and harvesting stages, consistent with the results of the PCoA described above ( and ). The interactions of microbes-microbes and microbes-environmental factors at different development stages Redundancy analysis (RDA) revealed that eight environmental factors drove changes in the microbiome at the different development stages ( and ). In the rhizosphere bacterial community, soil organic matter (SOM), pH, urease (UE) and available phosphorus (AP) were positively correlated with the bacterial community at the seedling stage; saccharase (SC) and nitrate nitrogen (NN) were positively correlated with the bacterial community at the vigorous growth and harvesting stages; and alkaline phosphatase (AKP) and available potassium (AK) influenced the bacterial community at all stages ( ). The bacteria that played a major role at the seedling stage were Acidobacteriota, Actinobacteriota, Armatimonadota, Bacteroidota, Chloroflexi, Dependentiae, Firmicutes, Fusobacteriota, Gemmatimonadota, Methylomirabilota, Nitrospirota, Planctomycetota and Verrucomicrobiota; the main bacteria at the vigorous growth and harvesting stages were Proteobacteria, Patescibacteria, Bdellovibrionota, Campylobacterota, Cyanobacteria, Desulfobacterota, Elusimicrobiota and Myxococcota ( ). In the rhizosphere fungal community, SOM, AP, pH, AK, AKP and UE were the main influencing factors at the seedling stage, while SC and NN were the main influencing factors at the vigorous growth and harvesting stages ( ). Chytridiomycota, Mortierellomycota and Olpidiomycota were the main components of the fungal community at the seedling stage, and Ascomycota, Basidiomycota, Glomeromycota, Mucoromycota and Rozellomycota were the main components at the vigorous growth and harvesting stages ( ). Network analysis was carried out to examine the interactions among rhizosphere microbiota and between rhizosphere microbiota and environmental factors. The rhizosphere bacterial community had a more complex interaction network than the rhizosphere fungal community ( and ), which may be attributed to the higher abundance and diversity of the bacterial community. In the bacterial community, all 13 environmental factors showed positive or negative associations with bacteria at the phylum level, with more negative (56.60%) than positive (43.40%) correlations observed ( ). Planctomycetota contributed the most to the network, followed by Firmicutes, Myxococcota and Proteobacteria. In the fungal community, only 7 of the 13 environmental factors were associated with rhizosphere fungi ( ). AN may be the key factor driving the composition of the rhizosphere fungal community, and Chytridiomycota, Mortierellomycota and Olpidiomycota may be the three phyla with the greatest influence on the fungal community ( ). Interestingly, total hydrogen (TH) and total sulfur (TS) were negatively correlated with rhizosphere bacteria but positively correlated with rhizosphere fungi, suggesting that rhizosphere bacteria and fungi respond differently to environmental factors. A co-occurrence network of bacteria-fungi-environmental factors was also established ( ), in which most of the bacteria-bacteria and bacteria-fungi interactions (60%) were positive.
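A compact sketch of these last two steps, a vegan-based RDA and assembly of a Pearson-correlation edge list for Gephi, is given below. The data frames `phyla` (samples × phylum abundances) and `env` (samples × environmental factors) are assumed, and the |r| > 0.6 cutoff is an illustrative choice rather than the threshold used in the study.

```r
# Sketch of the RDA and of building a Pearson-correlation edge list for Gephi.
# 'phyla' and 'env' are assumed data frames; the |r| > 0.6 cutoff is illustrative.
library(vegan)

ord <- rda(phyla ~ ., data = env)   # redundancy analysis of taxa vs. environment
summary(ord)                        # inspect constrained axes and biplot scores

cors  <- cor(as.matrix(phyla), as.matrix(env), method = "pearson")
edges <- subset(as.data.frame(as.table(cors)), abs(Freq) > 0.6)
names(edges) <- c("Source", "Target", "Weight")   # column names Gephi expects
write.csv(edges, "edges.csv", row.names = FALSE)  # import into Gephi as an edge table
```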
A remarkable variation in OTUs was found across the development stages of G. littoralis . The number of OTUs in both the bacterial and fungal communities first decreased and then increased along the seedling-vigorous growth-harvesting sequence. The proportion of OTUs specific to each stage showed a similar pattern in the bacterial and fungal communities, with values of 14.70%, 4.81% and 6.74% for bacteria and 33.33%, 20.68% and 41.46% for fungi, respectively. These results may reflect the process of rhizosphere microbial colonization. At the seedling stage, because the root system of G. littoralis is still weak, the OTUs may largely represent indigenous soil microorganisms. By the harvesting stage, rhizosphere microbes adapted to the root exudates have gradually been enriched and the community structure has become more stable, so the numbers of total and stage-specific OTUs increase. In terms of species abundance, Proteobacteria, Acidobacteria and Actinobacteria were the main components of the bacterial community, and Ascomycota and Basidiomycota were the dominant fungi at each development stage. This is similar to the species composition of the rhizosphere microorganisms of other plants ( ; ). The composition of the rhizosphere bacteria of G. littoralis is also consistent with the endophytic bacterial composition previously reported ( ). This is as expected, as previous studies have confirmed that most endophytic bacteria come from the soil ( ); there have been no reports on the composition of the endophytic fungi of G. littoralis . Proteobacteria play an important role in plant growth and development by promoting photosynthesis and improving the utilization of carbon sources ( ). The abundance of Proteobacteria increased significantly from the seedling stage to the vigorous growth stage but remained stable from the vigorous growth stage to the harvesting stage, which may be related to changes in soil carbon and nitrogen content; an increased abundance of Proteobacteria allows more efficient use of complex carbohydrates and promotes plant adaptation to high-carbon and high-nitrogen environments. The abundance of Acidobacteria decreased gradually from the seedling stage to the harvesting stage. Acidobacteria mainly degrade plant residues and participate in the metabolism of carbon compounds and in photosynthesis; the decline in their abundance during the later development of G. littoralis may reflect the microbial community using simple amino acids in the early stage and complex carbohydrates in the late stage. These characteristics have a positive impact on the growth of G. littoralis . Ascomycota is the largest fungal group and is important to rhizosphere ecosystems ( ); it contains a large number of beneficial species ( ) as well as harmful species ( ). Basidiomycota is an important pathogen lineage in the fungal kingdom ( ) and is the main fungal group responsible for rust disease of G. littoralis . It was relatively abundant at all three development stages (9.88–15.2%), with the greatest abundance at the vigorous growth stage, suggesting that invasion by pathogenic fungi may have occurred at that stage, although G. littoralis remained healthy.
Previous studies have shown that Proteobacteria and Actinobacteria are important phyla associated with plant disease suppression ( ). It is therefore speculated that the highly abundant Proteobacteria and Actinobacteria may participate in disease suppression and thus help maintain the health of G. littoralis . Comparing the microbial abundances in soil at the harvesting stage with those previously reported for soil affected by continuous cropping obstacles of G. littoralis ( ), the abundances of Proteobacteria (41.15% vs 33%) and Basidiomycota (9.88% vs 3.8%) were markedly higher, while those of Actinobacteria (13.66% vs 13%) and Acidobacteria (17.94% vs 17%) were similar. These results suggest that the more abundant Proteobacteria may play a key role in disease suppression. In the cultivation of G. littoralis , controlling diseases and pests is an important prerequisite for ensuring yield, and knowing when the main disease-related bacteria occur and at what abundance can inform field management and yield improvement across the development stages of G. littoralis . The alpha diversity of the bacterial community declined along the seedling-vigorous growth-harvesting sequence, consistent with results in other species ( ; ), while the alpha diversity of the fungal community first decreased and then increased over time. These results indicate that the soil is shifting from a bacteria-dominated state towards a fungi-dominated state. However, the ACE, Chao1 and Shannon indices of bacteria were significantly higher than those of fungi at all development stages, suggesting that the bacterial species in the rhizosphere soil are more abundant and diverse than the fungi. Previous studies have suggested that bacteria-dominated soils are healthier: as the rhizosphere soil shifts from a bacteria-dominated to a fungi-dominated state, diseases and pests increase, leading to lower crop yield and other adverse effects ( ). The results presented here indicate that a healthy rhizosphere state may underpin the high quality of the medicinal herb “Beishashen” at the harvesting stage. Combined with the changes in microbial abundance, the difference in alpha diversity between bacteria and fungi may signal a changing trend in soil health during the late development of G. littoralis , with the high abundance of Proteobacteria maintaining its healthy development. High functional diversity was observed in the rhizosphere bacterial and fungal communities of G. littoralis . For the bacterial community, the 10 key functions dominated at all development stages. Of these, the predicted function “Membrane transport” is thought to be associated with symbiotic interactions between the bacterial community and other organisms ( ; ), suggesting that rhizosphere bacteria may mediate rhizosphere interactions throughout the growth of G. littoralis . The functions “Carbohydrate metabolism”, “Energy metabolism” and “Amino acid metabolism” are associated with carbon and nitrogen fixation, and rhizosphere bacteria are speculated to convert soil carbon and nitrogen into substances available to the plant. Functional difference analysis revealed significant differentiation between the functions at the early development stage and those at the middle to late development stages, which is supported by the PCA results.
The abundance of the “Environmental adaptation” function at the harvesting stage was significantly higher than at the seedling stage. This function mainly relates to “plant-pathogen interaction”, indicating that the bacterial community may be involved in plant disease resistance at the harvesting stage. For the fungal community, functional differentiation also occurred between the early development stage and the middle to late development stages. The relative abundances of saprotroph, symbiotroph and pathotroph-saprotroph-symbiotroph fungi increased gradually with the growth and development of G. littoralis ; these fungi may accelerate the decomposition of soil organic matter and convert insoluble soil minerals into nutrients available to G. littoralis . The functional difference analysis likewise confirmed differentiation between early and late development: significant functional changes were found between the seedling and vigorous growth stages and between the vigorous growth and harvesting stages, but not between the seedling and harvesting stages. Biomarkers at the three development stages were identified using a random forest machine learning algorithm. The 47 bacterial and 22 fungal biomarkers exhibited different correlations at different development stages, and these characteristics can be used to distinguish G. littoralis samples at different development stages. Biomarkers may be useful indicators for identifying plant origin; for example, some studies have proposed using biomarkers to determine the origin of imported soybeans ( ). As a traditional Chinese medicine, the quality of “Beishashen” is of paramount importance: medicinal herbs grown in a specific area are considered to have better efficacy and are called “authentic medicinal herbs”. The samples collected in this study came from an authentic producing area of G. littoralis , so this work can provide an option for reliable origin-traceability technology for medicinal herbs. The structure and diversity of rhizosphere microorganisms are influenced by multiple environmental factors. The RDA showed that eight environmental factors significantly influenced the rhizosphere microbial community, and different factors acted on the microbial community at different development stages. SOM, pH, UE and AP positively affected the bacterial and fungal communities at the seedling stage; among these, pH had the most marked influence on both communities, while the bacterial community was significantly affected by SOM and the fungal community by AP. These results demonstrate the importance of pH, SOM and AP in the early development of G. littoralis . Previous studies have shown that rhizosphere microbial communities promote the colonization of beneficial microbes by regulating pH to suppress plant immunity ( ), suggesting that pH may play an important role in rhizosphere microbial colonization at the seedling stage of G. littoralis . SOM and AP are important indicators of soil fertility, and their effects on the early plant rhizosphere require further study. SC and NN showed significant positive effects on the bacterial and fungal communities at the vigorous growth and harvesting stages. SC hydrolyses sucrose in soil to produce glucose, an important carbon source for most microbes, and NN is an important nitrogen source for most microorganisms.
Therefore, it can be inferred that the rhizosphere microbes take up large amounts of carbon and nitrogen and carry out vigorous metabolic activity at the vigorous growth and harvesting stages of G. littoralis ; the metabolites produced can provide available nutrients for G. littoralis and promote the production of active secondary metabolites. The RDA also identified the bacteria and fungi that play a positive role at different development stages, so microorganisms beneficial to the growth of G. littoralis can be screened; isolating and culturing these beneficial microorganisms is of great significance for improving the yield and quality of G. littoralis . Understanding the influence of environmental factors on the rhizosphere microbial community can help us construct beneficial microbiomes, by regulating environmental factors, to promote healthy plant growth and improve the yield of medicinal herbs. Co-occurrence networks can identify putative interactions between microorganisms in the environment ( ). The results revealed a more complex co-occurrence network in the bacterial community than in the fungal community, and the formation of the bacterial community involved more environmental factors than that of the fungal community. Other studies have likewise found complex co-occurrence networks in rhizosphere bacterial communities ( ; ), and some have shown that rhizosphere effects ultimately lead to a decrease in bacterial community diversity and more complex symbiotic networks ( ; ); our findings support this conclusion. To capture all rhizosphere microbial interactions, a co-occurrence network of bacteria, fungi and environmental factors was established, in which most of the bacteria-bacteria and bacteria-fungi interactions (60%) were positive. This suggests that the rhizosphere microorganisms form a complex mutualistic symbiotic network, which might benefit the growth and development of G. littoralis . These results are also consistent with previous studies showing that most bacteria cluster together as functional groups, which use plant-derived resources more efficiently and provide a greater number of services ( ). In this study, high-throughput sequencing revealed the dynamic changes of the rhizosphere microbial communities (bacteria and fungi) at different development stages of G. littoralis . The composition, diversity and function of the rhizosphere bacterial and fungal communities were closely related to the development stage of G. littoralis , and eight environmental factors played a vital role in driving rhizosphere microbial changes across the stages. This study provides data support for understanding the structure and composition of the rhizosphere microbial community during the development of G. littoralis and lays a foundation for improving its yield and quality by regulating the microbial community in the future. Many questions remain to be explored, including how to utilize rhizosphere microbial resources, how to increase microbial diversity in agro-ecosystems, and the mechanisms by which microbial diversity contributes to agricultural production. 10.7717/peerj.14988/supp-1 Supplemental Information 1 Principal component analysis (PCA) based on the functional abundance of rhizosphere microorganisms.
(A) Bacterial function at the level of class 2 KEGG pathways; (B) bacterial function at the level of class 3 KEGG pathways; (C) fungal distribution based on trophic modes. 10.7717/peerj.14988/supp-2 Supplemental Information 2 Co-occurrence network of rhizosphere bacteria, fungi and environmental factors. Purple circles represent bacteria; green circles represent fungi; magenta circles represent environmental factors. 10.7717/peerj.14988/supp-3 Supplemental Information 3 Relative abundance of bacteria at the phylum level at each development stage. 10.7717/peerj.14988/supp-4 Supplemental Information 4 Relative abundance of fungi at the phylum level at each development stage. 10.7717/peerj.14988/supp-5 Supplemental Information 5 Percentage of the top ten functions of rhizosphere bacteria in G. littoralis . Ss, the seedling stage; Vs, the vigorous growth stage; Hs, the harvesting stage. 10.7717/peerj.14988/supp-6 Supplemental Information 6 Percentage of rhizosphere fungi of different trophic modes in G. littoralis . Ss, the seedling stage; Vs, the vigorous growth stage; Hs, the harvesting stage. 10.7717/peerj.14988/supp-7 Supplemental Information 7 Raw data for soil physicochemical properties (environmental factors). Ss, the seedling stage; Vs, the vigorous growth stage; Hs, the harvesting stage. AP, available phosphorus; AK, available potassium; AN, ammonium nitrogen; SOM, soil organic matter; TOC, total organic carbon; NN, nitrate nitrogen; SC, saccharase; UE, urease; AKP, alkaline phosphatase; TN, total nitrogen; TH, total hydrogen; TC, total carbon; TS, total sulfur.
Faculty- or senior resident-led SNAPPS for postgraduate teaching in pediatrics
43f6cf83-8331-4838-8cdf-d238d7b33b8a
9997607
Pediatrics[mh]
Institutional Ethics Committee, Maulana Azad Medical College, number F.1/IEC/MAMC/(80/08/2020/no 311) dated 14 Jan 2021. Nil. There are no conflicts of interest.
Aerosolize this: Generation, collection, and analysis of aerosolized virus in laboratory settings
96e11830-df04-4f59-b592-f465c832c359
9997909
Microbiology[mh]
Humans release respiratory droplets following various activities, from breathing and speaking to coughing and sneezing, and even singing and playing wind instruments. These droplets vary in size (from submicron diameters to droplets visible to the human eye) and are emitted at variable levels and momentum depending on the respiratory action and individual person . In general, respiratory droplets contain a mix of water, inorganic substances, and proteins, as well as pathogens if emitted from an infected host; these droplets are subject to evaporation, leading to shrinkage and longer persistence in the air compared to droplets at their original size. Importantly, infection can affect the number, size, and composition of expelled droplets relative to healthy individuals, which collectively modulate the distance traveled, the viability of the virus within the droplets, and the deposition location if inhaled by a susceptible host . This complexity represents a substantial challenge for rigorous study, and as such, efforts in laboratory settings to measure the role virus-laden aerosols play in transmission events typically focus on one of these properties at a time. Numerous small mammalian models are employed in laboratory settings to evaluate virus transmissibility by the airborne route ; these models separate donor and contact animals to prevent direct or indirect contact, while only permitting air exchange between cages, with or without directional airflow. These stringent models thus implicate virus-laden aerosols as the only source of infectious virus to which the contact animals are exposed, but infrequently include collection and quantification of viral particles released by infected animals. However, inclusion of these assessments has provided critical insight into the contribution of particle size to onward transmission [ – ]. Quantification of total particles emitted into the air is often achieved with an aerosol particle sizer, which provides total aerosol counts across different size bins but does not preserve collected aerosols for subsequent analysis . Aerosols can be generated from liquid suspensions of virus in controlled laboratory settings, but the resulting aerosols can vary widely with the equipment and established procedures of different laboratories. The choice of nebulizer and sampler influences the properties of aerosols generated and recovered, respectively, as well as the preservation of virus viability . The most rigorous aerosol generation and collection systems need to balance real-time monitoring of parameters, control of airflow, and environmental conditions throughout experimentation, in tandem with safety controls to ensure all infectious material is subsequently inactivated so that laboratorian safety is prioritized . A multitude of protocol-specific parameters can modulate the properties of aerosols generated and collected in the laboratory, including temperature and humidity, diluent composition in the nebulizer and sampling device, collection time, the duration and/or manner in which aerosols are aged, and airflow through the system ( ), complicating interpretation of results between laboratories. Side-by-side comparisons of these variables can be extremely valuable but are not often conducted . In laboratory settings, aerosolization of liquids containing high concentrations of infectious virus permits the recovery of high levels of infectious virus in a sampler.
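To make the size-persistence relationship above concrete, an illustrative back-of-the-envelope estimate (standard aerosol physics, not a result drawn from the cited studies) uses the Stokes terminal settling velocity of a small spherical droplet:

```latex
% Illustrative only: Stokes terminal settling velocity of a spherical droplet
% of diameter d and density \rho_p in air of dynamic viscosity \mu.
\[
  v_s = \frac{\rho_p \, g \, d^{2}}{18\,\mu}
\]
% With \rho_p \approx 10^3 kg/m^3 and \mu \approx 1.8 \times 10^{-5} Pa \cdot s:
%   d = 10 \mu m  ->  v_s \approx 3 mm/s    (~8 min to fall 1.5 m)
%   d = 1 \mu m   ->  v_s \approx 0.03 mm/s (~14 h to fall 1.5 m)
% Evaporative shrinkage therefore sharply prolongs airborne persistence.
```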
In the context of in vivo experimentation, infectious virus is emitted from infected animals at much lower titers, necessitating collection methods that permit sampling for prolonged periods of time, employing low volume suspensions for collection to concentrate virus, maintaining cooler temperatures in the sampler, and employing other design features to preserve virus viability. Therefore, measurement of airborne virus shedding from infected animals during transmission assessments most often involves reporting viral nucleic acid and not infectious virus [ – ]. There are numerous practical, logistical, and methodological reasons that contribute to reporting of viral genome copies and not infectious virus . Levels of genomic material are typically several orders of magnitude higher than infectious virus levels . Samplers are generally inexpensive, easy to use and decontaminate, and may be operated for long periods of time without the need for hands-on laboratory staff or direct manipulation of infected animals [ – ]; however, they often are not designed to preserve virus structure and infectivity. As such, mechanical shearing and desiccation of virus preclude accurate measurement of infectious virus in the air, allowing only for quantification of viral genomic material . Furthermore, since small animal models emit relatively low minute volumes of air into often large animal housing areas, extended (>1 hour) sampling windows may be necessary to collect detectable levels of virus-laden aerosols diluted in large volumes of air, especially if aerosols are being size-fractionated into multiple populations; longer collection durations can compromise virus viability prior to optimal storage and infectious virus quantification. Lastly, specimen handling post-collection routinely involves a freeze–thaw step, decreasing recovery of infectious virus. For these reasons, efforts to incorporate aerosol collection into routine, labor-intensive, and time-consuming laboratory procedures and pandemic risk assessment activities have been most successful when using simple sampling techniques allowing for storage of samples for analysis outside of the workflow, resulting in viral genome detections as a primary readout. These studies nonetheless provide valuable contextual information regarding the kinetics and magnitude of virus emission into the air, linking these features with the frequency and timing of virus transmission between infected animals and susceptible contacts [ – ]. Collection of infectious viruses in aerosols is most desirable, but as discussed above, there are a multitude of practical issues that preclude their routine quantification, especially during in vivo settings. For sampling devices that require animal restraint and/or sedation for optimal collection (e.g., direct collection of animals’ breath into the sampler to avoid aerosol diffusion into the environment during collection), sampling durations are limited by anesthesia schedules. Staging sensitive collection devices within reach of an awake and alert animal can be impractical or impossible if the device is not designed for such wear-and-tear (e.g., if it contains parts that can be consumed by animals); not all caging or animal restraint systems can support close staging of samplers that require a dedicated resting surface while concurrently prohibiting direct interaction between the device and curious animals.
Transmission experiments are frequently conducted with 3 to 4 pairs of donor:contact animals housed in different cages [ , , ]; should concurrent sampling be desired from all animals within the same time frame, multiple instruments and associated equipment are required, which can be cost- or space-prohibitive. Although multiple samplers have been shown to efficiently preserve infectivity following collection of laboratory-generated aerosols containing high levels of virus , collection of infectious virus during in vivo experimentation is less efficient due to the low levels of virus emitted into the air and the potential for virus viability to decrease over prolonged sampling. For example, infectious virus was collected from influenza virus–infected ferrets following tidal breathing or induced sneezing via a cascade impactor , but at much lower titers than following aerosol generation using a Collison nebulizer . Samplers employing water-based condensation have been gaining interest, as their gentle collection methods concentrate infectious virus more effectively than other approaches . However, there is still a need for sampling devices, at a range of price points, that balance preservation of virus viability with the scalability and flexibility needed for routine use in laboratory settings, especially during in vivo experimentation ( ). Despite the challenges highlighted above, there is a need to better define the role of aerosols in laboratory-based viral transmission assessments and to generate insights that translate into substantive benefits to public health. To this end, recent comparative studies have assessed the relative impact that strain-specific, diluent-specific, environmental condition–specific, and device-specific changes have on virus aerosolization . Laboratory-generated aerosols cannot fully emulate the true complexity of airborne particles exhaled from mammalian hosts, but efforts to better understand virus behavior within these particles under defined conditions nonetheless improve our ability to extrapolate results to real-world settings, such as sampling in agricultural environments . Wider adoption of aerosol collection during in vivo assessments of virus transmissibility would be of benefit. Beyond current efforts to quantify airborne virus released from infected animals and link these data to virus transmissibility [ – ], future efforts to obtain genomic sequence data from airborne virus will facilitate bridging of within-host and between-host evolution and transmission dynamics. Furthermore, most reported virus transmission assessments have been conducted in serologically naïve, healthy animals; studies should be expanded to include hosts with diverse immunological and/or health profiles to better elucidate how altered host states modulate the release of virus-laden aerosols post-infection. As laboratory-based transmission studies continue to play crucial roles in virus risk assessment efforts , continued inclusion of the role aerosols play in this dynamic process represents a necessary endeavor.
Person-centred care in the Dutch primary care setting: Refinement of middle-range theory by patients and professionals
ab850b13-f5a1-4784-a563-97dcf489c69c
9997984
Patient-Centered Care[mh]
Healthcare systems are gradually transforming from biomedically oriented systems towards systems more oriented to person-centred care (PCC) . To understand and adequately address a person’s health problem(s) and experience of illness, a disease-oriented perspective alone is not sufficient . Worldwide, person-centredness has gained more recognition over the years and is considered a core element of high-quality healthcare [ – ]. Driving factors behind this recognition are the growing and changing demand for care, more technological possibilities, and the rising healthcare costs . When PCC also addresses non-medical causes of and solutions for physical distress, it could reduce the costs of more expensive (hospital-based) medical specialist care. A core element of PCC is to create a partnership between the healthcare professional and the care recipient, in which the unique needs and beliefs of the latter are the starting point for the provision of care . PCC is considered a core value of primary care . In the Netherlands general practitioners (GPs) have a central role in the healthcare system. As GPs are the first point of contact for individuals experiencing health problems, and an increasing number of patients with complex care needs end up in primary care, it is especially important for GPs to provide appropriate support by applying a holistic and person-centred approach that contributes to the overall well-being of individuals . The Dutch healthcare system is recognised for its well-developed primary healthcare . Important elements here are GPs acting as gatekeepers for specialist care and, hence, the controlled accessibility of secondary medical specialist care. The assumption behind this is that a well-functioning primary care setting absorbs as much as possible of the care demand that would otherwise end up in the more expensive secondary care. The implementation of practice nurses in Dutch GP practices has increased the interdisciplinary character of care . In addition to the gatekeeping function, empanelment is also considered an important component for building or strengthening primary care . Literature advocating PCC is widespread, and the experiences gained with PCC in primary care in the Netherlands are increasingly shared, often in terms of best practices, barriers to implementation and conditions for success . However, despite the conceptual attractiveness of PCC, in daily practice PCC remains poorly understood and implemented . A previously published rapid realist review (RRR) of international literature aimed to provide insight into for whom, how, why and under what circumstances PCC in primary care does (or does not) work . The resulting middle-range programme theory (PT) ( ) demonstrated that healthcare providers (HCPs) should be trained and equipped with the knowledge and skills to communicate effectively (i.e., in easy-to-understand words, empathically, checking whether the patient understands everything, listening attentively), tailored to the wishes, needs and possibilities of the patient, which may lead to higher satisfaction of patients, informal caregivers, and/or healthcare professionals. This way patients will be more involved in their care process and in the shared decision-making process, which may result in improved concordance and an improved treatment approach. A respectful and empathic attitude of the HCP plays an important role in establishing a strong therapeutic relationship and improved health (system) outcomes.
Together with good accessibility of care for patients, setting up personalised care planning with all involved parties may positively affect patients’ self-management skills. Good collaboration within the team and between different domains is desirable to ensure good care coordination. However, since the application of PCC in Dutch primary care is expected to differ from primary care in other countries, it is deemed relevant to assess whether the items obtained from the international RRR apply to the Dutch setting. In doing so, the active involvement of experts from the field is of great importance, both for providing input and for translating theoretical insights into suggestions for daily practice . Moreover, PCC should also take into account diversity in age, gender, socio-economic status, education, migration background, (multi)morbidity as well as personal preferences and needs . For example, approximately 25% of the Dutch population has a migration background , more than 18% are low-literate , and 30% have insufficient or limited health literacy skills . People from these groups often have poorer health, partly because the care provided insufficiently matches their needs and possibilities. Existing treatment protocols and standards of care are largely based on scientific evidence usually obtained from study participants outside these groups and therefore apply to these groups only partially or not at all . The objective of this study is to validate the items (face validity) resulting from the international RRR for the Dutch setting by assessing consensus on the relevance of the items among different stakeholders. Patient and public involvement This study was commissioned by the National Health Care Institute, which, among other things, encourages good healthcare by helping all parties involved to continually improve healthcare quality. This study is part of a larger study for which a steering committee was established. The ten members of the steering committee were purposively selected based on their expertise in the PCC or primary care field and were primary care practitioners, senior researchers, medical specialists, policy makers, and patient representatives (specifically for patients with limited (health) literacy and a migration background) (see Acknowledgements). Several meetings with the steering committee were held during the study (February 2018, December 2018, April 2019, December 2019). These meetings were held to provide feedback and guidance on the methods and the interpretation of (interim) results, and to provide overall advice regarding the research. Stakeholder perspectives were considered when testing and refining the PT derived from the RRR. Members of the steering committee were asked to discuss, and to indicate, whether the items on context, mechanisms and outcomes identified in the literature match what they see in Dutch practice. Programme theory One of the key elements in doing realist research is to establish a PT. A PT explains what mechanisms will generate the outcomes and what features of the context will affect whether or not those mechanisms operate . Context items refer to wider external factors, and mechanisms are considered enablers (underlying entities, processes, structures, reasoning, choices, or collective beliefs). The interaction between context and mechanisms leads to outcomes (intended and unintended).
In the international RRR we established a middle-range PT (see and ), which we aimed to refine based on the findings of this study in the Dutch setting. Study design In this qualitative study, four focus group discussions (FGDs) were held to encourage group interaction between participants and to explore and clarify individual and shared perspectives . FGD 3 and 4 were combined with a Delphi study. The four FGDs were held with different stakeholders to validate the findings from the international RRR for the Dutch setting. Each FGD lasted approximately 90 minutes. All FGDs were held at a neutral place that participants already knew (i.e., at a research organisation) and where they felt comfortable. Participants of FGD 1 and 2 were patient representatives and patients with limited health literacy skills. Participants of FGD 3 and 4 were various primary care professionals. Due to the different target groups, a target group-specific approach was used. The different approaches are explained in more detail below. Recruitment Participants of FGD 1 and 2 were recruited through purposive sampling. Adult participants were approached through trusted network organisations: the Network of Organisations of Older Migrants (NOOM), which focuses on diverse groups of older migrants in the Netherlands, and the ABC foundation, a volunteer organisation for low-literate people throughout the Netherlands. During recruitment, we aimed to achieve maximum variation in gender, age, ethnic background, educational level and level of health literacy. FGD 1 and 2 were led by a researcher [AA] and another moderator experienced in leading FGDs with people with low (health) literacy skills [NHvR]. FGD 1 and 2 took place in August 2018. Participants of FGD 3 and 4 were various primary care professionals, members of care organisations, policy makers, and researchers. Participants of FGD 3 and 4 were recruited (purposive sampling) through the expert network of the researchers of this project, aiming for variation in gender, age, professional background, and experience with person-centred care. To be included in the FGD, participants needed to have scientific (research) experience and/or practical work experience in a professional or service organisation regarding person-centred care in primary care. FGD 3 and 4 were led by two researchers [AA and HJMV or MvdM]. FGD 3 and 4 took place in December 2018. Data collection For FGD 1 and 2 an open-ended semi-structured topic guide was used by the moderators, compiled from the context items, mechanisms, and outcome variables from the RRR ( ). Only patient-related items were included; they were presented in the form of simply formulated questions during the FGDs ( and ). Participants could also ask other questions and/or share their own story or experiences, which enabled the researchers to collect additional data. Participatory learning and action (PLA) techniques were applied to facilitate equal input from participants, thereby stimulating their active participation. PLA is a form of participatory research, which emphasises the need for stakeholders’ active engagement across the full range of research activities, including data generation and data analysis, and is specifically suitable for meaningful involvement of stakeholders with limited power or skills . Field notes were made during the FGDs.
In FGD 3 and 4, validation of the CMO-items by participants took place by means of an e-Delphi questionnaire ( ) and an FGD during the second round ( ). The Delphi technique is a widely used research method, which consists of several rounds of data collection to capture and structure the knowledge and opinions of a panel of participants on a topic in which they have expertise . Field notes were made during the FGDs. Delphi round 1 Participants received a web link to an online version of the questionnaire in SurveyMonkey (version 2018). The questionnaire started with an introduction of the study and its objectives, the structure of the questionnaire, and the definitions of the constructs: context, mechanisms, programme activities, and outcomes. The questionnaire continued with six general questions regarding gender, age, highest level of education, current job position, number of years working within the position, and number of years of experience with PCC. The questionnaire contained another 63 questions related to the CMO-data derived from the RRR. Experts were asked to assess, on a 9-point Likert scale (1 = very irrelevant to 9 = very relevant), the relevance for PCC in primary care in the Netherlands of the context items (n = 30), mechanisms (n = 19), and outcomes (n = 14) identified in the RRR. The questionnaire ended with two open questions: participants could suggest additions to the stated context items, mechanisms, and/or outcomes based on personal experience, and could give any additional comments/suggestions about the questionnaire. The answers of the participants were completely anonymised. The respondents were given a total of two weeks to complete the questionnaire. FGD (second round) Before the second round of the Delphi questionnaire was completed, an FGD was held ( ). The aim of this FGD was to discuss the context items, mechanisms and outcomes for which insufficient consensus or dissensus was found in round 1. During this FGD, the group results from the first Delphi round were provided, including 1) the median assessment results and interquartile range (IQR) for each item, and 2) the level of (insufficient) consensus between the participants and whether consensus was achieved . The IQR is the difference between the 3rd and 1st quartiles, the range within which the central 50% of values lie, and also shows the degree of convergence of the answers [ – ]. The items for which dissensus was found were presented and discussed during the FGD to give insight into the level of (dis)agreement between experts in the first round and to generate additional insights about the specific item(s). Providing feedback on the level of group agreement reached influences the level of consensus subsequently achieved . Any misinterpretation of item(s) could thus be clarified. Delphi round 2 An online version of the questionnaire was sent including the context items, mechanisms, and outcomes for which no consensus was found in round one . The questionnaire started with the same general questions as round 1. Then, participants were asked to indicate the degree of relevance of context items, mechanisms and outcomes for PCC in primary care in the Netherlands on the same 9-point Likert scale. At the end of the questionnaire, participants had the possibility to add items that were not included in the questionnaire and could also provide general comments/suggestions on the questionnaire. For round 2, the respondents were given a total of two weeks to complete the questionnaire.
Data analysis All FGDs were audio-taped and transcribed verbatim manually. Using thematic analysis techniques , text segments were assigned a code if they related to a specific theme/topic, using an inductive, iterative process. Categories with similar content were investigated for inter-relationships and further refined. Half of the data was coded independently by two researchers [AA, MvdM] to maximise credibility and trustworthiness . Any differences in code application were resolved by discussion with a third researcher [HJMV]. Data were analysed both descriptively and exploratively. For the Delphi rounds in FGD 3 and 4, a 9-point Likert scale (1 = very irrelevant to 9 = very relevant) was used to indicate the degree of relevance of the CMO-items; this scale was chosen to capture participants’ judgements in the most sensitive manner. For analysis, data were recoded into: irrelevant (1–3), equivocal (4–6) and relevant (7–9). Recoding enabled us to assess consensus on these meaningful levels and hence derive recommendations for improvement. To determine the level of consensus within the Delphi panel, many studies use a predetermined level of consensus among the experts . However, the literature does not describe a standard threshold for reaching consensus , with thresholds for consensus varying from 55–100% . In this study the level of consensus was set at 75% or more [ , , ], with the condition that less than 15% of participants scored in the opposite range of the scale, namely the 1–3 range . All items with scores in the 4–6 range and without consensus were presented again to the Delphi panel in round 2. Respondents’ overall consensus on each context item, mechanism, and outcome was analysed based on the median of the group’s scores. The analysis was performed in MS Excel 2018. Items found relevant by consensus in FGD 1 and 2 and/or FGD 3 and 4 remained part of, or were added to, the PT. Items with consensus on irrelevance, or without consensus, were removed from the PT. Trustworthiness This study largely complies with the COnsolidated criteria for REporting Qualitative research (COREQ) Checklist, a checklist for explicit and comprehensive reporting of qualitative studies (in-depth interviews and focus groups) . To increase the credibility of this study, multiple FGDs were held, multiple stakeholders’ perspectives were included, and triangulation of data collection methods took place. Regarding transferability, sampling strategies, detailed descriptions of participants, a description of the topic list, and the procedure of methods were included. With respect to confirmability, (interim) results were presented to the commissioner of this study and the steering committee of this study. Regarding dependability, multiple authors independently coded the transcripts, interpretation of the results took place individually by multiple authors, and participant quotations were included to accurately report their perspectives. Ethics As this study does not involve patients or study subjects as defined by the Dutch Medical Research Involving Human Subjects Act (WMO), ethical approval was not needed. However, all participants provided their (verbal) consent and participation in the survey was anonymous.
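The recoding and consensus rule described under Data analysis above lends itself to a compact computation. The sketch below is our illustration of that rule, not the authors' workbook (the original analysis was performed in MS Excel 2018): scores are recoded into the three bands, consensus requires at least 75% of scores in one band with fewer than 15% in the opposite extreme band, and the median and IQR are the feedback statistics reported back to the panel. The panel scores shown are hypothetical.

```python
# Minimal sketch of the Delphi consensus rule described above (illustrative;
# the study's analysis was performed in MS Excel 2018).
from statistics import median, quantiles

def band(score):
    """Recode a 9-point Likert score: 1-3 irrelevant, 4-6 equivocal, 7-9 relevant."""
    return "irrelevant" if score <= 3 else "equivocal" if score <= 6 else "relevant"

def consensus(scores):
    """Band holding >=75% of scores with <15% in the opposite extreme band,
    or None if no consensus (item carried over to round 2)."""
    n = len(scores)
    share = {b: sum(band(s) == b for s in scores) / n
             for b in ("irrelevant", "equivocal", "relevant")}
    opposite = {"relevant": "irrelevant", "irrelevant": "relevant", "equivocal": None}
    for b, p in share.items():
        opp = opposite[b]
        if p >= 0.75 and (opp is None or share[opp] < 0.15):
            return b
    return None

panel = [8, 7, 9, 7, 8, 6, 9, 7, 8, 7, 3]   # hypothetical scores of 11 experts
q1, _, q3 = quantiles(panel, n=4)           # IQR bounds, fed back to the panel
print(f"median = {median(panel)}, IQR = {q3 - q1}, consensus = {consensus(panel)}")
```

With this hypothetical panel the item reaches consensus on "relevant" (9 of 11 scores in the 7–9 band, 1 of 11 in the 1–3 band); shifting two more experts into the 4–6 band would drop the relevant share below 75% and carry the item into round 2.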
FGD 1 and 2 with patient representatives FGD 1 and 2 consisted of a total of 14 participants. In the participants’ characteristics are shown. Participants who were not originally born in the Netherlands had been in the Netherlands for an average of 44 years (SD: 11.4 years). All context items, mechanisms, and outcomes presented to participants were found relevant for PCC in primary care in the Netherlands. This concerns the context items: patients having social support (networks), good collaboration between HCPs, patient education being provided, sufficient time during consultation, setting up personalised care planning, and making use of e-health options. The mechanisms deemed relevant for PCC in primary care in the Netherlands are HCPs providing effective communication (including listening attentively), HCPs having a holistic approach, HCPs showing respect and having an open, friendly, and empathic attitude, patients having an active role in their care process, establishing a therapeutic relationship, self-management support, and shared decision-making. The outcomes considered relevant concerned health outcomes, patient involvement, satisfaction of the patient, therapy concordance, self-management skills, and an improved treatment approach. Beyond considering them relevant, participants had additional comments on the items below. The participants reflected on these items based on their own experience, indicating that they are relevant for PCC in primary care but not always carried out properly in practice. Communication According to the participants, HCPs did not (yet) adapt their communication sufficiently to the needs and wishes of the patients. Participants stated that “in the communication by the care provider more attention should be paid to diversity” (P1 and P2). One participant expressed that “communication is extremely important when you visit the GP. Often older migrants cannot communicate well in Dutch, but they do know what they want to ask in their own language. They often bring their son or daughter to the GP together with them to ask questions [related to medical health of patient]” (P1). In addition, the use of aids (pictures, attributes, etc.) during the consultation could support communication, which is currently done to a very limited extent. Also, patients often had difficulties understanding health information and medical terms, while most of them did not indicate this. This is particularly the case for low-literate people and migrants, who had difficulty with the (Dutch) language and were therefore limited in their communication options.
One participant mentioned that "people still don’t have the guts to say they are illiterate, and that’s just because of the shame associated with it" (P3). Reinforcing patients’ language skills and using interpreters can improve communication. Consultation time According to the participants, an important barrier to PCC in primary care was the consultation time with the GP, which is too short for patients to properly explain their problem. A participant mentioned that: “In my own GP practice, I am experiencing the third generation of GPs, I noticed that doctors have less and less time. The consultation really just takes 10 minutes, so you can just ask one question. If you have more questions and your time is up, you will be cut off. It becomes very clear that there is no time left” (P4). Patients often felt unheard or misunderstood, because there was insufficient time during the consultation to discuss all relevant matters or to explain everything properly. As a result, the HCP was also unable to provide adequate support based on the patient’s context and to discuss any underlying problems. Participants said: “I would like that he [the GP] gives extra time to people who have difficulties with reading and writing. He [the GP] has knowledge in the medical field, but he should also know which patients have difficulties with reading and writing. Also, it should be pointed out what the rules and regulations are here in the Netherlands compared to other countries [regarding time]” (P5). Making a double appointment with the GP could be helpful. Moreover, patients writing down discussion points at home in preparation for the consultation could contribute to more efficient use of consultation time. One participant stated that “healthcare is commercialising in such a way that everything is expressed in Euros. The GP would like to take half an hour herself [for the consultation], but the health insurer, which is focused on the money, plays a very important role here. And it’s getting worse, I feel. Sufficient time and attention for the patient are the building blocks of a relationship of trust, and this is at odds with the available time" (P4). Shared decision-making Participants experienced that shared decision-making was not properly conducted in practice. Partly because of the short consultation time, the pros and cons of different treatment options were not always explained well by the HCP. Some participants stated that patients’ insufficient insight into the disease and treatment options, as well as the expectation that the HCP is the expert in the medical field, made both parties reluctant to make shared decisions. As a result, the choice made by the HCP often played a decisive role, and the wishes and preferences of the patient often remained underexposed. Overall, participants mentioned that “I really like it when a GP asks you if you want to do something [which is part of care process] and whether you agree [with a treatment plan]” (P6). Collaboration between HCPs The collaboration between HCPs (e.g., between practice nurse and GP, or between HCPs in primary and secondary care) could be improved. Participants often experienced that the different HCPs involved in the care process were not always well informed. As a result, patients often had to repeat their story, at the expense of the limited time available. For example, (electronic) information transfer often fell short and relevant (medical) documents were insufficiently shared.
The HCPs involved also often gave conflicting advice, which led to confusion among patients. Better coordination between HCPs of the agreements made and the advice given is necessary to provide PCC. Active role patient In certain groups, such as people with low health literacy skills, patients often lacked the confidence to ask HCPs questions and to take an active role in their own health. This was partly because patients assigned a high status to the GP and placed him/her on a pedestal. These patients often did not want to bother the GP with their questions. In addition, they did not indicate by themselves that they had low (health) literacy skills because of past unfortunate events (e.g., bullying, bad experiences with HCPs ‘not knowing who the patient is’). The patient was also rarely asked by the HCP whether they had low (health) literacy skills, with the result that the HCP had insufficient knowledge about the patient’s background. As one participant stated: “it would be good if the GP knew the background of the patient and what to consider. It is very important that the doctor knows what is going on behind the person in front of him/her” (P7). Solutions to support patients in taking an active role could be to schedule an intake interview for every new patient in the practice; to inform other involved HCPs of important characteristics of the patient (e.g., low literacy); and to give patients sufficient room to ask questions, checking whether they have asked all their questions and whether they have understood the answers. On the other hand, patients can go into the consultation better prepared by writing down their discussion points and questions in advance. FGD 3 and 4 with care professionals A total of 18 experts received the invitation to participate in the FGDs, of whom eleven agreed. In the characteristics of the participants are shown. Quantitative description of consensus level In round 1, consensus among experts was achieved for 46 of the 63 items (73%). All items were found relevant for the Dutch setting, with the overall median lying in the 7–9 range. Consensus was found on 18 of the 30 context items (60%), 17 of the 19 mechanisms (89%), and 11 of the 14 outcomes (79%) ( ). Dissensus was found on 17 items, with a panel median in the 4–6 range (3 items) or the 7–9 range (14 items). These items were included in round 2. In the second round, consensus was achieved among experts for 6 of the 17 items (35%): 4 of the 12 context items (33%), 1 of the 2 mechanisms (50%), and 1 of the 3 outcomes (33%). The overall median was in the 7–9 range. For 11 items, the relevance remained undecided; the overall median was in the 4–6 range (5 items) or in the 7–9 range (4 items), while 2 items fell equally across the 4–6 and 7–9 ranges. After both rounds, consensus was found for 52 of the 63 items (83%), all of which were considered relevant. Qualitative description of context items, mechanisms, and outcomes The results for every context item, mechanism, and outcome of the first and second Delphi rounds are shown in and Files, respectively. The items from round 1 that were found to be equivocal were included in the second round.
Consensus Context items Based on both rounds, context items considered relevant for PCC in primary care in the Netherlands on macro-level were shifting the focus from a disease- and complaint-oriented approach to a more holistic approach, using evidence-based guidelines, providing sufficient capacity and time for patients during consultation, offering (more) space and resources to HCPs to experiment, and having flexible payment systems. Participants believed that “experimenting in its broadest sense should be taken into account to improve PCC towards patients” (P10, P13, P18). “For example, if you have patients with a chronic condition and you want them to take more control of their health themselves, and as a care provider you have learned a new conversation technique to be applied during consultation in which you approach the person openly and let him/her decide for themselves what they want to change [in their care process], then you have to have the space to try out the new technique, practice with it, and to improve it” (P16). On an organisational (meso) level, experts found the following items relevant: improving accessibility (e.g., to healthcare organisations, to documents, recorded consultations), good collaboration between HCPs and having a shared vision, having a supportive policy in place which strengthens the quality of PCC, especially concerning low health literacy, and better integration between information and communications technology (ICT) systems. Regarding the latter, a participant mentioned: “Better integration between ICT systems promotes cooperation, care is then better coordinated and it becomes more person-centred. Now everyone works in their own way” (P12). On an individual (micro) level, HCPs having PCC skills (e.g., regarding communication, shared decision-making, providing culturally sensitive care), whether acquired through training or during their medical education, was found relevant. In addition, HCPs providing patient education, patients having social support (networks), and patients being involved in organising care were considered relevant. A participant mentioned that “HCPs setting goals and making action plans is also very relevant, because often patients don’t know this by themselves. They often have questions during the consultation, and when the care provider reaches the bottom layer of those questions, you discover why the patient finds that important. Also, other things that are important for the patient emerge” (P10). Mechanisms On meso-level, experts found a focus on care coordination and achieving effective collaboration between patient and HCP(s) relevant. On micro-level, it is key that HCPs provide effective communication (e.g., simplifying treatment strategies and information for patients, encouraging patients to ask questions), have an open and empathic attitude, are aware of the patient’s social circumstances, have a holistic focus, respect the wishes and preferences of patients, apply shared decision-making together with patients, provide self-management support, and establish a therapeutic relationship. Also, the involvement of patients and their family/informal caregivers in the care process was found relevant.
Outcomes The following outcomes were considered relevant for PCC in primary care: an improved treatment approach with a more accurate intensity of support provided, higher therapy concordance, increased patient involvement, improved (psychological) health outcomes, improved health-related quality of life (HRQoL), higher satisfaction of patient, informal caregiver and/or HCP(s), an improved relationship between patient and HCP(s), more accessible care, higher quality of care, and higher cost-effectiveness of healthcare. One participant mentioned: “Intensity of the support provided by the HCP is very important as an outcome. You could consider it as a success factor of PCC, it is tailored support to the patient” (P12). Dissensus Context items After two rounds, a lack of agreement remained on the relevance of some items for PCC in primary care in the Netherlands, such as the application and efficient use of ICT and e-health initiatives. “The information in e-health applications needs to be in line with what the healthcare provider says. Only if the information is in line and explained well, it will reinforce each other; otherwise it will lose its function.” (P13) “E-health applications may not work for low-literate people or non-native speakers. Moreover, there are also people that are digitally illiterate” (P14). There was also dissensus on the item having sufficient male and female HCPs per practice, as participants found that “there are people who would like to have a male or a female care provider, it’s nice that people have that choice. But whether you choose a male or female doctor, they both have to provide PCC, regardless of their gender” (P15). Some participants believed that providing better administrative support for HCPs might positively influence PCC, but did not consider it a precondition for providing PCC. “Providing better administrative support for caregivers can reduce administrative barriers to increase working in a person-centred way. The [consultation] time you can spend on a patient is already limited, so if you can spend less time on administrative things such as electronically saving or capturing what has been discussed with the patient such as setting the goals, you have more time to provide PCC to the patient. But it is not a precondition to provide good PCC and therefore, not relevant” (P16). Regarding the item preparation of the consultation by the patient, it was mentioned that “the preparation of a consultation by the patient is not by definition relevant for the provision of person-centred care by the care provider” (P9). “It is nice if a patient prepares a consultation, it can be very helpful. The question is also whether each patient can prepare the consultation, whether he/she is competent enough to do so. Someone who actively thinks about his/her health makes the conversation easier, but it is not a condition for the provision of PCC, that is the task of the care provider” (P8). About the item patients having a high/low socioeconomic status (SES), some mentioned that “having a high or low SES is not relevant for providing PCC. Most of the time it does require more effort to provide PCC to people with a low SES. But providing care to people with a high SES, such as expats, can also be challenging, as they are not familiar with the systems [in the country], but are highly educated at the same time. SES is not decisive for PCC” (P12, P15).
Dissensus was also found on the items setting up a personalised care planning and HCPs stimulating patient empowerment. Mechanisms There was no agreement on the relevance of HCPs stimulating self-monitoring by patients. It was mentioned that “It is important that the patient can monitor his own medical condition. However, a person with low health literacy skills with for example severe rheumatism may need someone else to monitor him/her. Stimulating by the care provider is important, but you have to take into account what someone is able to do. I don’t think everyone can and will monitor their own health. It is beneficial for those who can” (P11). Outcomes No consensus was found on the items self-management skills of patients and health system outcomes (reduced use of the healthcare system, fewer referrals, fewer follow-up examinations, reduced emergency department visits, reduced hospital (re)admissions) for PCC in primary care in the Netherlands. Additional items In addition to the items identified in the literature, the participants suggested several other items, such as caregivers having more pleasure in their job as an outcome. To enhance (the focus on) PCC in primary care for groups with low health literacy skills, the expertise of professionals who are familiar with working with and treating these groups from diverse backgrounds could be used (i.e., peer education). Another item mentioned was that when involving patients in their care process, the responsibilities of the patient and HCP need to be clearly defined. Refined programme theory Based on the results of the FGDs, the middle-range PT derived from the international RRR has been refined for the Dutch setting ( ). In this refined PT the context items (C), mechanisms (M), and outcomes (O) that have been added are underlined. The non-underlined items were already included in the middle-range PT. The refined PT demonstrated that to provide a better intensity of support to the patient (O) and optimally align care to the patient (O), it is necessary that HCPs are equipped with the knowledge and skills and are trained and educated (C) to have a holistic focus (M) taking into account the diversity aspect (C), instead of a biomedical, disease-oriented approach (C). Communication (M) tailored to the needs and health literacy skills of the patient plays an important role, as does the availability of tailor-made supporting material for patients (C). By developing these together with the target group (C), they are more likely to match the target group and to contribute to a more active role of the patient (and their family) in the care process (M, O) and in shared decision-making (M). To communicate effectively (M), HCPs should be provided with sufficient time and space (C), also to become aware of the patient’s (social) circumstances (C), discuss the wishes and preferences of patients (M), and work in a culturally competent way (C). As a result, higher satisfaction of patient, informal caregivers and/or HCP(s) (O) can be achieved and the PCC treatment approach (O) can be improved. If several HCPs are involved in the care process, good collaboration within the team (C) and between different domains (C) is desirable to ensure good care coordination (M). These elements can be stimulated by including them in the policy of (care) organisations, wherein attention is also paid to people with low health literacy skills (C).
HCPs having an open, respectful, and empathic attitude (M) plays an important role in establishing a strong therapeutic relationship (M). Patients’ social support networks (C) also help to improve patients’ (psychological) health (O). In addition, better integration between ICT systems (C) and the offering of e-health options and access to documents and recorded consultations (C) play a key role in more accessible care (O). Flexible payment models (C) could facilitate PCC in primary care (O). Next to providing patient education (C), HCPs should provide self-management support to patients (M), stimulating patients’ self-management skills (O), self-efficacy (O) and therapy concordance (O). When goals and action plans are set up together during personalised care planning (C), HCPs and patients have a shared vision (C), and the patient has more confidence to ask questions (C) about the treatment (possibilities) and more insight into the importance of his/her treatment (M), this may lead to improved HRQoL (O). In the long term, higher cost-effectiveness of healthcare (O) and a higher quality of care (O) can be accomplished.
This is particularly the case for low-literate people and migrants, who had difficulty with the (Dutch) language and were therefore limited in their communication options. One participant mentioned that “people still don’t have the guts to say they are illiterate, and that’s just because of the shame associated with it” (P3). Reinforcing patients’ language skills and using interpreters can improve communication.

Consultation time

An important barrier to PCC in primary care, according to the participants, was the consultation time with the GP, which is too short to properly explain their problem. A participant mentioned: “In my own GP practice, I am experiencing the third generation of GPs, I noticed that doctors have less and less time. The consultation really just takes 10 minutes, so you can just ask one question. If you have more questions and your time is up, you will be cut off. It becomes very clear that there is no time left” (P4). Patients often felt unheard or misunderstood because there was insufficient time during the consultation to discuss all relevant matters or to explain everything properly. As a result, the HCP was also unable to provide adequate support based on the patient’s context and to discuss any underlying problems. Participants said: “I would like that he [the GP] gives extra time to people who have difficulties with reading and writing. He [the GP] has knowledge in the medical field, but he should also know which patient have difficulties with reading and writing. Also, it should be pointed out what the rules and regulations are here in the Netherlands compared to other countries [regarding time]” (P5). Patients making a double appointment with the GP could be helpful. Moreover, patients writing down points to discuss at home in preparation for the consultation could contribute to a more efficient use of consultation time. One participant stated that “healthcare is commercialising in such a way that everything is expressed in Euros. The GP would like to take half an hour herself [for the consultation], but the health insurer, which is focused on the money, plays a very important role here. And it’s getting worse, I feel. Sufficient time and attention for the patient are the building blocks of a relationship of trust, and this is at odds with the available time” (P4).

Shared decision-making

Participants experienced that shared decision-making was not conducted properly in practice. Partly because of the short consultation time, the pros and cons of different treatment options were not always explained well by the HCP. Some participants stated that patients’ insufficient insight into the disease and treatment options, as well as the expectation that the HCP is the expert in the medical field, made both parties reluctant to make shared decisions. Therefore, the choice of the HCP often played a decisive role, and the wishes and preferences of the patient often remained underexposed. Overall, participants mentioned: “I really like it when a GP asks you if you want to do something [which is part of care process] and whether you agree [with a treatment plan]” (P6).

Collaboration between HCPs

The collaboration between HCPs (e.g., between practice nurse and GP, or between HCPs in primary and secondary care) could be improved. Participants often experienced that the different HCPs involved in the care process were not always well informed.
As a result, patients often had to repeat their story, at the expense of the limited time available. For example, (electronic) information transfer often fell short and relevant (medical) documents were insufficiently shared. The HCPs involved also often gave conflicting advice, which led to confusion among patients. Better coordination between HCPs of the agreements made and advice given is necessary to provide PCC.

Active role patient

In certain groups, such as people with low health literacy skills, patients often lacked the confidence to ask questions of the HCPs and take an active role for the benefit of their health. This was partly because patients assigned a high status to the GP and placed him/her on a pedestal. These patients often did not want to bother the GP with their questions. In addition, they did not indicate by themselves that they had low (health) literacy skills because of past unfortunate events (e.g., bullying, bad experiences with HCPs ‘not knowing who the patient is’). The patient was also rarely asked by the HCP whether they had low (health) literacy skills, with the result that the HCP had insufficient knowledge about the patient’s background. As one participant stated: “it would be good if the GP knew the background of the patient and what to consider. It is very important that the doctor knows what is going on behind the person in front of him/her” (P7). Solutions for patients having an active role could be to schedule an intake interview for every new patient in the practice; inform other involved HCPs of important characteristics of the patient (e.g., low literacy); and give sufficient room to patients to ask questions, checking whether patients have asked all their questions and whether they have understood the answers. On the other hand, patients can go into the consultation better prepared by writing down their discussion points and questions in advance.
A total of 18 experts received the invitation to participate in the FGDs, of which eleven agreed; the characteristics of the participants are shown in the table.

Quantitative description of consensus level

In round 1, consensus was achieved among experts for 46 out of a total of 63 items (73%). All items were found relevant for the Dutch setting, with the overall median lying in the 7–9 range. Consensus was found on 18 out of the 30 context items (60%), 17 out of 19 mechanisms (89%), and 11 out of 14 outcomes (79%). On 17 items dissensus was found, with a panel median in the 4–6 point range (3 items) or the 7–9 point range (14 items). These items were included in round 2. In the second round, consensus was achieved among experts for 6 out of 17 items (35%): 4 out of 12 context items (33%), 1 out of 2 mechanisms (50%), and 1 out of 3 outcomes (33%). The overall median was in the 7–9 range. For 11 items, the relevance remained undecided; the overall median was in the 4–6 range (5 items) or the 7–9 range (4 items), and 2 items fell equally in the 4–6 and 7–9 ranges. After both rounds, consensus was found for 52 out of 63 items (83%), with all items being considered relevant.

Qualitative description of context items, mechanisms, and outcomes

The outcomes on every context item, mechanism, and outcome of the first and second Delphi rounds are shown in the supplementary files. The items from round 1 that were found to be equivocal were included in the second round.
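To make the consensus counting concrete, the following minimal sketch (our illustration, not the authors’ code) classifies a single Delphi item from its panel ratings on the 9-point scale. It assumes, purely for illustration, that consensus requires at least 70% of ratings to fall in the same 3-point band as the panel median; the study’s exact decision rule may differ.

    # Illustrative Delphi-item classifier; the 70% within-band threshold is an assumption.
    from statistics import median

    def band(score):
        # The 9-point scale split into the bands reported in the paper.
        if score <= 3:
            return "1-3"
        if score <= 6:
            return "4-6"
        return "7-9"

    def classify_item(ratings, threshold=0.70):
        panel_median = median(ratings)
        b = band(panel_median)
        share = sum(band(r) == b for r in ratings) / len(ratings)
        return b, ("consensus" if share >= threshold else "dissensus")

    # Example: ratings from an 11-expert panel for one item.
    print(classify_item([8, 9, 7, 8, 7, 9, 8, 7, 6, 8, 9]))  # -> ('7-9', 'consensus')

Under any such reading, the reported counts are internally consistent: 46 items reached consensus in round 1 and 6 more in round 2, giving the reported 52 of 63 (83%).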
Context items

Based on both rounds, the context items considered relevant for PCC in primary care in the Netherlands at the macro level were shifting the focus from a disease- and complaint-oriented approach to a more holistic approach, using evidence-based guidelines, providing sufficient capacity and time for patients during consultation, offering (more) space and resources to HCPs to experiment, and having flexible payment systems. Participants believed that “experimenting in its broadest sense should be taken into account to improve PCC towards patients” (P10, P13, P18). “For example, if you have patients with a chronic condition and you want them to take more control of their health themselves, and as a care provider you have learned a new conversation technique to be applied during consultation in which you approach the person openly and let him/her decide for themselves what they want to change [in their care process], then you have to have the space to try out the new technique, practice with it, and to improve it” (P16). On an organisational (meso) level, experts found the following items relevant: improving accessibility (e.g., to healthcare organisations, documents, and recorded consultations), good collaboration between HCPs and a shared vision, a supportive policy being in place that strengthens the quality of PCC, especially concerning low health literacy, and better integration between information and communications technology (ICT) systems. Of the latter, a participant mentioned: “Better integration between ICT systems promotes cooperation, care is then better coordinated and it becomes more person-centred. Now everyone works according their own way” (P12). On an individual (micro) level, HCPs having PCC skills (e.g., regarding communication, shared decision-making, and providing culturally sensitive care), possibly acquired through training or during their medical education, was found relevant. In addition, HCPs providing patient education, patients having social support (networks), and patients being involved in organising care were considered relevant. A participant mentioned that “HCPs setting goals and making action plans is also very relevant, because often patients don’t know this by themselves. They often have questions during the consultation, and when the care provider reaches the bottom layer of those questions, you discover why the patient finds that important. Also, other things that are important for the patient emerge” (P10).

Mechanisms

On the meso level, experts found a focus on care coordination and achieving effective collaboration between patient and HCP(s) relevant. On the micro level, it is key that HCPs provide effective communication (e.g., simplifying treatment strategies and information for patients, encouraging patients to ask questions), have an open and empathic attitude, are aware of the patient’s social circumstances, have a holistic focus, respect the wishes and preferences of patients, apply shared decision-making together with patients, provide self-management support, and establish a therapeutic relationship.
Also, the involvement of patients and their family/informal caregivers in the care process was found relevant.

Outcomes

The following outcomes were considered relevant for PCC in primary care: an improved treatment approach with a more accurate intensity of support provided, higher therapy concordance, increased patient involvement, improved (psychological) health outcomes, improved health-related quality of life (HRQoL), higher satisfaction of patient, informal caregiver, and/or HCP(s), an improved relationship between patient and HCP(s), more accessible care, higher quality of care, and higher cost-effectiveness of healthcare. One participant mentioned: “Intensity of the support provided by the HCP is very important as an outcome. You could consider it as a success factor of PCC, it is tailored support to the patient” (P12).
Context items

After two rounds, a lack of agreement was observed on the relevance of some items for PCC in primary care in the Netherlands, such as the application and efficient use of ICT and e-health initiatives. “The information in e-health applications needs to be in line with what the healthcare provider says. Only if the information is in line and explained well, it will reinforce each other, otherwise it will lose its function” (P13). “E-health applications may not work for low-literate people or non-native speakers. Moreover, there are also people that are digitally illiterate” (P14). There was also dissensus on the item having sufficient male and female HCPs per practice, as participants found that “there are people who would like to have a male or a female care provider, it’s nice that people have that choice. But whether you choose a male or female doctor, they both have to provide PCC, regardless of their gender” (P15). Some participants believed that providing better administrative support for HCPs might positively influence PCC but did not consider it relevant to providing PCC: “Providing better administrative support for caregivers can reduce administrative barriers to increase working in a person-centred way. The [consultation] time you can spend on a patient is already limited, so if you can spend less time on administrative things such as electronically saving or capturing what has been discussed with the patient such as setting the goals, you have more time to provide PCC to the patient. But it is not a precondition to provide good PCC and therefore, not relevant” (P16). Regarding the item preparation of consultation by the patient, it was mentioned that “the preparation of a consultation by the patient is not by definition relevant for the provision of person-centred care by the care provider” (P9). “It is nice if a patient prepares a consultation, it can be very helpful. The question is also whether each patient can prepare the consultation, whether he/she is competent enough to do so. Someone who actively thinks about his/her health makes the conversation easier, but it is not a condition for the provision of PCC, that is the task of the care provider” (P8).
About the item patients having a high/low socioeconomic status (SES), some mentioned that “having a high or low SES is not relevant for providing PCC. Most of the time it does require more effort to provide PCC to people with a low SES. But providing care to people with a high SES, such as expats, can also be challenging, as they are not familiar with the systems [in the country], but are highly educated at the same time. SES is not decisive for PCC” (P12, P15).
Principal findings

In this study, the middle-range PT from the international RRR was refined for PCC in primary care in the Netherlands by assessing the level of consensus on the relevance of items derived from the RRR by means of FGDs and a Delphi panel. Based on the FGDs, several items were added to refine the PT. The context items that were added concern HCPs being aware of the patient’s (social) circumstances, working in a culturally competent way, HCPs and patients having a shared vision and setting up goals and action plans together, patients having more confidence to ask questions, providing tailor-made supporting material, developing supporting material and tools together with the target group, better integration between ICT systems, providing patients access to documents and recorded consultations, and flexible payment models being in place. No mechanisms were added. Outcomes that were added include better alignment of care to the patient, having accessible care, improving the patient’s self-efficacy, improving HRQoL, higher cost-effectiveness of healthcare, and a higher quality of care. One item, improved health system outcomes (an outcome), was excluded from the middle-range PT because not all FGDs found it relevant for PCC in primary care in the Dutch setting. This study makes clear that sufficient attention needs to be paid to the complex interplay of the context items, mechanisms, and outcomes concerning PCC in primary care in the Netherlands. Bypassing this complexity will most likely not lead to the desired effectiveness of PCC in primary care. The use of all items in their mutual coherence is necessary to truly realise PCC.

Strengths and limitations

One of the strengths of this study is the combined use of FGDs and the Delphi method.
The participation of both patients with low (health) literacy levels, a group often thought of as hard to reach, and primary care professionals increases the face validity of the results of this study. A possible limitation concerns the limited number of FGDs. It is suggested to conduct two to three FGDs to capture 80% of themes, and three to six groups for 90% of themes. However, data saturation seemed to be reached, as the second and fourth FGDs yielded no new items beyond those mentioned in the first and third FGDs. Also, there were no specific inclusion criteria for the participants of FGD 1 and 2; these participants were recruited through convenience sampling. A third limitation to be considered is that the group moderators of FGD 3 and 4 were not impartial to the study. Nevertheless, they only moderated the discussion and did not share their own opinions.

Comparison to previous studies

Consistent with our refined PT, studies have found that in order to deliver effective PCC, the patient’s wishes, needs, and abilities need to be taken into account to align care to the patient. Also, HCPs should stimulate patients to set and achieve their own treatment goals, and access to care should be optimised. The importance of providing tailored supporting materials, working in a culturally competent way, and the self-efficacy of the patient has also been reported. Individualised care plans, physical comfort at the GP practice, and providing patients emotional support were also mentioned in the literature, but not found in our study.

Implications for practice and research

Given the complexity of the interplay of all items, it is recommended that healthcare organisations develop and implement an all-encompassing approach and divide the approach into phases to make it manageable. During the first phase (initiation), HCPs need to acquire relevant knowledge and skills through education and training, and patients need to be aware of their role in their care process and of their social support networks. In the second phase (decision & adoption), adjustments regarding the healthcare system, policy-making, financing issues, integration between ICT systems, and the creation of sufficient experimental space, time, and resources are made concrete. In the third phase (execution), the focus is on implementing good collaboration between HCPs and the provision of self-management support, patient education, and shared decision-making, whereby information and communication should be simplified. In the fourth phase (monitoring & evaluation), it is necessary to gain insight into (unexpected) problems and challenges, to find out to what extent the intended results and effectiveness are being achieved, and to meet the needs for resources. With respect to further research, it is recommended to assess how and to what extent the items have been collectively implemented and to evaluate how effective PCC is in practice, for whom, how, and why. Also, the items on which dissensus was found need to be examined further to understand why they were considered less relevant for the Dutch setting. Our understanding of PCC is likely to increase (faster) when applying realist research iteratively and in different settings.
This study shows that for PCC to be effective in primary care, the complex interplay of the context items, mechanisms, and outcomes deemed relevant to a setting must be met. The items added to refine the PT for the Dutch primary care setting indicate that, to optimally align care to the patient, not only tailored communication but also tailored supporting material developed together with the target group is key. HCPs and patients need to have a shared vision and set up goals and action plans together. HCPs should stimulate the patient’s self-efficacy, be aware of the patient’s (social) circumstances, and work in a culturally sensitive way. Better integration between ICT systems, flexible payment models, and patient access to documents and recorded consultations should be in place. In the long term, higher cost-effectiveness and a higher quality of healthcare can be realised when sufficient attention is paid to the interplay of the relevant context items, mechanisms, and outcomes.

S1 File: Topic guide for FGD 1 and 2. (PDF)
S2 File: Delphi questionnaire for FGD 3 and 4. (PDF)
S3 File: Results Delphi round 1. (PDF)
S4 File: Results Delphi round 2. (PDF)
Severe asthma in children
Introduction

Asthma is the most common chronic lower respiratory disease; it commonly begins in childhood and has a wide range of symptoms and phenotypes that can progress or subside over time. Only 2 to 5% of children have severe asthma; however, its burden on the economy and on resource usage is significant. Even though most asthmatic children can be effectively treated with currently available drugs, many asthmatic children have difficult-to-treat asthma (DTA). Much remains unknown about the optimal strategies for treating these patients. Unlike adults, children with severe asthma have higher total serum immunoglobulin E (IgE), increased blood eosinophils, and multiple aeroallergen sensitizations. The main comorbidities detected in pediatric patients were bronchial hyperresponsiveness (BHR) and decreased lung function. The Saudi Pediatric Pulmonology Association (SPPA) Pediatric Severe Asthma Task Force includes clinicians with expertise in severe asthma, representing most Saudi health authorities. The task force decided to write a consensus on the definitions, phenotypes, pathophysiology, evaluation, and management of severe asthma, with specific recommendations for practice. The methods employed in this document to develop clinical recommendations follow local and worldwide guidelines. The task force provides the basis for rational decisions in managing severe asthma according to international standards.

Methods

The task force consisted of 14 invited pediatric asthma experts. The subject was initially subdivided into several topics, and at least 2 specialists were selected for each topic. Topic writers carried out their own literature searches and created their own databases based on the results of those searches. No formal grading of the evidence or of the recommendations was attempted. Experts were provided opportunities to have their ideas heard and considered by their peers through the nominal group technique (NGT), an organized face-to-face group interaction. The literature search was completed as of July 2021, and the findings were presented. Two virtual sessions were held, in April and July 2021, in which the experts provided draft reports and received feedback from the rest of the panel. The whole panel examined and discussed the recommendations and supporting evidence during these meetings. A consensus, defined as majority approval, was necessary for the recommendations to be approved. The recommendations were updated several times until everyone agreed with them. Although the guidelines, medications, and technologies on the market varied, the panel made an effort to produce a consensus statement that would be applicable worldwide.

Definition

The severity assessment of asthmatic children in the clinical setting is essential, as it guides the management plan and determines the need for referral to a specialist. The asthma severity assessment is dictated by the treatment step needed to control the patient’s symptoms. Even though severe asthma has multiple definitions, the differences between them are subtle. The recent guidelines by the European Respiratory Society/American Thoracic Society and the Global Initiative for Asthma define it as asthma that requires step 4 or 5 therapy (high-dose inhaled corticosteroids [ICS] plus a second controller) to be controlled, or asthma that remains uncontrolled despite this therapy. A study found that 4.5% of children diagnosed with asthma had “severe asthma,” corresponding to an estimated population prevalence of 0.5%.
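As a rough consistency check (our illustration, not a figure from the statement), these two numbers imply an overall childhood asthma prevalence p of about 11%, since 4.5% of asthmatic children corresponds to 0.5% of all children:

\[
0.045 \times p = 0.005 \;\Rightarrow\; p = \frac{0.005}{0.045} \approx 0.11 \approx 11\%
\]

This is of the same order as the regional prevalence estimates cited in the next section.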
Difficult-to-treat asthma is defined as uncontrolled asthma related to a poor inhaler technique, suboptimal adherence to therapy, untreated modifiable factors, or an incorrect diagnosis of asthma. Labels such as “refractory asthma” and “treatment-resistant asthma” are no longer appropriate with the emergence of biological therapies.

Burden and epidemiology of severe asthma

Asthma remains a prevalent global health and socio-economic problem, despite several decades of progress in asthma management. Risk factors for severe asthma have been identified on the basis of several epidemiological studies. Although severe asthma in children can first present at school age, it tends to start earlier (in the first 3 years of life) than mild-to-moderate asthma, in which symptom onset is relatively later (at 5 years of age or later). Babies with lower lung function shortly after birth, assessed by maximal expiratory flow at functional residual capacity (VmaxFRC), have a higher risk of severe childhood asthma. Atopic dermatitis, bronchial hyperresponsiveness, airway obstruction, high fractional exhaled nitric oxide (FeNO), and African American race are all risk factors for severe childhood asthma. There are insufficient data on severe asthma in childhood in Saudi Arabia. Recent investigations in Saudi Arabia demonstrated that, between 1986 and 2017, the prevalence of childhood asthma varied across the country, from 9% in the Southern region to 33.7% in the Eastern region. According to the Saudi Initiative for Asthma, children in Saudi Arabia have an asthma prevalence of 8 to 25%. This discrepancy could be explained by the different surveying methods used to assess prevalence or by the different age groups that were assessed. Approximately 30% of Saudi Arabian citizens are under the age of 15, and 68% are between 15 and 64. As a result, childhood asthma is likely to remain a serious public health concern in Saudi Arabia. Chronic symptoms, acute exacerbations, and drug side effects are common in patients with severe asthma. Patients with severe asthma may experience disruptions in their ordinary activities, sleep, physical activity, social life, and mental health. Severe asthma also has a significant financial impact on society. The total cost of asthma in the United States in 2013, based on the pooled sample, was $81.9 billion, including expenses associated with absenteeism and mortality. The most significant drivers of direct costs were found to be hospitalization and drugs. Controlling asthma can not only improve health but also reduce hospital costs and increase productivity. More research is needed to determine the prevalence of severe childhood asthma and its burden on the healthcare system in Saudi Arabia.

Pathogenesis of severe asthma

Asthma has long been known as an eosinophilic airway inflammatory disease linked to BHR. Indeed, the quantity of eosinophils in the lungs correlates with the severity of the disease and has been used to classify clinical phenotypes and guide treatment in severe asthma. The immunopathogenesis of severe asthma differs from that of mild-to-moderate asthma, with significant differences in the immune response and in the extent and type of subsequent inflammatory cytokine production. Another subset of severe asthma is glucocorticoid-resistant asthma, which occurs through multiple pathophysiological mechanisms.
The inflammatory cascade in severe asthma is mainly driven by T-helper 2 cell (Th-2) activation and the release of Th-2-related cytokines, predominantly interleukin-4 (IL-4), IL-5, and IL-13. The extent of expression of these cytokines correlates with asthma severity. Moreover, severe asthma is associated with inflammatory responses by other T-helper cells, namely Th-17 and Th-1. The Th-17 response is initiated by IL-6 and maintained by IL-23, which releases IL-17, enhancing neutrophilic production. Interferon-gamma (IFN-g) is another cytokine that has been implicated in severe asthma; it is released through the activation of Th-1 cells. Finally, innate immunity, specifically innate lymphoid cell type 2, has a major role in severe asthma pathogenesis. Innate lymphoid cell type 2 mediators include thymic stromal lymphopoietin (TSLP), IL-25, and IL-33. IL-33, in particular, is linked to severe asthma and changes in the airways. Glucocorticoids (GC) act by binding to the glucocorticoid receptors (GR) in the cytoplasm, forming a complex that binds to the DNA and causes an anti-inflammatory effect. There are 2 subtypes of glucocorticoid receptors, GR-α and GR-β. Normally, GC binds to GR-α to elicit the anti-inflammatory reaction. Unlike GR-α, GR-β does not bind to GC and acts as a weak dominant-negative inhibitor of GR-α. Reduced GR-binding affinity, GR-β overexpression, decreased histone deacetylase (HDAC) activity, and genetic predisposition all contribute to GC-resistant asthma. The reduction in GR binding affinity has been linked to the expression of IL-4 and IL-2, while the evidence remains inconclusive regarding the relationship between GR-β overexpression and severe asthma. Alternatively, the reduction of HDAC activity is linked to phosphorylation by a phosphoinositide 3-kinase that is activated by oxidative stress. GC-resistant asthma might also have a genetic basis, with speculation that it is linked to certain genetic variants, in particular the glucocorticoid-induced transcript 1 gene (GLCCI1).

Type 2-low asthma

In the pediatric population, T2-low asthma is less common than T2-high asthma and is not yet fully understood. In moderate-to-severe asthma patients, T2-low asthma is marked by the activation of Th-1 and Th-17 cells. These patients are usually older, less prone to allergies, and less responsive to corticosteroids. There has been little progress in the research of therapeutic medications for T2-low asthma, although promising efficacy of azithromycin and bronchial thermoplasty has been reported.

Type 2-high asthma

Both allergic and non-allergic eosinophilic asthma are classified as T2-high asthma. In allergic asthma, IgE-dependent mechanisms are crucial, while non-allergic asthma may be dominated by T2 cytokine inflammation. In T2-high asthma, IL-33, IL-25, and thymic stromal lymphopoietin are activated by the interaction between the airway epithelium and pollutants, inhaled allergens, and microorganisms, leading to further activation of IL-4 and IL-5, which enhance the upregulation of vascular endothelial adhesion receptors and participate in the maturation and survival of eosinophils, respectively. Stimulation of the prostaglandin D2 receptor causes eosinophils to be attracted to the lung mucous membrane. Inflammation of the bronchial epithelium causes bronchial obstruction and leukotriene generation.
Immunoglobulin E is produced in B cells under the influence of IL-4, and IgE binds to mast cells, triggering the release of cytokines and eicosanoids that promote airway inflammation. Airway smooth muscle hypersensitivity and mucus hypersecretion are also linked to IL-13. The response to biologics can be predicted by the sputum and blood eosinophil counts, serum periostin, and IgE.

Evaluation of severe asthma in children

The evaluation of a patient with severe asthma can be challenging. Severe asthma is a heterogeneous and dynamic disease, and a careful approach with temporal observation and follow-up is paramount. With that being said, the main objective of the evaluation is to tease out other causes of problematic asthma, as seen in Figure 2, so that patients with “true” severe asthma can be identified, as most patients who present with what is labeled as “severe” asthma end up not having it. The evaluation is usually carried out in specialized centers with a dedicated multidisciplinary team. Each member has a clear role, usually with predetermined forms and checklists; this ensures a uniform evaluation and decreases interobserver variation. Some centers also add a home visit to the evaluation. The evaluation approach for severe asthma can be simplified into 3 steps:

Confirm diagnosis of asthma

Full clinical evaluation: This is the best and most affordable technique to diagnose asthma. As many asthma mimickers are classified as “severe asthma,” a thorough history is required, and the clinical history should be documented. Chronic obstructive pulmonary disease is a common cause of difficulty breathing (DOB). Similarly, not all wheezes are expiratory noises or indicate airway obstruction; many people use the phrase “wheeze” to describe any noisy breathing or even DOB.
Furthermore, it is worth noting that direct translation may affect the accuracy of history taking. For example, many patients have variable bedtimes during holidays, so a “night” cough might actually occur during the day. Try to identify symptoms or events that suggest alternative diagnoses. A child “wheezing” since birth, with a year-round wet cough, not responding to bronchodilators, and coughing exclusively during wakefulness is most likely not asthmatic. A detailed physical examination should follow, with a focus on the symptoms and signs of other illnesses, such as aches and pains. A child with failure to thrive, stridor, and crackles (among other findings) is most likely not asthmatic. Based on the detailed history and physical examination, a personalized action plan will be put in place; this plan specifies which tests, investigations, and interventions need to be carried out.
Pulmonary function tests: These are used to assess the patient’s degree of airflow limitation, response to bronchodilators, lung volumes, and air trapping, among others. Spirometry should always be carried out; in severe pediatric asthma, an elevated bronchodilator response may be linked with impaired lung function (a simple calculation of bronchodilator response is sketched after this section). , Prolonged bronchodilator reversibility (BDR) may be linked to poor medication compliance or incorrect inhaler technique and may be indicative of a favorable response to ICSs. Bronchoprovocation tests (such as the methacholine challenge test) can be used if the asthma diagnosis is in question.
Other investigations: The European Respiratory Society suggested that children between the ages of 5 and 16 who are suspected of having asthma should be tested for FeNO. shows a wide range of tests for severe asthma diagnosis.
Check barriers for asthma control
Adherence: Poor adherence to severe asthma medications is common, and it may be difficult to identify non-adherent patients. Therefore, it is crucial to ask patients about their compliance at every clinic visit and to cross-check this against prescription refills, medication counters, and caregiver feedback.
Techniques: A very important factor in poor response to medication is an inappropriate technique for administering the medications. Children should never use a metered-dose inhaler directly through the mouth. Proper use of valved holding chambers (spacers) is necessary, and it should be reviewed at every visit.
Environment: Both home and school environments should be checked for possible indoor allergens triggering asthma, such as house dust mites, pets, mold, and smoking. Outdoor exposures such as air pollutants, sandstorms, and plants contribute to the poor control of severe asthma.
Exclude comorbidities
Comorbidities are often linked to asthma severity and may contribute to poor control. In the Severe Asthma Research Program III cohort of children, body mass index, gastroesophageal reflux disease (GERD), and sinusitis were significantly linked with exacerbation frequency; hence, identifying and controlling GERD is critical. Allergic rhinitis, adenoid hypertrophy, and obesity should also be controlled. shows a list of common comorbidities that need to be looked for.
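To make the bronchodilator-response arithmetic referenced above concrete, here is a minimal Python sketch. The ≥12% positivity cut-off is a widely cited spirometry convention and is our assumption, not a figure taken from this review:

```python
def bronchodilator_response(fev1_pre: float, fev1_post: float) -> float:
    """Percent change in FEV1 after bronchodilator, relative to baseline (litres in, percent out)."""
    return (fev1_post - fev1_pre) / fev1_pre * 100.0

# Example: FEV1 improves from 1.50 L to 1.74 L after bronchodilator
change = bronchodilator_response(1.50, 1.74)
print(f"BDR: {change:.1f}%")                       # BDR: 16.0%
print("positive" if change >= 12 else "negative")  # >=12% is a common cut-off (assumed here)
```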
Biological treatment of severe asthma in children
There are now 5 biological drugs: Omalizumab, which binds free IgE and prevents it from engaging the high-affinity IgE receptor; Mepolizumab and Reslizumab, which bind IL-5; Benralizumab, which binds the IL-5 receptor α subunit; and Dupilumab, which binds the IL-4 receptor α subunit and so blocks both IL-4 and IL-13, . Tezepelumab, a TSLP-binding antibody, is now in phase 2B studies. Only 2 (Mepolizumab and Omalizumab) have been approved for use in children with asthma, while Dupilumab has been approved for use in children with atopic dermatitis. , Patients should be selected carefully: are they asthmatics who could be controlled with low-dose ICS if used effectively, in which case the Th2 endotype is likely to be critical, or do they have true severe therapy-resistant asthma (STRA), in which case numerous endotypes are likely to be important? The first priority is to establish who should be given Omalizumab and who should be given Mepolizumab, as these are the 2 biologicals approved for use in children. ,
Omalizumab
It is used to treat severe allergic asthma that does not respond to high doses of corticosteroids, and it is also used to treat chronic spontaneous urticaria in some cases. Omalizumab, approved for use in pediatric severe asthma, acts by binding peripheral blood IgE and preventing its binding to the IgE receptor (FcεRI) on the surface of basophils and mast cells, thereby inhibiting the release of pro-inflammatory mediators. Furthermore, Omalizumab indirectly inhibits the upregulation of FcεRI. Unlike typical anti-IgE antibodies, it does not bind IgE that is already bound to FcεRI on the surface of mast cells, basophils, and antigen-presenting dendritic cells. For children with allergic asthma and increased serum IgE, Omalizumab can be prescribed as an additional therapy. It has been reported that Omalizumab has favorable outcomes in asthmatic children with high peripheral eosinophil counts, elevated serum periostin, FeNO >20 ppb, and multiple allergic comorbidities. However, there are insufficient data on the prediction of Omalizumab therapy response by validated biomarkers in children; therefore, further investigations are necessary. , The effectiveness and safety of Omalizumab in pediatrics have been demonstrated by many randomized controlled trials. , Pediatric trials have notably shown that the frequency of asthma attacks, hospitalization, and the necessity for oral corticosteroids (OCS) is lowered with Omalizumab. , In addition, Omalizumab greatly enhanced patients’ asthma management and quality of life (QOL). Finally, the number of seasonal exacerbations in patients who received Omalizumab was lower than that of controls. Many studies have shown that Omalizumab is generally well-tolerated in children and adolescents. - In 10 studies with 3261 patients, Omalizumab was associated with a significant reduction in asthma attacks (OR [odds ratio]=0.55, 95% confidence interval [CI]: 0.42-0.60), with an absolute reduction rate of 16% to 26%. Moreover, hospital admission was observed to be reduced in 4 studies with 1824 patients (OR=0.16, 95% CI: 0.06-0.42), with an absolute reduction rate of 0.5% to 3%. Serious or life-threatening conditions related to the medication, such as anaphylaxis, have been observed in 0.2% of adolescents who received Omalizumab; however, such events have not been reported in children.
The most common side effects described in the literature are skin reactions and pain at the injection site, which usually resolve quickly. In addition, there is no evidence that Omalizumab is associated with an elevated risk of cancer. Nevertheless, longitudinal studies in children are still needed to confirm this favorable safety record.
Mepolizumab
Mepolizumab is licensed as an add-on maintenance medication for severe eosinophilic asthma in patients with an eosinophilic phenotype and a history of asthma exacerbations. Adults and children over 12 should take 100 mg, while children aged 6-11 should take 40 mg. Despite the lack of standardized response criteria, clinical and laboratory indicators have been proposed as prediction tools. Changes in FEV1 and a blood eosinophil count of ≥300 cells/µL are now considered measures of responsiveness to Mepolizumab therapy. Furthermore, clinical predictors of response to therapy include improvements in QoL, exacerbations, and physical fitness. Mepolizumab was studied in patients with eosinophilic asthma who did not respond to medication in 2 trials that demonstrated a significant reduction in asthma exacerbations. Mepolizumab also demonstrated a considerable decline in OCS use and a notable enhancement in patients’ symptoms and lung function. , Further studies are needed to assess the role of Mepolizumab in children less than 12 years of age. Mepolizumab had a good safety profile and was shown to be well-tolerated in placebo-controlled trials. , Respiratory infections, reactions at the injection site, fatigue, headaches, and asthma exacerbations were the most frequently described side effects.
Dupilumab
It targets the IL-4 receptor α subunit, thereby blocking IL-4 and IL-13, cytokines released by CD4+ Th2 cells that enhance the generation of IgE and the recruitment of inflammatory cells. , Moreover, levels of T2 inflammation markers such as FeNO, eotaxin-3, and IgE demonstrated a significant reduction with treatment. , , In cases of moderate-to-severe asthma, it also enhances lung function. Peripheral blood eosinophil counts and FeNO are efficient indicators of therapy response. Dupilumab is now licensed for adolescents and adults with moderate to severe asthma that is oral corticosteroid-dependent or has an eosinophilic phenotype. , Dupilumab has a good safety record, with injection site reactions and transient blood eosinophilia being the most prevalent side effects, and it is now being assessed by the Food and Drug Administration.
Selection of biologics for severe asthma
The best biological drug cannot be determined because there are no direct head-to-head comparisons between them. In selecting a biologic, it is essential to consider the medication’s mechanism of action, comorbidities, drug cost, atopic status, serum IgE and FeNO levels, and blood eosinophil counts. Omalizumab may be prescribed first for allergic asthma patients. Anti-IL-5 therapy may be considered as a first-line treatment for eosinophilic asthma patients with a history of exacerbations. Dupilumab may be used first in severe asthmatic individuals with atopic dermatitis. Factors including inflammatory biomarkers, exacerbations, symptom onset, and associated allergic tendencies have been suggested as the basis of a strategy for finding appropriate biologics. Furthermore, the algorithm must be updated regularly, taking into account recent research findings on outcome predictors and drug development.
Additionally, using adult data and applying it to pediatric populations with asthma should be avoided, and more pediatric clinical studies are needed to accurately define the usage of biological therapy in severely asthmatic children.
Other medications used for severe asthma
Systemic corticosteroids
Several studies have shown that short-term OCS therapy (3 or 5 days) can reduce the intensity and duration of an asthma exacerbation in children. Oral corticosteroid therapy can be given to some children and adolescents for longer than a month, daily or on alternate days. Despite being recommended in asthma guidelines, “maintenance” OCS has little evidence of effectiveness. OCS use is known to cause side effects in children over short periods (sleep disturbance, vomiting, and behavior change) and over intervals longer than 14 days (susceptibility to infection, cushingoid features, growth retardation, and weight gain). ,
Intramuscular triamcinolone
Intramuscular triamcinolone therapy may help identify steroid-responsive asthma and treat severe asthma.
The evidence is limited to case series using various dosages of triamcinolone. - A study showed that triamcinolone therapy reduced blood eosinophil counts and FeNO. The relative failure of triamcinolone in non-severe asthmatic children is likely owing to adequate baseline FEV1, mild symptoms, and limited sample size. Another study evaluated symptoms and physiological responses one month following triamcinolone administration. The Asthma Control Test showed better symptom scores and spirometry in children who received triamcinolone. Treatment decreased sputum eosinophilia, FeNO, and intensive care unit hospitalizations, but only in white children. Triamcinolone, like other asthma medications, has a variable response. For children and adolescents, it is appropriate to start a brief trial of triamcinolone therapy to see if symptoms respond to steroid treatment. If after 2 months of treatment there is no improvement, or adverse effects arise, treatment may be terminated.
Impact of severe asthma on children’s quality of life
Severe asthma control and quality of life have also been shown to be linked, according to many studies. Research carried out in the United States found that patients with severe asthma who had insufficient management of their condition had clinically significant levels of behavioral problems. Another study found that the prevalence of emotional and behavioral problems among asthmatic adolescents was 20.6%, compared to 9% for non-asthmatic adolescents. In addition, anxiety, depression, and behavioral changes are more prevalent in uncontrolled asthma. , Banjari et al showed that among 106 Saudi children with severe asthma, 84% had poor asthma control. Children with uncontrolled asthma had a significantly worse quality of life ( p <0.001).
The psychological well-being of children with and without asthma control was comparable ( p =0.58); however, both groups were negatively impacted. Therefore, they concluded that psychosocial well-being should be measured during clinic visits in order to take a more holistic approach and enhance outcomes.
Requirements
Severe pediatric asthma service goals
Proper assessment, enhancing self-management, controlling triggers, reducing comorbidities, and providing opportunities for high-quality research and training are essential. The assessment of severe asthma might be complicated by misdiagnosis and symptom misattribution. Therefore, objective confirmation of an asthma diagnosis by demonstrating the defining characteristics of asthma is required. Many tests can be used to achieve this, including tests of airway hyperresponsiveness, bronchodilator responsiveness, and variability of airflow over time. - Airway hyperresponsiveness can be measured using hypertonic saline, mannitol, or methacholine, while bronchodilator responsiveness can be assessed using pre- and post-bronchodilator spirometry. , In terms of airflow variability, peak expiratory flow readings or serial spirometry can be used. After confirming an asthma diagnosis, it is crucial to look for potential aggravating or coexisting factors that could make asthma management more difficult. Another important goal is to enhance self-management skills, which can directly improve asthma control. - Self-monitoring, inhaler technique, a written action plan, and medication adherence are critical skills for asthma management that should be targeted in a severe asthma service. , Studies showed that early optimization of these skills is essential to achieve adequate control. In addition, it is critical to identify and assess potential trigger factors. Allergens, industrial pollutants, cigarettes, and recurrent infections are all triggers. Asthma control can be improved by removing these triggers. A severe asthma clinic’s structured multidisciplinary approach provides high-quality training for various healthcare providers.
The role of the multidisciplinary team
The minimum required team to run a severe asthma service includes a pediatric pulmonologist, pediatric nurse, and respiratory therapist. Further team members necessary for multidisciplinary care include a speech pathologist, dietitian, physiotherapist, psychologist, gastroenterologist, pharmacist, and administrative support. Many specialties are required to confirm the diagnosis, including a respiratory physician, pulmonary function scientist, and radiographer. Optimizing self-management requires respiratory physicians and nurse specialists. Regarding the treatment of asthma triggers and comorbidities, pharmacists, respiratory physicians, advanced trainees, nurse specialists, dietitians, psychologists, gastroenterologists, sleep physicians, and physiotherapists are required. Each clinic should conduct a multidisciplinary case review meeting to evaluate patient progress. These meetings will improve the team-based approach and increase the skills of the clinicians, which in turn will enhance patients’ outcomes.
Facilities
A proper location to administer drugs such as Omalizumab is also required. Adrenaline and other vital life-saving supplies need to be readily available at this location in the event of a medical emergency. Pharmacies should be close to doctors or an emergency response team.
Telephone support should be available at all times in order to provide timely management of acute exacerbations or treatment-related adverse effects. Senior nursing professionals, advanced trainees, or registrars can provide this support in consultation with the respiratory physician. The use of a conference room with access to healthcare information for a multidisciplinary case review is highly suggested. To confirm the diagnosis, clinics, pulmonary function laboratories, and medical imaging are needed. Regarding the treatment of asthma triggers, a sleep laboratory, rapid access clinic, drug administration clinic, facilities for aspirin-sensitive asthma desensitization, sputum clearance devices, and thoracic radiology are required.
Conclusion
The severity assessment of asthmatic children is dictated by the treatment step needed to control the patient’s symptoms. The evaluation approach to severe asthma can be simplified into 3 steps: i) confirm the diagnosis of asthma, using full clinical evaluation, pulmonary function tests, psychosocial assessment, and other investigations; ii) check barriers to good control, such as poor adherence, poor technique, and an unsuitable environment; and iii) exclude comorbidities that are significantly associated with exacerbation frequency. The best biological drug cannot be determined because there are no direct comparisons between them, and there are no efficient biomarkers for predicting or monitoring treatment response. Regarding service requirements, multifactorial services, including proper assessment, enhancing self-management, controlling triggers, reducing comorbidities, and providing opportunities for high-quality research and training, are essential. The minimum required multidisciplinary team to run a severe asthma service includes a pediatric pulmonologist, pediatric nurse, and respiratory therapist. Further team members necessary for multidisciplinary care include a speech pathologist, dietitian, physiotherapist, psychologist, gastroenterologist, pharmacist, and administrative support. Finally, further epidemiological studies are required to assess the prevalence of severe asthma in Saudi children and identify the regular clinical practice used in primary healthcare centers in Saudi Arabia.
Antibiotic expectation, behaviour, and receipt among patients presenting to emergency departments with uncomplicated upper respiratory tract infection during the COVID-19 pandemic
bac6133b-d75f-4dbc-a61d-8489c5eae905
9998126
Patient Education as Topic[mh]
Introduction
The rise of antimicrobial resistance (AMR) has been a long-standing threat to public health . Bacteria that become antibiotic-resistant can cause human infections leading to higher medical costs, decreased work productivity, and increased mortality . Antibiotic misuse, both from inappropriate prescribing by health care providers and overuse by the public, drives the development of AMR, necessitating an urgent need for behavioural change in antibiotic use to slow its progression . The emergence of AMR, rendering antibiotics ineffective, will outstrip the pace of development of new antibiotics and lead to a post-antibiotic era if no action is taken , . Some antimicrobial stewardship programmes, such as delaying or shortening the duration of antibiotic prescription, have effectively reduced antibiotic use in the inpatient setting , . However, such programmes are under-established in ambulatory care, where there is greater patient involvement in shared clinical decision-making. Interventions targeting patient education have shown only minor effects in reducing antibiotic prescribing . One study found that providing patients with information on the efficacy and side-effects of antibiotics reduces but does not eliminate clinically inappropriate expectations and requests for antibiotics . Despite the already lacklustre progress in tackling AMR before the COVID-19 pandemic, the focus on pandemic response during the pandemic further disrupted actions against AMR . Uncertainties surrounding the pandemic are changing patients’ health-seeking behaviour and may either shift their focus away from antibiotics or increase their expectations for receiving them. Before the COVID-19 pandemic, patients’ expectations for antibiotics often contributed to physicians’ decisions to prescribe antibiotics. Experimental evidence from the United Kingdom showed that physicians were more willing to prescribe antibiotics if they believed that patients expected them, even if they thought the probability of a bacterial infection was low . Patients’ expectations for antibiotics stem from their socio-cognitive knowledge, attitudes, and beliefs on the indications of antibiotics . Studies have observed that lower education level, perceived severity of illness, previous positive experiences with antibiotics, history of antibiotic misuse, and the belief that antibiotics are effective are predictors of antibiotic expectation for upper respiratory tract infections (URTIs) , , . During the COVID-19 pandemic, emergency departments (EDs) worldwide, including in Singapore, experienced surges in attendance for acute respiratory illness, which accentuated the problem of overcrowding in EDs , , , , . Although uncomplicated URTI should not be managed in EDs, the uncertainties surrounding COVID-19 management and the public's lack of understanding about COVID-19 could have changed health-seeking behaviour and influenced patients’ expectations for antibiotics when seeking care for URTI in the ED. Hence, we assessed the factors associated with the expectation for and receipt of antibiotics for uncomplicated URTI in four adult EDs in Singapore during the COVID-19 pandemic. We also assessed the reasons for their expectations and their behaviour surrounding the use of antibiotics.
Materials and methods
2.1 Study design and setting
We conducted a cross-sectional study on adults seeking medical care at the ED for uncomplicated URTI.
Our study included EDs in four acute hospitals (Changi General Hospital, Khoo Teck Puat Hospital, National University Hospital, and Tan Tock Seng Hospital), covering all three healthcare clusters in Singapore.
2.2 Participants
We recruited 681 adults who attended the EDs with a diagnosis of URTI (ICD-10 J00-J06) between March 2021 and March 2022. Patients were asked to complete a survey questionnaire post-consultation; we excluded hospitalised patients and patients with multiple attendances to the ED within 30 days for the same illness to omit possible complicated URTI cases. These exclusion criteria were verified through review of electronic medical records prior to recruitment. We initially excluded COVID-19 suspects from the study because of a default hospital admission policy but included them after the national policy was revised in July 2021. Since then, the Singaporean government has advocated home recovery for COVID-19, as most of the population has been fully vaccinated against COVID-19 and the illness is predominantly mild. Study recruitment was suspended at one study site (from May 2021) because of operational restrictions in response to the ramp-up in COVID-19 response in the ED.
2.3 Questionnaire
We collected information on the patient's demographics (age, sex, race, nationality, education level), health status (vaccination status, illness symptoms, smoking status, Charlson's co-morbidity index), health-seeking behaviour (reasons for the ED visit, prior healthcare consultation for the same illness episode, payment method), and their expectations, knowledge, attitudes, and behaviour (KAB) on the use of antibiotics (Supplementary Table S1). The attitude and behaviour questions were measured on a five-point agreement Likert scale. We adapted the KAB questions on antibiotics from a literature review and a priori knowledge from our previous research . The questionnaire was interviewer-administered to ensure interpretation consistency across all participants. All data collectors were independent of the patients’ care team and were trained to minimise bias in data collection.
2.4 Analysis
The outcome variables of interest are whether the patient 1) expected and 2) received an antibiotic prescription during the ED visit. We performed descriptive statistics to assess the differences between patients expecting antibiotics and patients prescribed antibiotics during the ED visit. We considered a positive vaccination status as follows: influenza vaccination within 12 months; ever had a pneumococcal vaccination; and at least a week after two doses of COVID-19 vaccination. Charlson's co-morbidity index (CCI) was computed and classified into three categories (no co-morbidity, CCI 0; mild, CCI 1–2; moderate/severe, CCI >2). We considered patients to have poor knowledge of antibiotics and AMR if they answered correctly ≤4 out of the 10 knowledge questions; moderate knowledge if they answered correctly 5 to 7 questions; and good knowledge if they answered correctly ≥8 questions.
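As a concrete illustration of the cut-offs described above, a minimal Python sketch of the knowledge and CCI classifications (function names are ours, for illustration only, not from the study):

```python
def knowledge_level(n_correct: int) -> str:
    """Classify antibiotic/AMR knowledge from the number of correct answers out of 10."""
    if n_correct <= 4:
        return "poor"
    if n_correct <= 7:
        return "moderate"
    return "good"

def cci_category(cci_score: int) -> str:
    """Map a Charlson's co-morbidity index score to the study's three categories."""
    if cci_score == 0:
        return "no co-morbidity"
    if cci_score <= 2:
        return "mild"
    return "moderate/severe"

print(knowledge_level(6))  # moderate
print(cci_category(3))     # moderate/severe
```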
We first performed univariate analyses to assess the differences between categories in the outcome variables to inform variable selection for the subsequent multivariable model. Next, we explored the independent factors associated with antibiotic expectation and the receipt of antibiotics using multivariable logistic regression by adding and dropping variables from an initial model. The best model was chosen based on likelihood ratio tests of nested models and the lowest Akaike's Information Criterion (Supplementary Tables S2A and S2B). We then present an anchored divergent graph on the reasons for expecting antibiotics and elaborate on other reasons that were not part of the Likert scale items. In addition, we performed principal components analysis to classify the antibiotic use behaviours (Supplementary Table S1). Likert items with smaller coefficients were removed stepwise while optimising the total variance explained (the higher the better) and the internal consistency (Cronbach's alpha) of each factor. Ungrouped behaviour statements were also dropped from the analysis. All analyses were performed with Stata version 15.0 (StataCorp LP, College Station, TX) and RStudio version 2022.02.3 (RStudio, PBC, Boston, MA).
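The analyses were run in Stata and R; purely for illustration, the nested-model comparison could look like the following Python sketch (the data and variable names are hypothetical, not the study's dataset), with a likelihood ratio test and AIC computed for two logistic models:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical survey data (variable names are illustrative)
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "expect_abx": rng.integers(0, 2, n),   # expected antibiotics (1 = yes)
    "prior_abx":  rng.integers(0, 2, n),   # received antibiotics at a prior consult
    "knowledge":  rng.integers(0, 3, n),   # 0 poor, 1 moderate, 2 good
    "covid_test": rng.integers(0, 2, n),   # expected a COVID-19 test
})

base = smf.logit("expect_abx ~ prior_abx + C(knowledge)", data=df).fit(disp=False)
full = smf.logit("expect_abx ~ prior_abx + C(knowledge) + covid_test", data=df).fit(disp=False)

# Likelihood ratio test of the nested models, plus AIC comparison
lr_stat = 2 * (full.llf - base.llf)
p_value = stats.chi2.sf(lr_stat, df=full.df_model - base.df_model)
print(f"LRT = {lr_stat:.2f}, p = {p_value:.3f}")
print(f"AIC: base {base.aic:.1f} vs full {full.aic:.1f}")  # keep the lower-AIC model
```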
Results and discussion
3.1 Baseline characteristics of respondents
Overall, 31.0% (211/681) of patients were expecting antibiotics, while 8.7% (59/681) received antibiotics during the ED visit. Of patients expecting antibiotics, 15.6% (33/211) received an antibiotic prescription. shows the characteristics of patients expecting/not expecting antibiotics and patients who received/did not receive antibiotics. The mean age of participants was 34.5 (12.7) years, with a range of 21 to 88 years. Half of the patients were male (49.8%), 46.1% were of the Chinese race, 73.1% were Singaporeans, and 32.9% had tertiary education. Approximately a third (36.4%) of patients had a fever during the visit, 91.2% had no comorbidities, 69.6% had not seen another healthcare provider for the same episode of illness, and 81.3% had poor to moderate knowledge of antibiotics (scored <80% on the knowledge questionnaire).
3.2 Antibiotic expectation
There were no statistically significant differences between patients who expected antibiotics and those who did not expect antibiotics during their ED visit, except for prior health care consult for the same episode of illness and knowledge of antibiotics and AMR. A higher proportion of patients who were expecting antibiotics during the ED visit (14.7% vs. 2.8%, P < 0.001) had received antibiotics from a prior consult (primary care or specialist outpatient clinic) for the same episode of illness. A higher proportion of patients who were expecting antibiotics during the ED visit also had poor to moderate knowledge (89.2% vs. 77.9%, P = 0.001) of antibiotics and AMR.
3.3 Antibiotic receipt
There were no statistically significant differences between patients who received antibiotics and those who did not receive antibiotics during their ED visit, except for prior health care consult for the same episode of illness and expectation for antibiotics. A higher proportion of patients who received antibiotics during their ED visit had received antibiotics from prior consultations (primary care or specialist outpatient clinic) for the same episode of illness (20.3% vs. 5.1%, P < 0.001). A higher proportion of patients who received antibiotics had expected antibiotics during the ED visit (78.0% vs. 26.5%, P < 0.001).
3.4 Determinants of expectation for antibiotics
Patients with a prior clinical consultation for the same illness were more likely to expect antibiotics during the ED visit. Compared with patients without prior consultation, patients who received antibiotics during a prior consultation were 6.5 times (adjusted odds ratio [aOR]: 6.56, 95% confidence interval [CI] 3.30–13.11, P < 0.001) more likely to expect antibiotics, while patients who did not receive antibiotics during their prior consultation were 1.5 times (aOR: 1.50, 95% CI 1.01–2.23, P = 0.046) more likely to expect antibiotics during the ED visit ( ). Patients with poor (aOR: 2.16, 95% CI 1.26–3.68, P = 0.005) to moderate (aOR: 2.26, 95% CI 1.33–3.84, P = 0.002) knowledge of antibiotics and AMR were twice as likely to expect antibiotics compared with patients with good knowledge of antibiotics. In addition, patients expecting a COVID-19 test were 1.5 times (aOR: 1.56, 95% CI 1.01–2.41, P = 0.045) more likely to expect antibiotics ( ).
3.5 Determinants of antibiotic receipt
Patients expecting antibiotics during their ED visit were 10.6 times (aOR: 10.64, 95% CI 5.34–21.17, P < 0.001) more likely to receive antibiotics. Patients who received antibiotics during a prior consultation were thrice (aOR: 2.97, 95% CI 1.26–7.00, P = 0.013) as likely to receive antibiotics compared with patients with no prior consultation. Tertiary-educated patients were also twice (aOR: 2.20, 95% CI 1.09–4.43, P = 0.027) as likely to receive antibiotics. Although we did not observe statistical significance regarding severity of pre-existing comorbidity, the odds of receiving antibiotics increased with a higher severity of comorbidities compared with patients without any co-morbidity ( ).
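As a side note on how such estimates are read, an adjusted odds ratio and its 95% CI are the exponentiated logistic regression coefficient and its Wald interval. The sketch below uses an illustrative coefficient and standard error chosen to roughly reproduce the aOR of 10.64 reported above; they are not taken from the study's output:

```python
import math

# Illustrative logit coefficient and standard error (assumed values);
# exp(beta) gives the adjusted odds ratio, exp(beta +/- 1.96*SE) its Wald 95% CI.
beta, se = 2.3646, 0.3514

aor = math.exp(beta)
low = math.exp(beta - 1.96 * se)
high = math.exp(beta + 1.96 * se)
print(f"aOR {aor:.2f} (95% CI {low:.2f}-{high:.2f})")  # aOR 10.64 (95% CI 5.34-21.19)
```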
3.6 Reasons for expecting antibiotics
The top five reasons for patients expecting antibiotics in the emergency departments were: 1) feeling extremely unwell (73% agreement); 2) the perception that the illness would take longer to recover from without antibiotics (66% agreement); 3) previous experiences of receiving antibiotics for a similar illness (65% agreement); 4) prolonged symptoms without improvement (64% agreement); and 5) the perception that recovery from the illness is only possible with antibiotics (52% agreement). In addition, 48% agreed that antibiotics could boost their immunity; 45% felt that they had to obtain antibiotics because they were at the ED; 43% wanted antibiotics for standby; 38% had yellow/green phlegm; and 21% were influenced by their friends and/or relatives ( ). In addition to the Likert scale statements, patients mentioned other reasons for expecting antibiotics during their ED visit. A few patients mistakenly thought that antibiotics were effective in treating viruses (including cough and flu), resolving inflammation, and improving their immunity. Some thought that antibiotics could generate antibodies and treat or prevent any infection. One patient mistakenly thought of antibiotics as a ‘cure-all’ medication. A few patients thought that it was standard procedure for physicians to prescribe antibiotics at a medical consultation, as they had prior experiences receiving antibiotics for similar illnesses. One patient wanted a stronger antibiotic, as the previous antibiotic received did not ‘cure his/her illness’, while one thought that antibiotics could substitute for a sleeping pill. Another patient had concerns about developing URTI before his/her second dose of COVID-19 vaccination and was expecting antibiotics to speed up recovery from the URTI.
3.7 Antibiotic use behaviour
Four factors emerged from the factor analysis of antibiotic use behaviour. The first is the perception of the need for antibiotics. More than half of patients (56%) agreed that antibiotics are needed for a severe illness, while 46% agreed that antibiotics are needed if they do not feel better in the next few days. More than half of respondents disagreed that they would take or expect antibiotics to prevent/recover from the flu/cold during the COVID-19 pandemic ( ). The second factor is sharing and reusing antibiotics. Three quarters (75%) of patients disagreed that they would keep stocks of antibiotics at home for an emergency, while about 80% of respondents disagreed that they would save and use leftover antibiotics or share antibiotics with their friends and family members. The third and fourth factors had low internal consistency but show interesting findings on antibiotic use behaviour. The third factor is the instructional use of antibiotics. More than 90% of patients agreed that they would take antibiotics according to instructions and would trust the ED physician on the need to use antibiotics. The last factor involves concerns about the side effects of antibiotics. Seventy-nine per cent of patients agreed that they would stop taking antibiotics if they experienced side effects, but a smaller proportion (49%) agreed that they worry about the side effects of antibiotics. Our study explored patient-related factors associated with the expectation for and receipt of antibiotics for uncomplicated URTI in four EDs in Singapore during the COVID-19 pandemic. We also assessed the reasons patients expect antibiotics and their antibiotic use behaviours. Although these were extensively studied prior to the pandemic , , , , many studies were from western countries where different cultural settings and health systems can generate different patient expectations than those from Asian settings. We also took the perspective of patients seeking care for URTI in the EDs during surges of cases in the COVID-19 pandemic.
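For readers curious about the internal-consistency check mentioned in the Methods, here is a minimal sketch of Cronbach's alpha on simulated 5-point Likert items (the item set and data are hypothetical, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variance_sum / total_score_variance)

# Simulate three correlated 1-5 Likert items (e.g., a "sharing and reusing antibiotics" factor)
rng = np.random.default_rng(1)
latent = rng.integers(1, 6, size=(200, 1))                      # a common latent tendency
items = np.clip(latent + rng.integers(-1, 2, size=(200, 3)), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```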
Reasons for expecting antibiotics
The top five reasons patients expected antibiotics in the emergency departments were: 1) feeling extremely unwell (73% agreement); 2) the perception that the illness would take longer to recover from without antibiotics (66% agreement); 3) previous experiences of receiving antibiotics for a similar illness (65% agreement); 4) prolonged symptoms without improvement (64% agreement); and 5) the perception that recovery from the illness was only possible with antibiotics (52% agreement). In addition, 48% agreed that antibiotics could boost their immunity; 45% felt that they had to obtain antibiotics because they were at the ED; 43% wanted antibiotics on standby; 38% had yellow/green phlegm; and 21% were influenced by their friends and/or relatives.

In addition to the Likert-scale statements, patients mentioned other reasons for expecting antibiotics during their ED visit. A few patients mistakenly thought that antibiotics were effective in treating viruses (including cough and flu), resolving inflammation, and improving their immunity. Some thought that antibiotics could generate antibodies and treat or prevent any infection. One patient mistakenly thought of antibiotics as a 'cure-all' medication. A few patients thought that prescribing antibiotics was the standard procedure for physicians, as they had prior experiences receiving antibiotics for similar illnesses. One patient wanted a stronger antibiotic, as the previous antibiotic received did not 'cure his/her illness', while another thought that antibiotics could substitute for a sleeping pill. Another patient had concerns about developing URTI before his/her second dose of COVID-19 vaccination and expected antibiotics to speed up the recovery of the URTI.

Antibiotic use behaviour
Four factors emerged from the factor analysis of antibiotic use behaviour. The first is the perceived need for antibiotics. More than half of patients (56%) agreed that antibiotics are needed for a severe illness, while 46% agreed that antibiotics are needed if they do not feel better in the next few days. More than half of respondents disagreed that they would take or expect antibiotics to prevent or recover from the flu/cold during the COVID-19 pandemic. The second factor is sharing and reusing antibiotics. Three quarters (75%) of patients disagreed that they would keep stocks of antibiotics at home for an emergency, while about 80% of respondents disagreed that they would save and use leftover antibiotics or share antibiotics with their friends and family members. The third and fourth factors had low internal consistency but show interesting findings on antibiotic use behaviour. The third factor is the instructional use of antibiotics. More than 90% of patients agreed that they would take antibiotics according to instructions and would trust the ED physician on the need to use antibiotics. The last factor involves concerns about the side effects of antibiotics. Seventy-nine per cent of patients agreed that they would stop taking antibiotics if they experienced side effects, but a smaller proportion (49%) agreed that they worry about the side effects of antibiotics.
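Extracting behaviour factors and checking their internal consistency, as described above, is standard exploratory factor analysis. A minimal sketch follows; the `factor_analyzer` package, the data file, and all item names are assumptions, not the authors' pipeline:

```python
# Minimal sketch of exploratory factor analysis on Likert-scale behaviour
# items, plus a Cronbach's alpha check of one factor's internal
# consistency. Package choice and column names are assumptions.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("behaviour_items.csv")  # hypothetical 1-5 Likert codes

fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))  # assign each item to its highest-loading factor

def cronbach_alpha(block: pd.DataFrame) -> float:
    """Internal consistency of the items (columns) forming one factor."""
    k = block.shape[1]
    item_var = block.var(axis=0, ddof=1).sum()
    total_var = block.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical items forming the "sharing and reusing" factor.
print(cronbach_alpha(items[["keep_stock", "reuse_leftover", "share_family"]]))
```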
Discussion
Our study explored patient-related factors associated with the expectation for and receipt of antibiotics for uncomplicated URTI in four EDs in Singapore during the COVID-19 pandemic. We also assessed the reasons patients expect antibiotics and their antibiotic use behaviours. Although these were extensively studied prior to the pandemic, many studies were from Western countries, where different cultural settings and health systems can generate different patient expectations than those in Asian settings. We also took the perspective of patients seeking care for URTI in the EDs during surges of cases in the COVID-19 pandemic.

The COVID-19 pandemic seemingly had a minor effect on antibiotic expectation and prescribing rates for URTI despite changes in health-seeking behaviours. Approximately a third of patients (31%) expected antibiotics during their ED visit in our study, which is similar to a study (33%) conducted at one of our institutions pre-pandemic. The antibiotic prescribing rate in our study (~9%) was also similar to a pre-pandemic study that assessed antibiotic use for URTI in EDs in the United States. We also found that patients expecting antibiotics were 10.6 times more likely to receive them, although the same US study reported that physicians were 5.3 times more likely to prescribe antibiotics if they believed that patients expected antibiotics. The sheer number of COVID-19-related ED attendances should have brought the antibiotic prescribing rate down, but the uncertainties surrounding new variants of COVID-19 may have prompted physicians to loosen their prescribing criteria for anxious patients who perceived their illness as severe.

We found that the top reason for expecting antibiotics was the perceived severity of illness. This reason was also a predictor of antibiotic expectation in another study assessing the expectation for antibiotics in the Singapore primary care setting pre-pandemic. Poor knowledge of the indications for antibiotics was a significant predictor of antibiotic expectation pre-COVID-19 and has remained a strong predictor during the pandemic. The misconceptions that antibiotics improve immunity, help one recover faster from an illness, and are a cure-all medication exist in our study and were deep-seated among the public. These misconceptions likely arose from ingrained myths surrounding the effectiveness of antibiotics and from patients' past experiences with antibiotic use. We also found that patients with a prior clinical consultation with an antibiotic prescription were 6.6 times more likely to expect antibiotics during their ED consultation, highlighting the importance of antimicrobial stewardship in primary care and the role of primary care physicians in promoting appropriate antibiotic use for URTI. Highly educated patients were more likely to receive antibiotics, as these patients may have appeared more confident about their needs and could have challenged the physician's decision in their care. Given the diagnostic uncertainty of URTI and the time-strapped ED environment, physicians may compromise by prescribing antibiotics to patients, but further investigation is needed to support this hypothesis.

Antibiotic use behaviour was not substantially different pre- and during the COVID-19 pandemic, according to a community survey in Singapore. The survey found that 24% (a 3% increment from pre-pandemic) of respondents would expect antibiotics and 21% (a 2% increment from pre-pandemic) would take antibiotics to prevent their condition from getting worse during the pandemic.
Although a higher proportion of our study respondents (~30%) agreed with the above two statements, ED respondents, who were unwell at the point of the survey, could have perceived a lower health status than community respondents, who may not have been unwell when surveyed. One-fifth of respondents would keep stocks of antibiotics at home, and 10% to 16% would use or share them with their family without advice from a physician. Although most respondents did not agree to sharing or reusing antibiotics, these proportions have not improved over the years despite calls for action to change the public's antibiotic use behaviour.

Our study had several limitations. The hospital and national protocols regarding the COVID-19 pandemic were evolving during our data collection. We initially excluded patients with COVID-19 infection from the study because of a default hospital admission policy. However, with mass vaccination and the transition to home recovery for COVID-19 infections due to milder illness, we subsequently included them in our study if they were medically diagnosed with URTI. Patients' health-seeking behaviours could have varied at different periods of the pandemic. In addition, the antibiotic prescribing rate was self-reported by patients, which may differ from the rate actually prescribed by physicians. However, we expect the discrepancy between the self-reported and actual antibiotic prescribing rates to be low, as we verified the antibiotic prescribing rates against the electronic medical records of one ED in this study and observed the discrepancy to be minimal (<5%).

The pandemic provides an invaluable opportunity to leverage mass communication channels to educate the public on uncomplicated URTI, as well as to increase the public's knowledge of antibiotics and AMR. Furthermore, since prior experiences with antibiotics likely occurred in primary care, future work can explore interventions in primary care to address patients' expectations for antibiotics in the ED.

Conclusion
Patients with URTI who expected antibiotics remained more likely than those who did not to receive antibiotics during the COVID-19 pandemic. Perceived severity of illness and the perceived effectiveness of antibiotics in speeding up recovery were the top reasons for expecting antibiotics, while poor knowledge and prior experiences were strong predictors of expecting antibiotics. Our findings highlight an opportunity to leverage COVID-19 mass communication channels to educate the public on the non-necessity of antibiotics for URTI, addressing the problem of antibiotic misuse and AMR.

The final dataset is partially available on request. This project is supported by the National Medical Research Council Clinician Scientist Award (award number: MOH-CSAINV18may-0002). The authors declare that they have no competing interests.
Comparison of Gold Biosensor Combined with Light Microscope Imaging System with ELISA for Detecting Salmonella in Chicken
98b8b987-0931-4c44-a1c1-90b89d9af2f2
9998200
Microbiology[mh]
Poultry is the second most consumed meat, and its demand is continuously increasing owing to rapid production in automated processing facilities and its affordable price. Moreover, poultry is the most common source of Salmonella, which is a threat to human health worldwide. A study reported that 279 of 1,114 outbreaks (25%) from 1998 to 2012 in the U.S. were linked to poultry. Of these 279 outbreaks, 149 could be traced back to several confirmed pathogens, including Salmonella spp. (43%), Clostridium perfringens (26%), Campylobacter (7%), Staphylococcus aureus (5%), Bacillus cereus (3%), and Listeria monocytogenes (3%). Several preventive and hygiene measures have been developed and implemented for controlling Salmonella in processing facilities, using post-chill immersion tanks as well as spray applications with various antimicrobials, such as sulfuric acid, sodium sulfate, sodium chloride, calcium hypochlorite, organic acids, and bacteriophage solutions. However, these measures are limited by their low effectiveness (~1 log reduction) and the rapid development of resistance in Salmonella. Although the conventional culture method is recognized as the "gold standard" of detection, its time-consuming and labor-intensive procedures remain problematic for on-site use. Numerous biosensor methods have been developed for use in clinical diagnostics, environmental monitoring, and foodborne pathogen detection. A biosensor consists of a bioreceptor that identifies and binds a specific target and a transducer that registers the binding of the bioreceptor with the target on the sensor platform. In the past two decades, antibodies, as one of the major classes of bioreceptors, have been commonly used in various biosensor methods owing to their excellent binding capability with each target pathogen. However, only a few biosensors have been practically used in food processing facilities for monitoring and detecting foodborne pathogens. Non-specific binding of the food matrix, when a biosensor is employed on a food sample, can interfere with or block the binding of target pathogens to bioreceptors, resulting in a significant reduction in the sensitivity, specificity, and reliability of biosensor methods. Herein, a gold biosensor combined with a light microscope imaging system (GB-LMIS) was developed by our research group. The method employing the GB-LMIS is based on the binding of antibodies with target pathogens on a gold sensor, following almost the same principle as the enzyme-linked immunosorbent assay (ELISA). The difference is the introduction of a gold sensor in the GB-LMIS for the immobilization of antibodies; moreover, no extra enzymes or secondary antibodies are required for quantifying target pathogens. In the GB-LMIS, a square-cut section of glass is coated with a nanometer-scale thin gold layer to facilitate antibody binding. Upon placing the antibody-immobilized sensor in food, the antibodies on the sensor bind the foodborne pathogen. The bound target pathogen on the sensor is visualized and enumerated using a light microscope equipped with a charge-coupled device (CCD) camera. So far, the GB-LMIS has been employed to detect Escherichia coli O157:H7 in turnip greens and L. monocytogenes in chicken as target pathogens. By exploiting the binding of a foodborne pathogen with a specific antibody on the sensor, the GB-LMIS can capture and visualize target pathogens, thereby aiding their differentiation from unavoidable food-matrix components.
This study aimed to compare the performance of the GB-LMIS and ELISA for detecting Salmonella in chicken using anti-Salmonella polyclonal antibodies (pAbs), as an on-site applicable detection method for poultry processing facilities.

Bacteria and Culture Conditions
The bacterial species tested in this study were obtained from the Food Microbiology Laboratory at Auburn University (USA). Salmonella Typhimurium and S. Enteritidis were incubated in 20 ml of Trypticase Soy Broth (TSB, Difco Laboratories Inc., USA) for 16 h at 37°C. After cultivation, each bacterial culture was washed 3 times with phosphate-buffered saline (PBS, pH 7.2, Sigma-Aldrich Co., USA) by centrifugation at 5,000 × g for 5 min. The precipitated cells were resuspended in PBS, and each bacterial concentration was determined using a preconstructed standard curve. A Salmonella cocktail was prepared by mixing equal amounts of S. Typhimurium and S. Enteritidis. The other bacterial species, except Listeria spp., were cultured in TSB, whereas the two Listeria strains were cultured in TSB containing 0.6% yeast extract (TSBYE) for 16 h at 37°C. After cultivation, each bacterial culture was washed and centrifuged to prepare a bacterial suspension according to the abovementioned procedures.

Purification of Anti-Salmonella pAbs
Ascites fluid containing anti-Salmonella pAbs (Hybridomas Laboratory, Auburn University) was produced from a New Zealand White rabbit immunized against the Salmonella cocktail and purified through ammonium sulfate precipitation and protein A affinity column chromatography (Sigma Chemical Co.). After confirming purity using 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), the concentration of the purified anti-Salmonella pAbs was determined using the Bradford method.

Preparation of Gold Sensor
A glass square (5 mm × 5 mm) with a thickness of 0.17 mm was cut using a micro-dicing saw (MPE Inc., USA). After ultrasonic cleaning, the sensor was cleaned further using acetone, ethanol, and filtered distilled water (FDW). The cleaned sensor was coated with Cr and Au to a thickness of 40 nm using a Pelco SC-6 sputter coater (Ted Pella Inc., USA).

Reactivity and Specificity of Anti-Salmonella pAbs Using ELISA
For the reactivity of anti-Salmonella pAbs, 100 μl of the Salmonella cocktail (10^8 CFU/ml) was placed in an ELISA plate (Costar, USA) and incubated at 37°C for 1 h. After washing 3 times with 200 μl of PBS containing 0.1% Tween 20 (PBST), the unbound area of the wells was blocked with 200 μl of 1% bovine serum albumin (BSA, Sigma-Aldrich Co.) for 1 h at 37°C, followed by washing thrice with PBST. An aliquot of 100 μl of anti-Salmonella pAbs (0.6–400 μg/ml) was added to each well and incubated at room temperature (RT) for 2 h. After washing with PBST, 100 μl of alkaline phosphatase-conjugated anti-rabbit goat IgG (0.5 μg/ml, Sigma-Aldrich Co.) was added and incubated for 1 h at RT. Finally, after washing 3 times with PBST, 100 μl of p-nitrophenyl phosphate (pNPP, Sigma Chemical Co.) was added as a substrate, and the absorbance of each well was measured at 405 nm using a microplate reader (Thermo Labsystems, Finland). After 15 min of incubation in the dark at RT, the absorbance was measured again. The absorbance results are expressed as means of the absorbance difference with standard deviations. For determining the specificity of anti-Salmonella pAbs, 100 μl of each bacterial suspension was incubated at 37°C for 1 h. After washing with PBST, the unbound area was blocked with 200 μl of 1% BSA at 37°C for 1 h.
Finally, 100 μl of anti-Salmonella pAbs (50 μg/ml) was added, and the abovementioned procedures were performed. A cutoff value was set at the mean of the negative control plus 0.25 OD units.

Reactivity and Specificity of Anti-Salmonella pAbs Using GB-LMIS
A gold sensor was immobilized with 100 μl of anti-Salmonella pAbs at various concentrations (3.0–400 μg/ml) against the Salmonella cocktail to evaluate their reactivity. For determining specificity, the gold sensor was immobilized with a fixed amount of anti-Salmonella pAbs (100 μg/ml) and tested against various foodborne pathogens. A control sensor was immobilized with 100 μl of DW. After incubation at 37°C for 1 h, the sensor was washed 3 times with PBS, and the unbound areas of the sensor were blocked with 100 μl of 1% BSA at RT for 1 h. The blocked sensor was then washed 3 times with PBS and air-dried for use as an immunosensor. The immunosensor was incubated at 22°C for 1 h with 100 μl of the Salmonella cocktail (10^8 CFU/ml) to determine the reactivity of the anti-Salmonella pAbs, or with the other bacterial suspensions (10^8 CFU/ml) to determine their specificity. After incubation, the immunosensor was washed with FDW, dried, and treated with 4% OsO4 (Sigma-Aldrich Co.) for 1 h. The bacteria captured on the immunosensor were observed under a light microscope equipped with a CCD camera (Nikon Eclipse L 150, Nikon Instruments Inc., USA) at 1,000× magnification. Captured bacteria were enumerated in 10 selected areas on the surface of the immunosensor; the detected number of bacteria was taken as the average count per area and expressed as cells per mm^2 (cells/mm^2).

Comparison of GB-LMIS with ELISA for Salmonella Detection in Chicken After Exposure to Chilling Conditions
Chicken skins were randomly collected from Koch Food Company (USA) and sliced into 10 cm × 10 cm samples. To minimize contamination, the chicken skin was washed with 200 ppm chlorine solution (Sigma-Aldrich Co.) and sterilized DW. Then, 200 ml of the Salmonella cocktail was inoculated onto the chicken at concentrations ranging from 10^1 to 10^3 CFU/100 cm^2. An equal amount of PBS was added onto other chicken skins as negative controls. The inoculated chicken skins were dried in a biosafety cabinet to allow bacterial attachment and placed in an Erlenmeyer flask prior to incubation in a refrigerator (4°C) for 48 h. Next, 100 ml of brain heart infusion (BHI, EMD Science, Germany) or brilliant green (BG, Difco Laboratories Inc.) broth was added to each flask and incubated at 37°C in an orbital shaker at 250 rpm. Then, 100 μl of sample was collected from the BHI and BG broths at 0, 2, 4, and 6 h, and the resuscitated bacterial population was measured on xylose lysine deoxycholate agar (Difco Laboratories Inc.) and recorded as log CFU/chicken for comparison. Subsequently, 20 ml of sample was obtained from both broths and washed 3 times by centrifugation at 4,000 × g for 20 min. After resuspension in 1 ml of PBS, 100 μl of the Salmonella suspension was used for ELISA and GB-LMIS, as described in the previous sections. The results are expressed as log CFU/chicken for the comparison.

Statistical Analysis
Experimental results are expressed as mean ± SD. Comparisons between treatments and/or groups were performed using one-way analysis of variance with Tukey's multiple comparison test and Student's paired t-test. Statistical analysis was performed using GraphPad InStat v.3 (GraphPad, USA).
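The GB-LMIS readout described in the section above reduces to simple arithmetic: average the counts from the ten imaged areas and normalise by the imaged field area. A minimal sketch follows; the field area value is a placeholder that depends on the objective and CCD geometry, not a value from the paper:

```python
# Minimal sketch of the GB-LMIS enumeration step: mean count over the
# 10 selected areas divided by the area of one imaged field.
from statistics import mean

def cells_per_mm2(field_counts, field_area_mm2):
    """Average bacteria per field, normalised to cells/mm^2."""
    return mean(field_counts) / field_area_mm2

counts = [12, 9, 15, 11, 10, 13, 8, 14, 12, 11]   # illustrative counts
print(f"{cells_per_mm2(counts, field_area_mm2=0.01):.0f} cells/mm^2")
```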
The successful performance of an antibody-based detection method is absolutely dependent on the reactivity and specificity of the antibody. Anti-Salmonella pAbs (6.5 mg/ml) were purified through ammonium sulfate precipitation and protein A affinity column chromatography.
The reactivity of the anti-Salmonella pAbs was determined using GB-LMIS and ELISA. The binding of Salmonella on the immunosensor increased significantly with antibody concentration up to 100 μg/ml (p < 0.05) and then remained steady, with no significant differences at higher concentrations. Therefore, the optimum concentration of anti-Salmonella pAbs for GB-LMIS was 100 μg/ml. In ELISA, the reactivity of anti-Salmonella pAbs with Salmonella likewise increased significantly up to 12.5 μg/ml (p < 0.05); further increases in antibody concentration had no significant positive influence on binding reactivity. Therefore, the optimum concentrations of anti-Salmonella pAbs were determined to be 12.5 and 100 μg/ml for ELISA and GB-LMIS, respectively; the optimum concentration for GB-LMIS was approximately 8-fold higher than that for ELISA. Since poultry may harbor other microorganisms, such as Micrococcus, Pseudomonas, E. coli, L. monocytogenes, S. aureus, Campylobacter jejuni, and Salmonella, the antibody must react with its target among heterogeneous microorganisms. For the specificity comparison of the anti-Salmonella pAbs, the ELISA cutoff value was 0.379, based on the mean of the negative control (0.129) plus 0.25 OD units. The anti-Salmonella pAbs demonstrated significantly greater specificity against all Salmonella strains tested using both methods (p < 0.001). Although a few non-target bacteria were captured on the immunosensor, their numbers were negligible and similar to those on the control sensors (devoid of anti-Salmonella pAbs). Thus, the anti-Salmonella pAbs were specific for Salmonella, producing greater absorbance in ELISA and greater bacterial binding on the GB-LMIS immunosensor. The purified anti-Salmonella pAbs demonstrated sufficient specificity against S. Typhimurium, S. Enteritidis, and S. Heidelberg, which are representative strains in poultry. More importantly, the GB-LMIS exhibited a competitive and robust specificity compared with ELISA. Under US regulations, poultry carcasses should be chilled to ≤4.4°C for a certain period to ensure a high-quality and safe product. To simulate this chilling condition, chicken was inoculated with the Salmonella cocktail and then held at 4°C for 48 h. A previous study revealed that the minimum growth temperature of Salmonella in poultry was 5°C; thus, it was hypothesized that Salmonella inoculated on chicken and exposed to 4°C for 48 h might be injured. An enrichment procedure is inevitably required to reach a detectable number of bacteria (the detection limit) and to resuscitate injured Salmonella to prevent false-negative results. The enrichment procedure increases the number of Salmonella and recovers cells injured during the chilling period, although it may increase the total detection time and diminish the on-site applicability of the GB-LMIS. As our previous study showed that BHI and BG broths were the most efficient non-selective and selective media, respectively, for Salmonella on chicken, these two media were selected for culturing Salmonella. The populations of Salmonella in BHI and BG enrichment broths after chilling at 4°C for 48 h were compared at 2-h intervals with those not exposed to chilling. Bacterial growth increased with enrichment time and inoculation concentration.
However, the overall growth of Salmonella under the chilling condition was significantly lower than without exposure to chilling (p < 0.05). No significant differences in bacterial growth were observed between BHI and BG broth over the whole incubation time, as long as the initially inoculated bacterial concentration was the same (p > 0.05). These results confirmed the suitability of both enrichment broths for the recovery of Salmonella injured by chilling and provided an approximate estimate of the bacterial growth rate during the 6-h enrichment period. Finally, both methods were employed to detect Salmonella at 2-h intervals. Unlike the GB-LMIS, ELISA requires a conversion step to express the number of Salmonella from OD values using the standard-curve equation Y = 0.159X − 0.189. Based on a previous study, the detection limit of the GB-LMIS for Salmonella was determined to be 10^3 CFU/sensor. Both methods could detect Salmonella on chicken samples with initial inoculation concentrations of 10^2 and 10^3 CFU after a 4-h enrichment period, and with initial concentrations of 10^1, 10^2, and 10^3 CFU after a 6-h enrichment period. Consistent with the slightly greater populations of Salmonella in BHI, the detected bacterial numbers in BHI were greater than those in BG for both methods. Although a greater number of Salmonella was detected using ELISA than GB-LMIS, no significant differences were observed between the tested methods, except for chicken samples with an initial inoculation concentration of 10^2 CFU and a 4-h enrichment period, and those with an initial inoculation concentration of 10^1 CFU and a 6-h enrichment period (p < 0.05). Higher numbers of Salmonella were detected using ELISA because its quantification relies on a sensitive enzyme reaction. The enzyme used in ELISA is generally conjugated to a secondary antibody and requires a substrate to react with; the introduction of a secondary antibody and substrate adds incubation time and washing steps. Meanwhile, the GB-LMIS could detect and visualize Salmonella without additional chemicals, demonstrating a comparable detection capability in an easier and simpler manner. There was some potential influence of the media (broth) and/or interference of the food matrix on the performance of both methods. Other studies showed that Rappaport–Vassiliadis (RV) medium reduced the sensitivity of ELISA, although RV medium was more effective at increasing the number of Salmonella than other media. In a previous study, an unknown component of RV medium was found to affect the expression of the antigenic epitope, thereby interfering with antigen–antibody binding. The GB-LMIS captured the target pathogen through antigen–antibody binding on the sensor and enabled visualization in a user-friendly and rapid manner. As shown in the specificity comparison, the GB-LMIS exhibited a competitive and robust specificity for detecting Salmonella without any enzyme labeling of the antibody to enhance reactivity, as is needed in ELISA. The GB-LMIS is cost effective because there is no need for enzyme or fluorescent conjugation to quantify bacterial binding. In contrast, the need for a label-conjugated secondary antibody in ELISA increases the detection time and decreases its practicability as an on-site applicable method in the food industry.
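The ELISA conversion step quoted above (Y = 0.159X − 0.189) is a linear standard curve; assuming Y is the measured OD and X the Salmonella count in log CFU, recovering the count is a one-line inversion. A minimal sketch:

```python
# Minimal sketch inverting the linear ELISA standard curve
# Y = 0.159 * X - 0.189, assuming Y = OD405 and X = log CFU.
def od_to_log_cfu(od: float, slope: float = 0.159, intercept: float = -0.189) -> float:
    """Recover the log count from an OD reading via the calibration line."""
    return (od - intercept) / slope

for od in (0.45, 0.76, 1.08):  # illustrative OD readings
    print(f"OD {od:.2f} -> {od_to_log_cfu(od):.2f} log CFU")
```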
Although the required antibody concentration for the GB-LMIS was 8-fold higher than that for ELISA, the GB-LMIS overcame the limitations of ELISA without the conversion step required after OD measurement. Thus, the GB-LMIS was the more cost-effective and time-effective method, decreasing the detection cost from ~$1.80 to ~$0.31 and the detection time from ~5.5 to ~2.5 h, excluding the enrichment period. Although the enrichment period increases the overall detection time, the bacterial concentration must reach at least a detectable level regardless of the detection method. Hence, it can be concluded that the GB-LMIS is a feasible, novel, and rapid method for detecting Salmonella in poultry facilities.
Microbacterium elymi sp. nov., Isolated from the Rhizospheric Soil of Elymus tsukushiensis on the Dokdo Islands, Republic of Korea
00d7d844-ae40-467b-925e-deb2c50be303
9998209
Microbiology[mh]
The genus Microbacterium was first described by Orla-Jensen on the basis of D-lactic acid-producing bacteria in milk. Currently, the genus Microbacterium comprises 157 species, including Microbacterium aerolatum V-73^T, M. agarici CC-SBCK-209^T, M. album SYSU D8007^T, M. algeriense G1^T, M. amylolyticum N5^T, M. aoyamense KV-492^T, and M. aquimaris JS54-2^T. Microbacterium can be isolated from various sources, such as seawater, desert soil, maize rhizosphere, cow dung, and microfiltered milk. Members of this genus are Gram-stain-positive, rod-shaped, and have an optimum growth temperature of 20–30°C. Here, we report a taxonomic analysis of the novel bacterial strain KUDC0405^T, isolated from the rhizospheric soil of Elymus tsukushiensis, a plant native to the Dokdo Islands (37°14′24.2′′N, 131°52′12.2′′E). During microbial diversity monitoring in April 2014, rhizospheric soil samples were collected from native plants of the Ulleungdo and Dokdo Islands (Ulleung-gun, Gyeongsangbuk-do). E. tsukushiensis var. trasiens (Hack.) Osada is native to the Dokdo Islands, where it is the dominant plant species and its distribution is expanding. The Dokdo Islands are a group of volcanic islands located in the middle of the East Sea, east of the Republic of Korea (37°14′24.2′′N, 131°52′12.2′′E; Ulleung-gun, Gyeongsangbuk-do). The island group consists of two large islands (Dong-do and Seo-do) and 89 small islets, and high concentrations of Ba, K, and Rb have been detected in the soil. Despite the harsh environment for plants, various plants inhabit Dokdo, and useful microorganisms and novel bacterial species associated with these plants have been discovered and characterized for plant growth-promoting traits. Some of these microorganisms include Brevibacterium iodinum KUDC1716, Ochrobactrum lupini KUDC1013, and Novosphingobium pentaromativorans KUDC1065.

Microorganism Isolation and Cultivation
Plant samples were collected and stored as described previously. The samples were suspended in 0.85% NaCl (w/v), and serial dilutions (10^-4–10^-6) were prepared. A 100 μL aliquot of each dilution was plated onto R2A and 1/10-strength tryptic soy agar (TSA) (Difco, USA) and incubated at 25°C for 7 days. Morphologically distinct colonies were selected, and individual colonies were further purified by repeated streaking onto TSA. The type strains used in this study were obtained from the China General Microbiological Culture Collection Centre (CGMCC) and the Korea Collection for Type Cultures (KCTC). The strains were cultivated on TSA at 25°C and maintained at -70°C in saline solution (0.85% NaCl, w/v) supplemented with 15% glycerol (v/v).

16S rRNA Gene Sequencing and Phylogenetic Analysis
The phylogenetic position of the isolated strain was determined based on a comparative analysis of the 16S rRNA gene sequence. The 16S rRNA gene sequence was amplified, and the PCR products were purified as described previously. The universal primer pair 518F (5′-CCA GCA GCC GCG GTA ATA CG-3′) and 800R (5′-TAC CAG GGT ATC TAA TCC-3′) was used for amplification. Direct sequencing of the 16S rRNA gene was conducted by Macrogen ( http://macrogen.com/ ) using the sequencing primers (518F and 800R) and an automated sequencer (ABI 3730xl; Applied Biosystems, USA). The EzBioCloud server ( https://ezbiocloud.net/ ) was used to identify the closest phylogenetic neighbors and calculate pairwise 16S rRNA gene sequence similarity values with respect to the novel strain.
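Pairwise 16S rRNA similarity values such as those reported by EzBioCloud are, at heart, counts of identical aligned positions. A minimal sketch of the computation, with illustrative sequences rather than the study's data:

```python
# Minimal sketch of pairwise 16S rRNA sequence identity over an
# alignment, ignoring columns where either sequence has a gap.
def pairwise_identity(seq_a: str, seq_b: str) -> float:
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

print(f"{pairwise_identity('ACGTTCGT-A', 'ACGTACGTTA'):.2f}% identity")
# -> 88.89% over the 9 ungapped columns
```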
All 16S rRNA gene sequences of the closest type strains were aligned using CLUSTAL_W, and the sequence data were analyzed using BioEdit. Gaps at the 5′ and 3′ ends of the alignment were omitted from further analysis. Bayesian inference was performed using a Markov chain Monte Carlo (MCMC) algorithm in MrBayes v.3.2.7a. Six simultaneous Markov chains were run for 1,000,000 generations; trees were started from random trees and saved every 1,000th generation, and burn-in was set at 25% of generations. The Jukes–Cantor model was used to generate an evolutionary distance matrix for the neighbor-joining algorithm. Phylogenetic trees were constructed using the neighbor-joining algorithm in PHYLIP version 3.696. The SEQBOOT and CONSENSE programs in the PHYLIP package were used to evaluate the resulting tree topologies by bootstrap analysis with 1,000 replications. Additional phylogenetic trees were inferred with the maximum-parsimony and maximum-likelihood algorithms in MEGA-X 10.1.8 with 1,000 bootstrap replicates; the maximum-likelihood phylogeny was generated using the Kimura two-parameter model. Rathayibacter rathayi VKM Ac-1601^T, which is not affiliated with the genus Microbacterium, was used as an outgroup. Trees were rooted and exported in Newick format using MEGA-X.

Whole-Genome Sequence Analysis
Chromosomal DNA was extracted in accordance with Sambrook et al. and purified as described by Yoon et al. The cell biomass for DNA extraction was cultured on TSA at 30°C for 2 days. The complete genome of strain KUDC0405^T was sequenced using a MinION platform (Oxford Nanopore Technologies, UK). The reads were assembled de novo using Flye (version 2.9). The automated NCBI Prokaryotic Genome Annotation Pipeline (PGAP) and the Rapid Annotation using Subsystems Technology (RAST) server were used to annotate the complete genome sequence. Biosynthetic gene clusters (BGCs) for secondary metabolites were identified using the antiSMASH server (version 6.0). The average nucleotide identity (ANI) values, based on the BLAST+ algorithm (ANIb) and the MUMmer ultra-rapid alignment tool (ANIm), between strain KUDC0405^T and closely related strains were calculated using the JSpeciesWS website ( https://jspecies.ribohost.com/jspeciesws/ ). The average amino acid identity (AAI) values were obtained using the Kostas lab server ( http://enve-omics.ce.gatech.edu/ ). Digital DNA–DNA hybridization (dDDH) values were calculated using the server-based Genome-to-Genome Distance Calculator version 2.1 ( http://ggdc.dsmz.de/distcalc2.php ); the dDDH results were based on the recommended formula 2 (identities/HSP length). Orthologous genes were identified between strain KUDC0405^T and its two closest relatives, M. bovistercoris NEAU-LLE^T and M. pseudoresistens CC-5209^T, using protein sequences annotated as described by Hyatt et al. and an OrthoVenn diagram. A multi-locus species tree based on the whole-genome sequences of each reference strain was constructed using autoMLST ( http://automlst.ziemertlab.com ).

Morphological Characteristics
Cell morphology and size of strain KUDC0405^T were examined with a Zentech digital camera (Sw 804255; Samwon Optics and Seige, Korea) using cells grown on TSA. For electron microscopy, the cells were treated with 1% osmium tetroxide at 25°C for 1 h and dehydrated with a graded series of ethanol (50%, 70%, 80%, 90%, and 100%), followed by isoamyl acetate.
After lyophilization, the samples were coated with platinum (20 mA, 90 s), and the cell morphology was observed using a field emission scanning electron microscope (FE-SEM, SU8220; Hitachi, Japan).

Growth Capabilities and Motility
Growth of strain KUDC0405^T and the reference strains ( M. bovistercoris NEAU-LLE^T and M. pseudoresistens CC-5209^T ) was tested at different temperatures (4, 10, 25, 30, 37, and 42°C) and different pH values (pH 3–12, in 1-pH-unit intervals). The pH of sterilized TSA was adjusted as described by Kämpfer et al. Growth at different NaCl concentrations (0–10% w/v, in 0.5% increments) was evaluated in tryptic soy broth (TSB). Oxidase activity was determined using BBL Oxidase Reagent (Becton Dickinson Biosciences, USA), and catalase activity was evaluated in 3% (v/v) hydrogen peroxide solution.

Physiological and Chemotaxonomical Characteristics
To determine the hydrolysis of starch, urea, and Tween 20, 40, 60, and 80, the isolate was cultured on TSA at 30°C for a week, as described by Cowan and Steel. Enzyme activities were evaluated using the API ZYM and API 20NE kits (bioMérieux, France), and acid production was tested with the API 50CH kit (bioMérieux, France) according to the manufacturer's instructions at 30°C for 2 days. The related type strains, M. bovistercoris NEAU-LLE^T and M. pseudoresistens CC-5209^T, were analyzed under the same conditions. The cell-wall peptidoglycan was analyzed using an amino acid analyzer (L-8900; Hitachi, Japan). Polar lipids were analyzed by two-dimensional thin-layer chromatography (TLC) according to Minnikin et al. Fatty acid composition was analyzed using the Microbial Identification System with cells of strain KUDC0405^T and the reference strains incubated on TSA at 30°C for 7 days. To determine siderophore production by strain KUDC0405^T, chrome azurol S (CAS) media were used as previously described.
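Returning to the tree-building step described above: although the authors used PHYLIP and MEGA-X, the neighbour-joining procedure itself is compact. A minimal Biopython sketch, under the assumption that an aligned 16S FASTA file is available:

```python
# Minimal sketch of neighbour-joining tree construction from an aligned
# 16S rRNA FASTA, rooted on the outgroup used in the study. This is an
# illustrative Biopython equivalent, not the authors' PHYLIP/MEGA-X runs.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("aligned_16s.fasta", "fasta")       # hypothetical file
dm = DistanceCalculator("identity").get_distance(aln)  # p-distance matrix
tree = DistanceTreeConstructor().nj(dm)
tree.root_with_outgroup("Rathayibacter_rathayi_VKM_Ac-1601")
Phylo.write(tree, "microbacterium_nj.nwk", "newick")
```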
Identification by 16S rRNA Gene Sequencing and Phylogenetic Analysis
The 16S rRNA gene sequence of strain KUDC0405^T (1,490 bp) was determined as described above. The strain exhibited the highest 16S rRNA gene sequence similarity (97.72%) with M. bovistercoris NEAU-LLE^T, followed by M. pseudoresistens CC-5209^T (97.58%), M. resistens NBRC 103078^T (97.51%), M. oleivorans NBRC 103075^T (97.51%), M. testaceum NBRC 12675^T (97.51%), and M. paraoxydans NBRC 103076^T (97.30%). A comparison of the preliminary 16S rRNA gene sequences revealed that strain KUDC0405^T is related to members of the genus Microbacterium. In the Bayesian inference, neighbor-joining, maximum-likelihood, and maximum-parsimony trees, strain KUDC0405^T grouped with M. bovistercoris NEAU-LLE^T and M. pseudoresistens CC-5209^T while exhibiting a distinct phylogenetic lineage, indicating that it is a novel species belonging to the genus Microbacterium. The GenBank/EMBL/DDBJ accession numbers for the partial 16S rRNA gene sequence and the genome sequence of KUDC0405^T are MT071892 and CP091139, respectively.

Genomic Analysis
The complete genome has been deposited in the NCBI GenBank database under accession number GCF_021582895 (KUDC0405^T). The complete genome of strain KUDC0405^T consists of a single circular chromosome (3,610,832 bp). The genomic DNA G+C content was 70.4%, which is within the range reported for the genus Microbacterium. A total of 3,654 genes were identified, of which 3,018 were protein-coding genes and 52 were RNA genes (3 rRNA, 46 tRNA, and 3 ncRNA genes).
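The reported G+C content (70.4%) is a direct computation over the assembled chromosome. A minimal sketch with recent Biopython (gc_fraction replaces the older GC helper); the FASTA filename is hypothetical:

```python
# Minimal sketch computing genomic G+C content from the assembled
# chromosome FASTA (filename is a placeholder).
from Bio import SeqIO
from Bio.SeqUtils import gc_fraction

record = SeqIO.read("KUDC0405_chromosome.fasta", "fasta")
print(f"G+C content: {100 * gc_fraction(record.seq):.1f}%")  # expect ~70.4
```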
The genomic features of strain KUDC0405^T and closely related strains are summarized in the referenced table. The genome of strain KUDC0405^T displayed 256 subsystems according to genome annotation using RAST. Genes were predicted for various metabolic processes, such as the metabolism of amino acids and derivatives (311 genes), carbohydrates (247 genes), cofactors, vitamins, prosthetic groups, and pigments (153 genes), proteins (151 genes), nucleosides and nucleotides (106 genes), and DNA (60 genes), as well as virulence, disease, and defense (38 genes), membrane transport (33 genes), respiration (33 genes), and other metabolic processes. KUDC0405^T did not appear to be motile, and the genome contained no genes encoding proteins associated with motility. An OrthoVenn diagram revealed the orthologous protein clusters of strain KUDC0405^T, M. bovistercoris NEAU-LLE^T, and M. pseudoresistens CC-5209^T. Strain KUDC0405^T contained 4,260 proteins, forming 2,019 orthologous clusters and 2,106 singletons; all three strains shared 1,510 orthologous protein clusters. AntiSMASH 6.0 revealed that KUDC0405^T has five putative BGCs responsible for the biosynthesis of secondary metabolites: ectoine (20,987–31,382 nt), T3PKS1 (1,168,528–1,207,332 nt), T3PKS2, an RRE-containing cluster (2,647,239–2,707,971 nt), and terpene (3,487,598–3,508,218 nt). The terpene cluster exhibited high similarity (>80%) to a reported carotenoid gene cluster. The KUDC0405^T genome was compared with those of M. bovistercoris NEAU-LLE^T and M. pseudoresistens CC-5209^T; all values were notably lower than their thresholds (ANI, ~95%; AAI, ~95%; dDDH, ~70%). In the case of ANIm, <20% of the genome was aligned for M. bovistercoris NEAU-LLE^T, and the alignment was flagged as suspicious by the software. In silico AAI values were 64.7% between strain KUDC0405^T and M. bovistercoris NEAU-LLE^T, and 65.0% between strain KUDC0405^T and M. pseudoresistens CC-5209^T. The dDDH values for strain KUDC0405^T against M. bovistercoris NEAU-LLE^T and M. pseudoresistens CC-5209^T were 17.3% and 17.5%, respectively, based on formula 2 (identities/HSP length). The ANIb, ANIm, AAI, and dDDH values of strain KUDC0405^T compared with those of the closely related strains are presented in the referenced table. These genome data clearly indicate that strain KUDC0405^T represents a novel species of the genus Microbacterium.

Phenotypic, Physiological, and Biochemical Characteristics
Strain KUDC0405^T was Gram-stain-positive, non-spore-forming, non-motile, and facultatively anaerobic on TSA. Colonies on TSA were smooth, circular, and yellowish-white, and the cells were rod-shaped (0.3–0.4 × 0.7–0.8 μm). Growth was observed at 25–37°C (optimum, 25–30°C), at pH 6–8 (optimum, pH 7), and with 0–7.0% NaCl (w/v) (optimum, 0.5–1.2%) on TSA under aerobic conditions. KUDC0405^T produces siderophores, iron-chelating compounds secreted by microorganisms that facilitate iron transport to the cell membranes of plants. The cells were oxidase- and catalase-positive.
In API ZYM tests, the strain was positive for acid phosphatase, alkaline phosphatase, cystine arylamidase, esterase, esterase lipase, leucine arylamidase, lipase, N-acetyl-β-glucosaminidase, naphthol-AS-BI-phosphohydrolase, trypsin, valine arylamidase, α-glucosidase, α-mannosidase, β-glucuronidase, and β-glucosidase, but negative for α-chymotrypsin, α-fucosidase, α-galactosidase, and β-galactosidase. In API 50CH tests, acid was produced from 5-ketogluconate, aesculin, arbutin, cellobiose, D-arabinose, D-lyxose, D-tagatose, D-turanose, erythritol, fructose, galactose, glucose, L-arabinose, L-xylose, maltose, mannitol, mannose, melezitose, rhamnose, ribose, salicin, sorbose, starch, sucrose, and trehalose, but not from 2-ketogluconate, adonitol, amygdalin, D-arabitol, D-fucose, dulcitol, gentiobiose, gluconate, glycerol, glycogen, inositol, inulin, L-arabitol, L-fucose, lactose, melibiose, methyl-α-D-glucoside, methyl-α-D-mannoside, methyl-β-D-xyloside, N-acetylglucosamine, raffinose, sorbitol, or xylitol. API 20NE tests were positive for glucose fermentation/oxidation, esculin hydrolysis, and nitrate reduction, but negative for indole production, arginine dihydrolase, urease, gelatin hydrolysis, and the assimilation of adipic acid, capric acid, and trisodium citrate. Strain KUDC0405^T hydrolyzed Tween 20, 40, and 80 but not Tween 60, starch, or urea.

Chemotaxonomic Characteristics
The predominant menaquinone in KUDC0405^T was MK-12. The polar lipids comprised diphosphatidylglycerol, a glycolipid, phosphatidylglycerol, an unidentified phospholipid, three unidentified aminolipids, and an unidentified lipid. Diphosphatidylglycerol, glycolipid, and phosphatidylglycerol were commonly identified in KUDC0405^T, M. bovistercoris NEAU-LLE^T, and M. pseudoresistens CC-5209^T. Strain KUDC0405^T contained glycine (32.4%), ornithine (26.8%), alanine (25.4%), and glutamic acid (15.4%) as cell-wall peptidoglycan amino acids. The major fatty acids in KUDC0405^T were anteiso-C17:0 (35.2%), iso-C16:0 (16.3%), and iso-C17:0 (8.0%), and the minor components included iso-C15:0 (4.0%), C16:0 (2.6%), anteiso-C15:0 (2.5%), and C18:0 (1.5%). The cellular fatty acid profiles of KUDC0405^T and the most closely related reference strains are shown in the referenced table. M. bovistercoris NEAU-LLE^T and M. pseudoresistens CC-5209^T presented anteiso-C17:0 and anteiso-C15:0 as major fatty acids; anteiso-C17:0 is the major fatty acid across the genus Microbacterium. In conclusion, we suggest that strain KUDC0405^T represents a novel species of the genus Microbacterium, for which we propose the name Microbacterium elymi sp. nov.

Description of Microbacterium elymi sp. nov.
M. elymi (e'ly.mi. N.L. gen. n. elymi, of the plant Elymus tsukushiensis). Cells are Gram-stain-positive, catalase- and oxidase-positive, non-spore-forming, non-motile, facultatively anaerobic, and rod-shaped (0.3–0.4 × 0.7–0.8 μm). Colonies are smooth, circular, yellowish-white, and 0.4–3.0 mm in diameter on TSA after 2 days of growth. Optimal growth occurs at 25–30°C, at pH 7, and with 0.5–1.2% NaCl (w/v) on TSA. Strain KUDC0405^T produces siderophores and contains glycine, ornithine, alanine, and glutamic acid in its cell-wall peptidoglycan. The polar lipids are diphosphatidylglycerol, a glycolipid, phosphatidylglycerol, and an unidentified phospholipid; the major menaquinone is MK-12; and the major fatty acids are anteiso-C17:0 and iso-C16:0. The genomic DNA G+C content is 70.4%. The type strain, KUDC0405^T (=KCTC 49411^T =CGMCC 1.18472^T), was isolated from the rhizosphere of E. tsukushiensis
tsukushiensis collected from the Dokdo Islands, Republic of Korea. The GenBank/EMBL/DDBJ accession numbers for the partial 16S rRNA gene sequence and genome sequence of KUDC0405 T are MT071892 and CP091139, respectively, and the NCBI assembly accession number for the genome is GCF_021582895. General features of the de novo genome assembly are as follows: genome size, 3,610,832 bp; number of contigs, 1; coverage, 119.0×. The 16S rRNA gene sequence of the strain KUDC0405 T (1,490 bp) was determined as previously described . This strain exhibited the highest 16S rRNA gene sequence similarity (97.72%) with M. bovistercoris NEAU-LLE T , followed by M. pseudoresistens CC-5209 T (97.58%), M. resistens NBRC 103078 T (97.51%), M. oleivorans NBRC 103075 T (97.51%), M. testaceum NBRC 12675 T (97.51%), and M. paraoxydans NBRC 103076 T (97.30%). A comparison of the preliminary 16S rRNA gene sequences revealed that strain KUDC0405 T is related to members of the genus Microbacterium . In the Bayesian inference tree ( ), neighbor-joining phylogenetic tree ( ), maximum likelihood tree ( ), and maximum parsimony tree ( ), strain KUDC0405 T grouped with M. bovistercoris NEAU-LLE T and M. pseudoresistens CC-5209 T . KUDC0405 T exhibited a distinct phylogenetic lineage, indicating that it is a novel species belonging to the genus Microbacterium ( , ). The complete genome of strain KUDC0405 T consisted of a circular chromosome (3,610,832 bp). The genomic DNA G+C content was 70.4%, which is within the range reported for the genus Microbacterium . A total of 3,654 genes were identified, of which 3,018 were protein-coding genes and 52 were RNA genes (3 rRNA, 46 tRNA, and 3 ncRNA genes).
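As a rough illustration of the species-delineation logic applied above, the following minimal Python sketch checks the reported pairwise indices against the conventional thresholds (ANI ≈ 95%, AAI ≈ 95%, dDDH ≈ 70%); the numeric values are those given in the text, while the helper function and its name are ours, not part of the original study.

# Minimal sketch: apply conventional prokaryotic species-delineation
# thresholds to the pairwise genome indices reported for KUDC0405(T).
# The helper function is illustrative; values are from the text above.

THRESHOLDS = {"AAI": 95.0, "dDDH": 70.0}  # species boundaries (%)

comparisons = {
    "M. bovistercoris NEAU-LLE(T)":  {"AAI": 64.7, "dDDH": 17.3},
    "M. pseudoresistens CC-5209(T)": {"AAI": 65.0, "dDDH": 17.5},
}

def is_novel_species(indices):
    # A strain is delineated as a separate species when every available
    # index falls below its species-boundary threshold.
    return all(value < THRESHOLDS[name] for name, value in indices.items())

for relative, indices in comparisons.items():
    verdict = "novel species" if is_novel_species(indices) else "same species"
    print(f"KUDC0405(T) vs {relative}: {indices} -> {verdict}")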
Supplementary data for this paper are available online only at http://jmb.or.kr .
The disparity between funding for eye research vs. the high cost of sight-loss in the UK
01fdd31e-00bf-4133-9d0b-713c9d2c1405
9998460
Ophthalmology[mh]
Supplemental Video Impact of sight-loss and lack of funding for eye research in the UK
Effect of
359ed6af-e64b-4cd0-b630-a87447dd2ed5
9998864
Microbiology[mh]
Fermented meats are produced worldwide due to their sensory characteristics and convenience . A great variety of fermented meats constitutes an important part of cultural patrimony. Fermented meat products have a unique flavor and texture and a long shelf life under natural or artificial conditions, which are promoted by microorganisms . Meat protein is decomposed by microorganisms and enzymes to produce large amounts of amino acids, which improves the nutrition and flavor of fermented meat . For fermented meat, four flavor development routes are recognized: protein degradation, lipid oxidation, the Maillard reaction, and the action of microorganisms . However, with traditional natural fermentation it is difficult to standardize product quality and ensure safety. Therefore, artificial inoculation with selected microorganisms is used to improve product quality and simplify production, better ensuring the safety and stability of fermented meat. Lactic acid bacteria (LAB) are microorganisms widely used in the production of fermented meat products. LAB in fermented meats can inhibit the growth and reproduction of pathogenic and spoilage bacteria while also lowering the content of harmful substances such as nitrite . Lactiplantibacillus plantarum and Staphylococcus xylosus in fermented sausage promoted protein and fat decomposition while inhibiting the growth of pathogenic and spoilage bacteria, thereby preventing off-odors and rancidity, maintaining sausage quality, and improving flavor . Lactobacillus sakei , for example, can promote lipid hydrolysis, inhibit lipid autoxidation, and improve fermented flavor development . In terms of flavor development, LAB use carbohydrates as an energy source to produce organic acids. Moreover, LAB can promote protein decomposition, further positively affect amino acid metabolism, and promote the development of fermented meat flavor . The addition of LAB is beneficial for increasing the content of free fatty acids in fermented meat products, particularly by promoting the release of unsaturated fatty acids, providing basic materials for flavor development , , . Therefore, LAB play an important role as functional starters, yielding fermented meats with unique flavor and assured safety. Quorum sensing (QS) is a cell density-dependent communication mechanism mediated by QS molecules. QS molecules, also called autoinducers (AIs), enable bacteria to collectively regulate gene expression, thereby coordinating various activities . Moreover, biofilm formation of L. sanfranciscensis , cell adhesion of L. acidophilus , and environmental tolerance of gram-positive bacteria can be controlled by QS systems. AIs are classified into several types, with AI-2 being the only signal molecule produced by both gram-negative and gram-positive bacteria and used for intraspecific or interspecific communication . AI-2 is produced by S-ribosylhomocysteinase (LuxS), an enzyme found in many bacterial species that has been proposed to allow interspecies communication. The luxS gene has been identified in food-borne LAB, including Lactobacillus spp. and Bacillus spp. , as well as in food-borne pathogens . Our previous research has shown that the LuxS/AI-2 QS system can regulate LAB metabolism and influence physiological activities , . Therefore, AI-2 might affect the quality of fermented food by regulating the metabolism of LAB. However, the QS and AIs of LAB have seldom been investigated in fermented foods, and knowledge of LAB QS during meat fermentation remains preliminary and based on empirical observations.
The relationship between AI-2 and the quality characteristics of fermented sausage has not been reported previously. Limosilactobacillus fermentum 332, previously isolated from fermented food and showing good fermentation potential for meat products and high AI-2 activity, was used as the starter for fermented sausage in this study. The changes in the activity of the signal molecule AI-2 and in the quality of the fermented sausage during the manufacturing process were investigated. The potential relationship between LAB AI-2 and flavor in fermented sausage was explored preliminarily, laying the theoretical groundwork for the directed improvement of starter-strain performance and product quality in the food industry. LAB viable count and pH changes The change in LAB viable count is shown in Fig. A. The LAB viable count of the fermented sausage inoculated with L. fermentum 332 was significantly higher than that of the control on days 1, 5, and 11 ( p < 0.05). The LAB viable count was highest during the fermentation period (day 1), then decreased during the drying (day 5) and maturation (day 11) periods. As shown in Fig. B, the pH of the fermented sausage inoculated with starter culture decreased rapidly from 5.64 to 4.54 within 24 h, while the pH of the control decreased from 5.65 to 5.20. The pH of the inoculated group increased slightly after 24 h (day 1) of fermentation, rising to 4.82 by day 11. The fermented sausage itself contained some indigenous LAB, which caused the pH decrease in the fermented sausage without L. fermentum 332 (control). The pH of the control was lowest on day 5, then slightly increased to 4.99 on day 11. During the initial stage of fermentation, optimal temperature and humidity were provided for the growth of microorganisms. L. fermentum 332 grew and metabolized rapidly under these conditions, producing a large amount of lactic acid and other acids; therefore, the pH decreased rapidly and the acidity increased. Afterwards, the microorganisms in the meat interact with endogenous enzymes to further decompose proteins into free amino acids, ammonia, and basic amines, which slightly increases the pH value , . The pH of the inoculated group was significantly lower than that of the control during days 1–11 of fermentation ( p < 0.05). These results are consistent with other studies reporting that the pH of fermented sausage inoculated with LAB was lower than that of control sausage – . The results showed that L. fermentum 332 has a strong acidifying capacity, which effectively shortens the fermentation time. Whether the acidity change of fermented sausage caused by inoculation with L. fermentum 332 is related to QS requires further study. Color changes Sausage color is crucial for the marketability of the meat product because it influences its appearance and acceptability. The color indexes include lightness ( L* ), redness ( a* ), and yellowness ( b* ). L* indicates the brightness of the sausage: the higher the L* value, the brighter the sausage. a* indicates the degree of bright red color. Sausage products show an attractive rose color when L* and a* are high. The color changes of the fermented sausage are presented in Table . The L* of both groups of fermented sausage showed a downward trend. On days 1 and 5, the L* of the fermented sausage inoculated with starter culture was significantly higher than that of the control ( p < 0.05).
On day 11, the L* of the fermented sausage inoculated with starter culture was lower than that of the control ( p < 0.05). A similar result has been reported . The addition of L. fermentum 332 reduced the a w of the fermented sausage and dried its surface. The a* of the fermented sausage showed an upward trend. During the mature stage (day 11), the a* value of the fermented sausage inoculated with starter culture was significantly higher than that of the control ( p < 0.05). No significant difference was found in b* between the two groups; b* is an important variable related to lipids . The results indicated that the addition of L. fermentum 332 was beneficial in improving the color of the sausage. Previous studies have likewise reported that LAB improve color in fermented sausages , . Texture changes Texture indices included hardness, elasticity, adhesiveness, and chewiness, all of which were affected by the meat's ripening time. For both control and inoculated sausages, hardness, adhesiveness, and chewiness increased significantly with ripening time (Table ). The increase in hardness caused by sausage ripening was primarily due to water loss. These results were consistent with other reports , . In contrast, elasticity decreased over the ripening time. The instrumental texture results showed significant differences between the control and inoculated sausages. The addition of L. fermentum 332 significantly increased the hardness, adhesiveness, and chewiness and decreased the elasticity ( p < 0.05). This may be because L. fermentum 332 increased the degradation of proteins; a reduction in sulfhydryl content causes an increase in hardness . The results indicated that the addition of L. fermentum 332 changed the texture characteristics of the fermented sausage. TBARS and TVBN changes The TBARS value is the index most commonly used to assess lipid oxidation in meat products. Carbonyls, aldehydes, and hydrocarbons are the main TBARS components that contribute to off-aromas and off-flavors in meat products. As shown in Fig. A, the TBARS values of both fermented sausage groups gradually increased. The TBARS value of the fermented sausage inoculated with starter culture was significantly lower than that of the control from day 1 to 11 ( p < 0.05); with the addition of L. fermentum 332, the day-11 TBARS value was reduced from 0.255 to 0.186 mg/100 g. It has been reported that TBARS values were between 0.40 and 3.90 mg MDA/kg for vacuum-packed sausages . Organic sausages did not exceed the value of 3.0 mg/kg, which was used as an indicator of meat oxidative rancidity . In contrast to our findings, adding acid whey and probiotic strains to an experimental model fermented sausage had no effect ( p > 0.05) on TBARS values after 0, 90, and 180 days of storage when compared with the organic sample with sea salt . The disparity is most likely due to the different antioxidant activities of different strains. TVBN is the major product of bacterial protein decomposition in meat, where protein provides rich nutrition for microbial growth. The effect of L. fermentum 332 addition on TVBN in fermented sausage is shown in Fig. B. The TVBN content of both fermented sausage groups increased over time. The control's TVBN content ranged from 0.98 to 2.16 mg/100 g, while that of the fermented sausage inoculated with starter culture ranged between 0.62 and 1.61 mg/100 g.
From day 1 to day 11, the TVBN content of the fermented sausage inoculated with starter culture was significantly lower than that of the control ( p < 0.05). Therefore, inoculation with L. fermentum 332 delayed the increase of TVBN and improved the quality of the fermented sausage. Volatile flavor components changes The main sources of volatile flavor components are lipid oxidative decomposition, protein degradation and metabolism, and carbohydrate decomposition . Aldehydes, ketones, esters, acids, alcohols, and other substances are produced during the fermentation of sausage, and the quality of fermented sausage is affected by these various components. The effect of adding L. fermentum 332 on the volatile flavor components in fermented sausage is shown in Table . In the two fermented sausage groups, 121 volatile flavor compounds were detected, including 27 alcohols, 11 aldehydes, 37 esters, 6 ketones, 14 acids, 14 alkenes, 4 alkanes, 2 phenols, and 6 benzenes. The majority of the compounds found in fermented sausages have previously been reported , . These compounds are typically formed as a result of protein and lipid oxidation, amino acid metabolism, and carbohydrate catabolism . There were 95 types of volatile flavor substances detected in the control and 104 types in the fermented sausage inoculated with starter culture. The proportion of esters and alcohols in both groups' sausages was significantly higher than that of the other volatile flavor components. On day 1 (fermentation stage), 62 volatile flavor compounds were detected in the control and 65 in the fermented sausage inoculated with starter culture. Alcohols and esters were significantly higher in both number of types and content than the other classes ( p < 0.05), while aldehydes, ketones, acids, olefins, and other classes had relatively low contents. With the addition of L. fermentum 332, the contents of alcohols and esters in the inoculated sausage were significantly higher than in the control ( p < 0.05). Esters are created by the esterification of alcohols and acids; most of them have fruity and floral fragrances and contribute significantly to the flavor formation of meat products. As shown in Supplementary Tables and , eucalyptol, ethyl caproate, and ethyl octanoate showed higher contents than other compounds in both the inoculated sausage and the control. The addition of L. fermentum 332 increased the contents of ethyl caproate and ethyl octanoate in the fermented sausage, and both were significantly higher than in the control ( p < 0.05). The number of alcohol types in the inoculated sausage was also significantly higher than in the control. Aldehydes are flavor compounds found in fermented sausage. N -hexanal, the basic product of linoleic acid oxidation, which has a grassy odor and reflects the degree of fat oxidation, was found in the inoculated sausage but not detected in the control. Therefore, the addition of L. fermentum 332 increased the amount of flavor substances in the fermentation stage. On day 5 (drying stage), 54 types of volatile flavor compounds were detected in the control and 66 types in the inoculated sausage. The variety of volatile flavor compounds was lower in the drying stage than in the fermentation stage, which could be attributed to environmental changes.
On day 11 (mature stage), a total of 67 types of volatile flavor compounds were detected in the control and 73 types in the fermented sausage inoculated with starter culture. The types and concentrations of aldehydes in the inoculated sausage were significantly higher than those in the other processing stages ( p < 0.05), indicating a higher degree of lipid oxidation. Esters are volatile compounds that contribute to the distinct flavor of fermented meat . Esters accounted for 21.31% of the total volatile substances in the inoculated sausage, with 25 species, versus 18.1% and 23 species in the control. The types and contents of olefins in the inoculated sausage were also greater than in the control ( p < 0.05). Therefore, adding L. fermentum 332 improved the flavor of the fermented sausage. a w changes As shown in Fig. A, the addition of L. fermentum 332 to the fermented sausage significantly decreased a w compared with the control ( p < 0.05). The a w of the inoculated group decreased from 0.830 to 0.707, and that of the control decreased from 0.832 to 0.732, after 11 days of fermentation. The differences were due to the evaporation of surface moisture and water migration inside the sausages during fermentation . Our results were consistent with other reported results . Therefore, the addition of L. fermentum 332 can effectively reduce the a w of fermented sausage, thereby extending the shelf life of the product. AI-2 activity changes The change in AI-2 activity during the fermentation of sausage is shown in Fig. B. AI-2 activity was detected in the fermented sausage samples. It has been reported that the AI-2 activity detected in different types of Kimchi varies, which is related to the different LAB strains in Kimchi . AI-2 activity has also been detected in LAB strains isolated from fermented meat ; moreover, the addition of nitrate increased luxS gene expression. The AI-2 activity of the fermented sausage inoculated with L. fermentum 332 was significantly higher than that of the control on days 1, 5, and 11 ( p < 0.05). AI-2 activity was highest in the fermentation period (day 1), then decreased during the drying period (day 5) and maturation period (day 11). The change in AI-2 activity paralleled the change in LAB viable count, indicating that AI-2 activity was higher in the inoculated sausage and consistent with the viable count of the strain. QS is not only related to bacterial density but is also affected by the surrounding media . The results showed that the AI-2 activity and acid production of the inoculated sausage were both significantly higher than those of the control ( p < 0.05). This may be because the higher LAB viable count increased AI-2 activity, which in turn accelerated acid production by L. fermentum 332. In the fermentation stage, according to the analysis of the potential correlation between AI-2 activity and color changes, the brightness and redness of the inoculated group were significantly higher than those of the control ( p < 0.05). The addition of L. fermentum 332 thus improved the color of the sausage over the control. This may be because L.
fermentum 332 decreased the pH and promoted nitrite to combine with myoglobin to form nitrosomyoglobin. Meanwhile, the hardness and chewiness of the fermented sausage inoculated with starter culture were significantly higher than those of the control ( p < 0.05). At this point, the inoculated sausage displayed increased AI-2 activity, which is involved in many physiological and metabolic processes of LAB , . The increased AI-2 activity might promote the metabolism of L. fermentum 332, influencing the color and texture of the fermented sausage. The analysis of potential correlations between AI-2 activity, TBARS, and TVBN changes revealed that AI-2 activity was negatively correlated with TBARS and TVBN values: the AI-2 activity of the inoculated sausage was significantly higher than that of the control, while its TBARS and TVBN values were significantly lower. This suggests a potential correlation among AI-2 activity, lipid oxidation, and protein decomposition; the exact relationship needs further study. The potential correlation between AI-2 activity and volatile flavor components was also investigated: the number of types of volatile flavor substances in the inoculated sausage (where AI-2 activity was higher) was significantly greater than in the control ( p < 0.05). This showed that adding L. fermentum 332 increased the variety of volatile flavor components; therefore, AI-2 might take part in the formation of flavor substances, although the precise relationship between volatile flavor components and AI-2 activity warrants further investigation. In conclusion, inoculation with L. fermentum 332 improved the quality of fermented sausage. The relationship between AI-2 and the quality characteristics of fermented sausage had not been reported previously, and this study investigated that potential relationship. The AI-2 activity of fermented sausage increased with the inoculation of L. fermentum 332, accompanied by a decrease in pH, an improvement in color and texture, a decrease in TBARS and TVBN values, and an increase in volatile flavor substances. These changes, in general, affect the development of flavor compounds in fermented sausage. Therefore, AI-2 activity might influence the quality characteristics of fermented sausage. More research is needed to determine the mechanism underlying the effect of AI-2 activity on the quality characteristics of fermented sausage during fermentation.
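The correlation analysis referred to above can be reproduced in outline as follows. This is a minimal, hypothetical sketch assuming a simple Pearson correlation across the three sampling days; the numeric series are illustrative placeholders (only the day-11 TBARS value and the TVBN endpoints come from the text), and the study does not specify which correlation method it used.

# Minimal sketch of the AI-2 vs. TBARS/TVBN correlation analysis.
# Assumption: Pearson correlation over the sampling days (1, 5, 11).
# The series below are placeholders, not the study's raw data.
from scipy.stats import pearsonr

ai2_activity = [1.00, 0.75, 0.55]   # relative AI-2 activity (placeholder)
tbars        = [0.10, 0.15, 0.186]  # mg/100 g; day-11 value from the text
tvbn         = [0.62, 1.10, 1.61]   # mg/100 g; endpoints from the text

for name, series in [("TBARS", tbars), ("TVBN", tvbn)]:
    r, p = pearsonr(ai2_activity, series)
    print(f"AI-2 vs {name}: r = {r:.2f}, p = {p:.3f}")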
Strains and growth conditions Limosilactobacillus fermentum 332 was isolated from Chinese traditional fermented foods and kept in MRS broth supplemented with 20% (v/v) glycerol as frozen (−80 °C) stocks. Before use, it was transferred at least three times in MRS broth (Solarbio, Beijing, China) at 37 °C. Vibrio harveyi BB170 is a directionally mutated strain with an AI-2 receptor that can be used to measure AI-2 activity . V. harveyi BB170 (ATCC BAA-1117) was cultured at 30 °C with shaking after being transferred at least three times in autoinducer bioassay (AB) medium (Huankai, Guangdong, China) . Preparation of fermented sausages Fresh mutton hindleg meat and tail fat were obtained from a local commercial processor (Hohhot, China), and sausages were made with modifications to the method previously described . The sausage ingredients were as follows: mutton hindleg meat (70%), mutton tail fat (30%), salt (2.5%), glucose (0.5%), sugar (1%), NaNO 2 (0.01%), ascorbic acid (0.05%), pepper powder (0.2%), ginger powder (0.2%), spice powder (0.1%), corn starch (1%), and lactalbumin powder (0.5%). The ingredients were thoroughly mixed and filled into pig casings. The sausage diameter was approximately 3 cm, and the length was approximately 20 cm. The control was fermented sausage without starter culture, and the treatment was fermented sausage inoculated with L. fermentum 332 (starter culture concentration 4%, 10 6 CFU/g). Fermentation was carried out at 30 °C and 95% relative humidity for 24 h (day 1); this was the fermentation period. The sausages were then placed at 15 °C and 75–85% relative humidity for four days (day 5); this stage was regarded as the drying period. Finally, the sausages were transferred to 10 °C and 65% relative humidity for 6 days (day 11); this stage was regarded as the maturation period. After preparation, the samples were packed and stored at −20 °C until further analyses. The sausages in both groups were sampled at various fermentation times (days 1, 5, and 11) to determine AI-2 activity, LAB viable count, physicochemical characteristics, and volatile flavor components. LAB viable count of fermented sausages Plate counts were used to determine LAB viable counts according to the method previously described . Physicochemical characteristics of fermented sausages pH The sausage samples were homogenized with 10 times their mass of potassium chloride solution, and the filtrate was collected to measure the pH value using a PB-10 pH meter (Sigma-Aldrich, St. Louis, USA).
Color The sausage color was assessed using a TCP2 chromometer (Nanjing Bei Instrument Equipment Co., Ltd, Jiangsu, China). The lightness (L*), redness (a*), and yellowness (b*) values of each sample were measured. Texture The sausage sample was cut into 1 × 1 × 1 cm cubes, and the texture was assessed using a QTS texture analyzer (Food Technology Corporation, Los Angeles, USA). The hardness (g), elasticity (mm), and chewiness (g) values of each sample were measured. Thiobarbituric acid reactive substance (TBARS) To determine the degree of lipid oxidation, the TBARS of sausage samples was quantified. A 10-g minced sausage sample was mixed with 50 mL of 7.5% trichloroacetic acid (containing 0.1% ethylenediaminetetraacetic acid) and shaken for 30 min. Following that, 5 mL of the supernatant was filtered and mixed with 5 mL of 0.02 mol/L thiobarbituric acid solution at 90 °C for 40 min. After the mixed solution had cooled, 5 mL of chloroform was added. A multifunctional microplate reader was used to measure absorbance at 532 and 600 nm (BioTek Epoch, Vermont, USA). The following equation was used to calculate the TBARS value:

$$\mathrm{TBARS}\ (\mathrm{mg}/100\ \mathrm{g}) = \frac{A_{532} - A_{600}}{155} \times \frac{1}{10} \times 72.6 \times 100$$

Here, A 532 and A 600 are the absorbances (532 and 600 nm) of the assay solution. Total volatile basic nitrogen (TVBN) The TVBN content was determined using the method previously described with slight modifications. Of note, 5 g of the sausage sample was blended with 25 mL of distilled water and equilibrated for 30 min at room temperature. Filter paper was used to filter the solution. A 10-mL filtrate was made alkaline by adding 5 mL of 10 g/L magnesia and distilled for 5 min. A control of 10 mL of distilled water was also used. The distillate was collected in an Erlenmeyer flask containing 10 mL of 20 g/L boric acid aqueous solution and a mixed indicator made by dissolving 0.1 g of methyl red and 0.5 g of bromocresol green in 100 mL of 95% ethanol. The mixed solution was titrated with 0.01 mol/L hydrochloric acid solution. The TVBN content was calculated using the following equation:

$$\mathrm{TVBN}\ (\mathrm{mg}/100\ \mathrm{g}) = \frac{(V_1 - V_2) \times c \times 14}{m \times \frac{10}{100}} \times 100$$

Here, V 1 is the titration volume of the tested sample (mL), V 2 is the titration volume of the blank (mL), c is the actual concentration of hydrochloric acid (mol/L), and m is the weight of the sausage sample (g). Water activity Water activity ( a w ) was measured using an HD-3A water activity meter (Wuxi Huake Instrument Co., Ltd, Jiangsu, China). Volatile flavor components Volatile flavor components were assessed using the method previously described . The solid phase microextraction technique was used to extract the headspace volatile compounds. Of note, 5 g of the sausage sample was minced. Each sample was exposed to a solid phase microextraction fiber (DVB/CAR/PDMS 50/30 μm; 57328-U; Supelco, Bellefonte, PA, USA), and extraction was performed for 40 min at 60 °C. After extraction, the fiber was inserted into the injection port and desorbed for 3 min at 250 °C. A gas chromatography/mass spectrometry system was used to analyze the volatile compounds (TRACE 1300; Thermo Fisher Scientific, Waltham, MA, USA). The protocol was carried out exactly as previously described . As an internal standard, 2-methyl-3-heptanone was used. Volatile compounds were identified using mass spectra obtained from the NIST MS Search 2.0 library database.
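To make the two formulas above concrete, here is a minimal Python sketch implementing the TBARS and TVBN calculations exactly as written; the function names and the example inputs are ours and purely illustrative.

# Minimal sketch of the TBARS and TVBN calculations defined above.
# Function names and example inputs are illustrative only.

def tbars_mg_per_100g(a532, a600):
    # TBARS (mg/100 g) = (A532 - A600)/155 x (1/10) x 72.6 x 100
    return (a532 - a600) / 155 * (1 / 10) * 72.6 * 100

def tvbn_mg_per_100g(v1_ml, v2_ml, c_mol_per_l, m_g):
    # TVBN (mg/100 g) = [(V1 - V2) x c x 14] / (m x 10/100) x 100
    return (v1_ml - v2_ml) * c_mol_per_l * 14 / (m_g * 10 / 100) * 100

# Example: absorbances 0.055/0.015 give ~0.187 mg/100 g, matching the
# magnitude reported for the inoculated sausage on day 11.
print(f"TBARS: {tbars_mg_per_100g(0.055, 0.015):.3f} mg/100 g")
print(f"TVBN:  {tvbn_mg_per_100g(0.15, 0.10, 0.01, 5.0):.2f} mg/100 g")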
AI-2 activity of fermented sausages Minced fermented sausage and sterile distilled water were mixed at a ratio of 1:1 (w/v). The supernatant was collected after centrifugation at 12,000× g for 10 min, and its pH was adjusted to 7.0. Next, the supernatant was sterilized by filtration through a 0.22-µm bacterial filter (Linghang Technology Co., Ltd, Tianjin, China) and stored at −80 °C until further analyses. AI-2 activity was evaluated using V. harveyi BB170 as described previously . V. harveyi BB170 was grown in AB medium at 30 °C with shaking. Cells grown to an OD 595 of 0.7–1.2 were diluted 5000-fold in fresh AB medium (approximately 10 5 CFU/mL). The diluted V. harveyi BB170 was mixed with fermented sausage supernatant at a 100:1 (v/v) ratio. The mixture was shaken and cultured at 30 °C, and the luminescence of the samples was quantified using a VICTOR X Light Luminescence Plate Reader (Perkin Elmer, Waltham, USA). Statistical analysis All tests were repeated at least three times. Results are expressed as the mean ± standard error. Data analysis was performed using SPSS 1.0 software (IBM Corporation, Armonk, NY, USA). A t-test was used to compare significant differences ( p < 0.05) between the two groups of fermented sausages.
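Reporter luminescence from the V. harveyi BB170 bioassay is commonly converted into a relative AI-2 activity by normalizing each sample's reading to that of a negative (medium-only) control. The study does not state its normalization explicitly, so the short sketch below is an assumption, with hypothetical relative light units (RLU).

# Minimal sketch: express AI-2 activity as fold induction of V. harveyi
# BB170 luminescence relative to a negative (medium-only) control.
# Assumption: this normalization is not stated explicitly in the text.

def ai2_fold_induction(sample_rlu, control_rlu):
    # Relative AI-2 activity = sample luminescence / control luminescence.
    if control_rlu <= 0:
        raise ValueError("control luminescence must be positive")
    return sample_rlu / control_rlu

# Hypothetical readings for inoculated vs. uninoculated sausage extracts
print(ai2_fold_induction(sample_rlu=5.2e4, control_rlu=1.3e4))  # -> 4.0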
Of note, 5 g of the sausage sample was minced. Each sample was exposed to a solid phase microextraction fiber (DVB/CAR/PDMS, 50/30 µm; 57328-U; Supelco, Bellefonte, PA, USA), and extraction was performed for 40 min at 60 °C. After extraction, the fiber was inserted into the injection port and desorbed for 3 min at 250 °C. A gas chromatography/mass spectrometry system (TRACE 1300; Thermo Fisher Scientific, Waltham, MA, USA) was used to analyze the volatile compounds. The protocol was carried out exactly as previously described . As an internal standard, 2-methyl-3-heptanone was used. Volatile compounds were identified using mass spectra obtained from the NIST MS Search 2.0 library database.

Supplementary Table S1. Supplementary Table S2.
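As flagged in the TBARS and TVBN methods above, here is a minimal Python sketch of the two conversions; every absorbance, titration, and mass value below is hypothetical and chosen only to illustrate the arithmetic:

    # TBARS: lipid oxidation, expressed as mg malondialdehyde per 100 g sample
    a532, a600 = 0.25, 0.05                 # hypothetical assay absorbances
    tbars = (a532 - a600) / 155 * (1 / 10) * 72.6 * 100
    print(f"TBARS = {tbars:.2f} mg/100 g")  # -> TBARS = 0.94 mg/100 g

    # TVBN: total volatile basic nitrogen per 100 g sample
    v1, v2 = 0.60, 0.10                     # hypothetical titration volumes (mL)
    c, m = 0.01, 5.0                        # HCl concentration (mol/L), sample mass (g)
    tvbn = (v1 - v2) * c * 14 / (m * 10 / 100) * 100
    print(f"TVBN = {tvbn:.1f} mg/100 g")    # -> TVBN = 14.0 mg/100 g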
Value assessment of NMPA-approved new cancer drugs for solid cancer in China, 2016–2020
Innovations in cancer therapy, particularly the influx of new drugs, have raised high expectations among all healthcare stakeholders that treatment of the disease will be transformed . Nevertheless, the dramatic rise in drug costs has recently fueled a vigorous debate over whether cancer drug prices, especially those of targeted drugs and immunotherapies, are commensurate with their value to patients and within reach of those who need them, not only in developed countries but also in a developing country like China, with scarce resources and rising demand for health services . It is all too easy to know the price of everything and the value of nothing. It has never been more important to assess the value of new cancer drugs, and several organizations including the American Society of Clinical Oncology (ASCO), European Society for Medical Oncology (ESMO), Institute for Clinical and Economic Review (ICER), National Comprehensive Cancer Network (NCCN), and the pan-Canadian Oncology Drug Review Expert Review Committee (pCODR-ERC) have recently taken a step forward in this endeavor, developing tools for value assessment . All these tools have been designed to weigh the balance between efficacy, toxicity, quality of life, and costs. Despite their different conceptual definitions of "value," Bentley et al. reported that the ASCO and ESMO tools demonstrated convergent validity and inter-rater reliability for the value assessment of new cancer drugs. In recent years, regulatory reforms have led to the introduction of a series of expedited programs to accelerate the development, review, and approval of new drugs in China . Here, we provide an overview of the landscape of new cancer drugs approved by the NMPA for solid cancer between 2016 and 2020 in China, describe the value of these drugs, and further explore whether value is related to drug price.

Data sources and extraction We used publicly available data to identify all new drugs (new molecular entities and novel biologic agents) approved by China's National Medical Products Administration (NMPA) between January 1, 2016 and December 31, 2020, with initial indications for solid tumors. Meanwhile, we assessed whether each drug was granted one of the expedited programs among NMPA pathways and designations to accelerate drug approval (special review, priority review, conditional approval, urgently needed overseas drugs, and breakthrough therapy). Notably, drugs that were later approved for additional indications were not considered in this study. The launch price and postlaunch price of each drug were extracted from the trade name and generic name recorded in the Hospital Information System (HIS). To estimate the monthly treatment cost of a drug, we used the prescription and dosing information from the NMPA-approved label. Monthly treatment costs were calculated over an average of 30 days on the basis of the dosage schedule for an adult patient weighing 60 kg with a body surface area of 1.70 m^2. The cost of all regimens was adjusted to provide the price per 4-week period (a 33.3% increase for 3-week treatment cycles and a 100% increase for 2-week treatment cycles); a worked sketch of this normalization follows. Drug prices were converted to US dollars at the exchange rate as of August 29, 2022.
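To make the 4-week price normalization described above concrete, here is a minimal Python sketch; the function name and the per-cycle prices are our own illustrative assumptions, not figures from the study:

    # Standardize a per-cycle drug cost to a 4-week (28-day) treatment period.
    # Scaling by 4/cycle_weeks reproduces the stated adjustments:
    # +33.3% for 3-week cycles, +100% for 2-week cycles, no change for 4-week cycles.
    def four_week_cost(cost_per_cycle_usd: float, cycle_weeks: int) -> float:
        return cost_per_cycle_usd * 4 / cycle_weeks

    print(four_week_cost(3000.0, 3))  # 4000.0: $3,000 per 3-week cycle -> $4,000/4 weeks
    print(four_week_cost(2500.0, 2))  # 5000.0: $2,500 per 2-week cycle -> $5,000/4 weeks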
To quantify the clinical benefit from the pivotal clinical trials supporting regulatory approval, we applied two value frameworks developed by ASCO and ESMO, namely the American Society of Clinical Oncology Value Framework (ASCO-VF) version 2 , and the European Society for Medical Oncology Magnitude of Clinical Benefit Scale (ESMO-MCBS) version 1.1 . Scores were assessed by one reviewer and checked by a second, with any discrepancies resolved by a senior reviewer. In contrast to ESMO-MCBS, ASCO-VF was not designed to score single-arm studies and was therefore only applicable to phase II or III randomized clinical trials. In cases in which multiple pivotal clinical trials had been done and yielded different clinical benefit scores for a given drug, the highest score was considered. Consistent with the developers of the value frameworks, meaningful clinical benefit was defined as a grade of A or B (for the curative setting) or 4 or 5 (for the palliative setting) using ESMO-MCBS, whereas ASCO-VF did not clearly define what score constituted "meaningful value." Cherny et al. recommended an optimal threshold score of 45 or higher for recognizing substantial benefit with ASCO-VF, derived by generating receiver operating characteristic (ROC) curves. Nevertheless, given the differences in construction and goals of ASCO-VF and ESMO-MCBS, the two might yield some discordance in a cohort of studies. Thus, we used the 75th percentile of ASCO-VF scores as the cutoff for subsequent analyses, mirroring the ESMO-MCBS definition of meaningful value as a grade of 4, 5, B, or A .

Statistical analysis All data were collected in an Excel file designed for this study. Statistical analysis was conducted in IBM SPSS 25.0. Continuous data were graphed and analyzed to assess the normality of the underlying distribution. Spearman's correlation coefficient was used to describe the association between launch prices and clinical benefit according to ESMO-MCBS and ASCO-VF. We generated a ROC curve to establish a discrimination threshold of ASCO-VF scores for meeting ESMO-MCBS criteria (a short sketch of this procedure follows). P < 0.05 was deemed statistically significant.
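A minimal sketch of the ROC-based threshold search follows. The study does not state which operating-point criterion was used, so Youden's J below, scikit-learn as the tool, and all of the data are our assumptions:

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Hypothetical cohort: y = 1 if the drug meets ESMO-MCBS meaningful-benefit criteria
    y = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1])
    asco_vf = np.array([12.0, 25.5, 33.0, 28.0, 47.5, 58.4, 30.1, 62.0, 19.0, 41.0])

    fpr, tpr, thresholds = roc_curve(y, asco_vf)
    best = thresholds[np.argmax(tpr - fpr)]  # Youden's J statistic (assumed criterion)
    print(f"AUC = {roc_auc_score(y, asco_vf):.3f}, ASCO-VF threshold ~ {best:.1f}")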
Number and characteristics of new drugs From 2016 to 2020, 52 new cancer drugs received initial regulatory approval by the NMPA, 37 (71%) of which were approved for treating solid tumors and 15 (29%) for treating hematologic cancers . Because data from the pivotal clinical trials of nine drugs were incomplete or unavailable, only 28 drugs with prices and pivotal trial data were included in the subsequent analyses. The most common indications were non-small-cell lung cancer ( N = 8, 29%) and breast cancer ( N = 6, 21%) . Of these, 23 drugs were imported from abroad and five were domestic. Furthermore, 24 had benefited from at least one expedited program, most commonly priority review and special review.

Clinical benefit of new drugs For the new drugs used for treating solid tumors, the median ASCO-VF score was 43.3 (interquartile range, 27.1–58.35; range −20 to 110.1), and the scores were normally distributed ; 14 drugs fell below the median and 14 were above it. We used the 75th percentile of ASCO-VF scores, 58.35, as the cutoff score for "meaningful clinical benefit". Seven drugs were above this threshold whereas 21 (75%) fell below it. By the ESMO-MCBS, 13 drugs met the criteria for meaningful benefit. Three (27%) of the 13 drugs meeting ESMO-MCBS thresholds were above the 75th percentile of ASCO-VF scores (58.35).
For drugs in the palliative setting, of the 19 that did not meet the ASCO-VF cutoff score, only 12 also fell below the ESMO-MCBS criteria. For drugs in the curative setting, all four (100%) met ESMO-MCBS thresholds, yet only one was above the ASCO-VF cutoff score. The clinical benefit results are shown in .

Association between ASCO-VF and ESMO-MCBS A ROC curve was used to establish a discrimination threshold for the ASCO-VF score to meet the ESMO-MCBS criteria of meaningful clinical benefit, and the threshold was determined to be approximately 31. However, the area under the curve was 0.662, suggesting only fair predictive value . Agreement between ASCO-VF and ESMO-MCBS thresholds was only fair (κ = 0.515, P < 0.05).

Correlation between price and value of drugs In China, the median monthly treatment cost per patient at launch for the included cancer drugs was $4,381. As of August 25, 2022, the median monthly treatment cost was $1,408, indicating that postlaunch prices for most NMPA-approved cancer drugs had fallen to roughly one-third of launch prices . No statistically significant associations between the launch prices of drugs approved for solid tumors and clinical benefit were observed according to either framework . For ASCO-VF, launch prices had only a weak correlation with clinical benefit (Spearman's ρ < 0.30; P > 0.05). Likewise, the launch prices of new cancer drugs and ESMO-MCBS grades were only weakly correlated (Spearman's ρ < 0.30; P > 0.05).
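For readers who want to reproduce this kind of price-versus-benefit analysis, a minimal Python sketch of the rank correlation follows; the price/score pairs are fabricated for illustration and do not come from the study's dataset:

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical launch prices (USD per 4 weeks) and ASCO-VF scores
    launch_price = np.array([5200, 4400, 6100, 3800, 4900, 7200, 3500, 5600])
    asco_vf      = np.array([43.3, 27.1, 58.4, 12.0, 47.5, 33.0, 62.0, 19.0])

    # Prints Spearman's rho and its P value for the fabricated pairs
    rho, p = spearmanr(launch_price, asco_vf)
    print(f"Spearman's rho = {rho:.2f}, P = {p:.2f}")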
To the best of our knowledge, this is the first study in China to comprehensively evaluate the value of new cancer drugs using ASCO-VF and ESMO-MCBS, and to investigate the correlation between the price of new drugs and their clinical benefits. In our review of all new cancer drugs approved by the NMPA for solid cancer between 2016 and 2020, approximately half of the new drugs achieved meaningful clinical benefit according to ESMO-MCBS. We found that the new drugs had a wide range of ASCO-VF scores and that the association between ASCO-VF and ESMO-MCBS was only fair, consistent with previous studies . About three-quarters of the new drugs were listed in the National Reimbursement Drug List (NRDL). In contrast to the increasing prices of cancer drugs in the years after approval in the US, the daily treatment cost of cancer agents has fallen in China, especially for targeted therapies and branded products . However, we found no significant correlation between price and clinical benefit according to the two frameworks. With rising demand for health services in China, prices should be better aligned with value, especially for expensive cancer drugs. Nearly half of the cancer drug indications approved in China have shown an OS benefit . The lack of a clear association between price and clinical benefit indicates that value frameworks can help not only to identify drugs with low or uncertain clinical benefit that should be targeted for price negotiations, but also to identify therapies with evidence of higher clinical benefit so that access to beneficial drugs can be improved . In 2015, China's government proposed to establish an open and transparent price negotiation mechanism with multi-party participation for some patented, high-priced drugs and exclusively produced drugs. In the same year, the first round of national-level drug pricing negotiations was launched . The dimensions of national price negotiation for cancer drugs include the value of the drugs and the affordability of China's healthcare system funds. China's government has substantial bargaining power. Since 2017, the government has conducted annual centralized price negotiations, which have sharply reduced drug prices compared with launch prices, resulting in increased affordability of expensive cancer drugs . However, as new clinical trials are conducted, results of post-approval trials may lead to dynamic changes in value framework scores. Moreover, the qualification criteria for medical insurance payment tend to be strict. For new indications of cancer drugs, patients' out-of-pocket spending has not been reduced, increasing their financial toxicity. Therefore, the government needs to routinely monitor the impacts of shifts in medicine on resource utilization. This study has several limitations.
Most breakthrough-designated drugs were approved based on single-arm or non-randomized trials, so the level of evidence for breakthrough therapies was inferior to that for non-breakthrough therapies when assessing value through the value frameworks . Hence, assessing clinical benefit in this situation may be unfair to such drugs. Furthermore, treatment effects are known to be heterogeneous, and some patients can benefit greatly from drugs with a low value score . A further limitation is that the study did not evaluate the value of all new drugs. It is complicated to precisely define the value of a drug, and our assessment relied entirely on data reported from clinical trials, without taking into account other factors that may influence a drug's value. Secondly, the association between ASCO-VF and ESMO-MCBS was only fair based on our findings. The ASCO-VF and the ESMO-MCBS share the goal of assisting clinicians and patients in measuring the relative benefits of new cancer drugs . Nevertheless, the frameworks' inherent designs differ, especially in their methods and indicators, resulting in large divergences in scores and grades . In addition, we used the 75th percentile of ASCO-VF scores as a threshold for comparisons; changing this cutoff score would influence the degree of correlation between the two value frameworks. Meanwhile, we did not consider the duration of treatment when calculating the cost of a drug. However, most of the drugs were used in the palliative setting, where treatment continues as long as a response persists, so monthly drug costs could reasonably be used. Furthermore, we did not investigate whether all cancer drugs approved by the NMPA were also approved in other countries. Therefore, our study should be followed by further work to comprehensively assess value in other countries, especially emerging or developing countries, to improve access to oncology medicines through value assessment. In summary, ASCO-VF and ESMO-MCBS are important tools for assessing the value of cancer drugs, although the correlation between these frameworks is only fair. Based on the available evidence, not all new drugs met the meaningful-benefit threshold according to ASCO-VF or ESMO-MCBS. The price of a drug was not significantly related to the level of clinical benefit, and the cost could not be justified by its value. Policy makers need to improve the alignment between drug prices and clinical benefits in order to provide optimal cancer treatments for patients. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. Conceptualization and writing—review and editing: JL and QJ. Methodology: JL and SO. Software and writing—original draft preparation: JL. Data curation: HW, XQ, and RP. Formal analysis: JL, SO, and SW. Visualization: HW and XQ. Supervision and funding acquisition: QJ. All authors have read and agreed to the published version of the manuscript.
Molecular diagnostics in the evaluation of thyroid nodules: Current use and prospective opportunities
Thyroid cancer is the most common endocrine malignancy, with an estimated 43,800 expected new cases diagnosed in 2022, and represents the 7th most common cancer in women . Thyroid cancer almost always presents as a thyroid nodule, and thyroid nodules are very common, with over 60% of the population having one or more by the time patients reach their 7th and 8th decades of life . However, only 5-15% of thyroid nodules harbor thyroid malignancy. Fine needle aspiration cytology (FNAC) is the foundation for diagnosis of nodules that meet criteria for biopsy, and a Bethesda II (BII - benign) or Bethesda VI (BVI - malignant) cytology result has excellent accuracy and correlation with final histopathology upon surgical resection . BII cytology predicts benign histology 97% of the time or greater and BVI cytology confers a risk of malignancy up to 99% . The primary challenge in the evaluation of thyroid nodules occurs in the setting of Bethesda III (BIII) or Bethesda IV (BIV) cytology, often grouped together as indeterminate thyroid nodules (ITN). Approximately 20-25% of thyroid nodule aspirates result in ITN cytology . The risk of malignancy of BIII and BIV ITN ranges from 6-40% depending on the institution and the categorization of noninvasive follicular thyroid neoplasm with papillary-like nuclear features (NIFTP) as benign or malignant . Historically, consensus guidelines recommended surgery, often in the form of a thyroid lobectomy, for definitive diagnosis of ITN since it is often not possible to differentiate between benign and malignant nodules by cytology alone . This approach is sub-optimal given the cost, possible morbidity, and need for thyroid hormone replacement in a subset of patients after lobectomy and all patients after total thyroidectomy, especially since ~75% of ITN will prove to be benign on final histopathology . The utilization of transcriptional signatures and discovery of driver mutations promoting thyroid cancer development and influencing its behavior provided the molecular foundation for improved diagnostic accuracy in ITN . As will be described, molecular diagnostics has moved beyond aiding in diagnosis and can provide information on tumor prognosis . The goal of this review is to provide an update on commercially available lab developed molecular diagnostic tests for use in nodular thyroid disease. The contemporary clinical use, advantages, and disadvantages, as well as future potential applications will be discussed. A brief review of test sensitivity (SN), specificity (SP), negative predictive value (NPV), and positive predictive value (PPV) is warranted to promote appropriate understanding and scrutiny of molecular diagnostic performance metrics . SN is a calculation of the number of true positives (for this topic, the patient has thyroid cancer, and the molecular test reports a positive finding) divided by all the patients with thyroid cancer (who have true positive plus false negative test results). A low SN indicates thyroid cancers have been missed (called negative or benign) by the molecular marker test. Alternatively, SP is a calculation of the true negatives (the patient does not have thyroid cancer and the test is negative) divided by all the patients without thyroid cancer (true negative plus false positive test results) . Clinically, NPV and PPV are better indicators of a test's ability to rule out or rule in disease, respectively.
NPV is a calculation of the true negatives divided by all the patients with a negative test result (true negatives and false negatives). PPV is a calculation of the true positives divided by all the patients with a positive test result (true positives and false positives) . At any given SN and SP, both NPV and PPV are affected by the disease prevalence in the population, such that a higher disease prevalence will result in a higher PPV and lower NPV than in a population with a lower prevalence of disease (a short numeric sketch at the end of this passage illustrates this dependence). Other measures of diagnostic performance include overall accuracy, which is the proportion of correctly identified patients (true positive and true negative results) relative to the entire cohort, and likelihood ratios, the probability of the expected test result in those with thyroid cancer as compared to the same result in those without . It is critical that a thyroid nodule molecular diagnostic test is validated with a high-quality study that ideally is prospective, multi-center, with blinded central histopathologic review. A prospective validation study reduces clinical decision-making bias regarding who enters the study cohort and who has surgery. A multi-center study with blinded histopathology review confirms the gold standard presence or absence of disease in a broad and representative population which aids in reliable SN and SP calculations. Finally, all patients enrolled in the study must have surgery so the prevalence of thyroid cancer in the studied cohort can be known and utilized to calculate the NPV and PPV. The utilization of molecular diagnostics has rapidly advanced over the last 10-15 years with some older generation tests maintaining a presence for use and others being replaced by next generation sequencing (NGS) platforms. A brief review of older and currently unavailable molecular tests is presented, primarily to provide context for assessing the currently available tests. The identification of the BRAF V600E mutation in papillary thyroid carcinoma (PTC) in 2003 was one of the earliest molecular signatures correlating a molecular variant with final histology . BRAF V600E is a highly specific yet poorly sensitive marker for thyroid cancer, especially in ITN, where it is now known that BRAF V600E is present in <10% of molecularly tested ITN aspirates . Thus, research began into mutation panels that raise test SN to detect more malignant nodules. One of the first studies was a prospective multi-institutional study evaluating BRAF V600E, BRAF K601E, mutations of NRAS , KRAS and HRAS gene codons as well as RET/PTC 1/3 rearrangements and PAX8/PPARγ fusions . This panel showed a high specificity, with 97% of mutation-positive nodules representing histologically malignant tumors, yet only a 62% sensitivity, as not all malignancies carried variants or fusions detected by the panel. In 2012, the clinical validation study of the Afirma ® Gene Expression Classifier (GEC) was published. The Afirma GEC combined mRNA expression on a 167-gene microarray platform with machine learning, with the goal of reliably predicting benign nodules among those with ITN cytology to rule out thyroid cancer and avoid unnecessary surgery . This was a prospective, multi-center study with blinded central histopathology review and reported a high sensitivity of 90% and a high negative predictive value of 94% (95% confidence interval [CI], 87-98) across BIII and BIV nodules. By virtue of the test design, with its emphasis on ruling out thyroid cancer, the specificity and PPV were relatively low.
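To illustrate the prevalence dependence flagged above, here is a minimal Python sketch applying the standard Bayes-style identities; the sensitivity, specificity, and prevalence values are illustrative only and are not taken from any particular validation study:

    # PPV and NPV as functions of sensitivity, specificity, and disease prevalence
    def ppv_npv(sens: float, spec: float, prev: float) -> tuple[float, float]:
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    for prev in (0.05, 0.25, 0.40):  # low vs typical ITN cancer prevalence
        ppv, npv = ppv_npv(sens=0.90, spec=0.70, prev=prev)
        print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
    # -> prevalence 5%: PPV 14%, NPV 99%
    # -> prevalence 25%: PPV 50%, NPV 95%
    # -> prevalence 40%: PPV 67%, NPV 91%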
Because the GEC was the first rule-out test, there was caution regarding the possibility of false-negative results among potentially more aggressive cancers. Knowing that the standard treatment of ITN nodules was surgery, a Hurthle cell cassette was included with the GEC to intentionally call most Hurthle samples as GEC suspicious. As a result, the overall specificity of the GEC amongst Hurthle cell lesions was only 12% . The acceptance and comfort with rule-out testing amongst physicians and the need for a higher benign call rate and PPV, combined with scientific advances and reduced costs of next-generation sequencing, prompted development of the Afirma Genomic Sequencing Classifier (GSC) . Thyroseq ® has evolved with multiple iterations expanding the number of molecular variants identified, from the initial 7-gene panel (targeted variants in 4 genes and 3 gene fusions), to a targeted NGS platform including 12 genes in version 1, to 14 genes analyzed for point mutations and 42 types of gene fusions in version 2 . Thyroseq v2 data was published in 2013 and the expanded Thyroseq v2.1 panel data was published in 2015. The reported test performance of Thyroseq v2.1 was a sensitivity of 90.9% [CI 78.8–100], specificity of 92.1% [CI 86.0–98.2], positive predictive value of 76.9% [CI, 60.7–93.1], and negative predictive value of 97.2% [CI 78.8–100], with an overall accuracy of 91.8% [CI, 86.4–97.3]. These earlier versions of Thyroseq NGS panels were not tested in prospective, multi-center studies with blinded histopathologic review . As will be described, Thyroseq v3, the current commercially available testing platform, further expanded the number of molecular variants and fusions tested. Other molecular tests that were used for the preoperative diagnosis of ITN included a combined miRNA and somatic gene mutation panel from Asuragen ® (available ~2010-2014) and the micro-RNA (miRNA) classifier RosettaGX ® Reveal (available from ~2016-2018) . Neither is currently commercially available. The incorporation of thyroid nodule molecular diagnostic testing into clinical practice bears some discussion. Thyroid nodule biopsies can occur in outpatient clinics, pathology departments, radiology suites, and rarely in an inpatient setting. Each practice, institution, and location presents opportunities and challenges. One consideration is whether to utilize a "collect on all" protocol where a sample for molecular marker testing is collected at the time of a thyroid nodule's initial FNA. Alternatively, patients can be asked to return for a repeat FNA for collection of a sample for molecular testing after an indeterminate cytology result. Given most biopsies are read as definitively benign or malignant (approximately 75%), allowing a patient to avoid unnecessary needle passes is reasonable. However, the inconvenience of taking more time from work or away from home, additional copays, and a repeat of the FNA preparation and procedure argues for collecting a molecular marker sample at the time of the initial FNA in the event of an ITN result. Most patients will be in favor of getting all samples collected at once in lieu of returning for a second procedure if given the option. Collecting on all patients does require tracking of specimens, a timely send out of material upon receipt of an ITN result, and discarding of unused samples to free up space for future samples. This does require dedicated organization and effort.
Currently, the Afirma and Thyroseq testing platforms both allow for centralized cytology diagnosis (at Thyroid Cytology Partners and CBL Path, respectively) with reflex send out of collected molecular samples upon an ITN result. ThyGeNEXT/ThyraMIR ® (MPTX) offers cytology reads via a partnership with Dianon Pathology. In a community practice setting, a transition from decentralized thyroid FNAs in radiology practices, with separate cytology reads at individual centers, to a centralized collection for cytology and molecular markers resulted in a decrease of ITN from 24% to 10% and a reduction in diagnostic surgeries from 24% to 6% . If onsite cytology assessment is available, this may represent the best model. At the time of the FNA, rapid on-site evaluation can be made to determine cytology adequacy, diagnosis, and the need for extra needle passes for dedicated material for molecular testing while a patient is prepped and waits. This practice can reduce nondiagnostic aspirates and improve diagnostic accuracy . The logistics of this practice demand an integrated clinic model with enough pathology personnel to create cytology slides and have a rapid read. This is not feasible in many, if not most, clinical settings. Slide scraping, the collection of thyroid follicular cells from cytology slides with the aid of microscope-assisted microdissection, presents a convenient methodology for running some molecular tests on cytology smears when the patient has not had access to molecular diagnostics or there was no collection of a molecular sample at the time of initial FNAC. The Afirma platform does not offer slide scraping while MPTX and Thyroseq do offer this collection method. Though convenient, there are limitations to slide scraping relative to collecting a fresh sample. In the MPTX validation study, 18% of slides failed to provide adequate nucleic acid quantity to run the assay . In the Thyroseq validation of slide scraping, Diff-Quik stained smears were inadequate 35% of the time though all Papanicolaou-stained smears were informative . Of greater concern than assay failure are the discordant results between microdissected cytology smears relative to a fresh FNAC placed in its respective nucleic acid protection/storage buffer. There was 11% discordance for miRNA with the ThyraMIR portion of MPTX, and 14% of copy number alterations, along with 17% of fusions, were missed (false negatives) on Thyroseq slide scraping compared to a fresh sample from FNAC . Clinicians should consider the discussion point regarding the use of slide scraping for Thyroseq, "the collection of a portion of a fresh FNA sample directly into a nucleic acid preservative solution should be attempted whenever possible because this provides the highest success rate and accuracy of testing" . Molecular testing has become a more commonly utilized tool in the clinical setting to help provide additional risk information for ITN. Ideally, the results of the molecular test shift the risk of malignancy (ROM) from ~25% with ITN cytology to risks that help determine which patients will benefit from conservative surveillance versus definitive surgical intervention . Molecular testing platforms have evolved with technical advancements coming in the form of expanded genomic information and improved test performance. As of this writing, the three most used molecular tests in the United States include the Afirma Genomic Sequencing Classifier (Afirma GSC), ThyGeNEXT/ThyraMIR (MPTX), and Thyroseq v3 (TSv3).
Each molecular test is performed using a different method; however, all three aim to provide the clinician with accurate and precise information concerning patients' risk of nodule malignancy. To our knowledge there is no widespread use of these molecular markers outside of the United States. There is limited use in certain provinces of Canada as well as sporadic use in South America and Europe, almost universally without national healthcare or insurance support. The Afirma GSC uses next generation RNAseq and whole transcriptome analysis combined with machine learning algorithms to provide a benign or suspicious result in nodules with ITN . MPTX is a multiplatform test approach that combines a next generation targeted sequencing panel (ThyGeNEXT) with a microRNA risk classifier test (ThyraMIR) . TSv3 is a targeted next generation sequencing test that evaluates point mutations, gene fusions, copy number alterations and abnormal gene expression in 112 thyroid cancer related genes. A high-quality diagnostic test validation study that is prospective, blinded, multi-center and representative of the intended test population is critical to provide confidence in the test performance. Post-validation real-world studies are important for increasing confidence in a test's performance and providing evidence of benefit in clinical practice outside of the controls of a validation study. MPTX screens samples with the ThyGeNEXT NGS panel, which includes selected DNA mutations in the following genes: ALK, BRAF, GNAS, HRAS, KRAS, NRAS, PIK3CA, PTEN, RET and TERT promoter genes. The following gene fusions are detected by analysis of RNA: ALK , BRAF, NTRK, PPARγ, RET and THADA . If a strong driver mutation is detected, the sample is considered positive. If the sample has a weak oncogenic driver mutation or no mutation, it is further risk stratified using the microRNA classifier (ThyraMIR). The initial ThyraMIR panel included 5 growth-promoting miRNAs (miR-31, -146, -222, -375, -551) and 5 growth-suppressing miRNAs (miR-29, -138, -139, -155, -204). MPTX results are ultimately reported as one of three categories (negative, moderate, or positive) based on results of the combined ThyGeNEXT mutation panel and ThyraMIR microRNA risk classifier thresholds (this two-step triage is sketched in simplified pseudocode below) . The MPTX has been analytically validated, and the clinical validation study was a retrospective, blinded multicenter study . Unanimous histopathology consensus was not met in 19% of cases, which were excluded from analysis. MPTX results for 197 subjects with ITN were categorized as positive, moderate risk, or negative for malignancy from a cohort with a 30% disease prevalence. Moderate risk was assigned to 28% of the cohort, who are estimated to have the same ROM as the baseline cancer prevalence of 30%. When the moderate risk patients were found to have malignant histology, they were assigned as true positives. When the moderate risk patients were found to have benign histology, they were assigned as true negatives. Thus, the moderate risk groups were categorized in a way that bolsters overall test SN and SP (more true positives or true negatives than defined by the positive or negative groups alone). However, the moderate risk subjects/results were not used in the PPV and NPV calculations.
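Before turning to the adjusted validation results, the MPTX two-step triage described above can be summarized in a short Python sketch. This is a simplified paraphrase of the published decision flow; the category names are ours, and the actual mutation weighting and proprietary ThyraMIR thresholds are reduced to placeholder inputs:

    # Simplified MPTX-style triage: ThyGeNEXT driver status first, ThyraMIR second
    def mptx_result(driver: str, mirna_risk: str) -> str:
        if driver == "strong":               # a strong oncogenic driver is decisive
            return "positive"
        # weak or absent driver: the microRNA risk classifier decides
        return {"high": "positive", "moderate": "moderate", "low": "negative"}[mirna_risk]

    print(mptx_result("strong", "low"))      # positive
    print(mptx_result("none", "moderate"))   # moderate
    print(mptx_result("weak", "low"))        # negative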
Finally, based on concerns that the proportion of histologic subtypes within the studied cohort was inconsistent with the published literature, a prevalence adjustment calculation was made to match the reported proportions of adenomas, malignant subtypes and NIFTP as reported by the TSv3 validation study . Bearing these considerations in mind, the results showed 95% SN [CI, 86-99] and 90% SP [CI, 84-95] for disease. Negative MPTX results ruled out disease with 97% NPV while positive MPTX results ruled in high-risk disease with a 75% PPV. An updated ThyGeNEXT panel improved strong driver mutation detection by 8%, with BRAF V600E and TERT promoter mutations being the most common. Additionally, this newer panel detected coexisting drivers 4% more often, TERT being the most common and often paired with RAS . A pairwise analysis of miRNA to detect medullary thyroid cancer (MTC) showed 100% accuracy in a study of 4 MTC and 26 non-MTC samples . Finally, MPTX has recently been updated with the addition of miR-21 and an interdependent pairwise microRNA expression analysis (MPTXv2). This updated MPTX platform was tested on the same cohort as the original validation study population. The results showed a decrease in the moderate-risk cohort from 28% to 13% (p < 0.001) and a reported improvement in PPV to 96% (from 74%) and NPV to 99% (from 95%) (p=NS for both) . There have been no completely independent research studies to assess the MPTX performance. In one analysis of pediatric lesions comprising 66 malignant and 47 benign tumors, MPTXv1 performed with 70% SN and 96% SP . The Afirma GSC samples are initially tested for RNA quantity and quality. Sufficient samples are tested against initial classifiers to detect parathyroid tissue, MTC, BRAF V600E variants and RET/PTC 1 and RET/PTC 3 fusions. Recently, the validation of the MTC classifier of the Afirma GSC showed 100% SN and 100% SP in a cohort of 21 MTC and 190 non-MTC lesions . If all the classifiers are negative and there is adequate follicular content, the GSC ensemble model relies heavily on differential gene expression of > 10,000 genes for sample classification into GSC-B (benign) or GSC-S (suspicious) results. The Afirma GSC clinical validation study was based on a cohort of ITN samples collected prospectively from multiple community and academic centers for the Afirma GEC validation . All patients underwent surgery without known genomic information and all samples were assigned a histopathology diagnosis by an expert panel blinded to all genomic information. The results showed (at a 24% cancer prevalence): SN - 91% [CI, 79-98], SP - 68% [CI, 60-76], NPV - 96% [CI, 90-99], PPV - 47% [CI, 36-58] . Since the validation study, 14 independent real-world studies have been published and in aggregate show a significant improvement in performance over the Afirma GEC, primarily with improved specificity and a higher benign call rate (BCR) of 65% (as compared to 54% with the Afirma GEC) . As expected, some of these studies have also demonstrated that the implementation of Afirma GSC reduced the rate of surgical intervention by 45-68% . A meta-analysis by Vuong et al. that included seven studies compared the performance of the Afirma GEC with the GSC and found that the GSC had a higher BCR (65.3% vs 43.8%; P <0.001), a lower resection rate (26.8% vs 50.1%; P <0.001), and a higher risk of malignancy (60.1% vs 37.6%; P <0.001) in resected specimens .
The Afirma GSC incorporates Hurthle/oncocytic and neoplasm classifiers to enhance the diagnostic accuracy in predominantly oncocytic ITN relative to the Afirma GEC . A review of four independent post-validation studies of the Afirma GSC performance in oncocytic cell lesions showed maintenance of a high SN (three with 100% SN and one with 80% SN) and improved SP (81-100% for GSC compared to 29-43% for GEC) . When compared to the GEC, the BCR for oncocytic cell–predominant nodules by the GSC is significantly elevated (73.7% vs 21.4%; P < 0.001) . TSv3 is a genomic classifier (GC) where a value is assigned to each detected genetic alteration based on the strength of association with malignancy: 0 (no association), 1 (low cancer risk), or 2 (high cancer risk). A GC score calculated for each sample is a sum of individual values of all detected alterations, with GC scores 0 and 1 accepted as test negative (score 1 is commercially reported as "currently negative") and scores 2 and above as test positive . "Currently negative", low cancer probability alterations, are included in the BCR in TSv3 studies. The clinical validation study for TSv3 by Steward et al. was a prospective, multi-center, blinded study that ultimately analyzed 257 ITN, all with histologic consensus. The test demonstrated a 94% [CI, 86%-98%] SN and 82% [CI, 75%-87%] SP. With a cancer/NIFTP prevalence of 28%, the NPV was 97% [CI, 93%-99%] and PPV was 66% [CI, 56%-75%] . There have been 10 independent studies assessing the performance of TSv3 . A recent meta-analysis by Lee et al. including six studies (total 530 thyroid nodules) evaluating the performance of TSv3 found a similar sensitivity of 95.1% [CI, 91.1–97.4%] but a lower specificity of 49.6% [CI, 29.3–70.1%] when compared to the original validation study; the reported PPV of 70% [CI, 55–83%] and NPV of 92% [CI, 86–97%] remained comparable . Molecular tests can be classified as "rule in" vs "rule out" based on their ability to confirm or exclude malignancy. Vargas-Salas et al. found that with a thyroid cancer prevalence of 20–40%, a robust "rule-out" test requires a minimum NPV of 94% and a minimum sensitivity of 90%, whereas to "rule in" malignancy, a test requires a PPV of at least 60% and a specificity above 80% . MPTX, Afirma GSC, and TSv3 all perform well as "rule out" tests for ITN based on their relatively high sensitivities and NPVs, though independent confirmation of MPTX performance is lacking. MPTX has too few studies to compare its performance to other molecular testing platforms, and future studies are needed to confirm its clinical efficacy. A study by Silaghi et al. comparing the performance of Afirma GSC and TSv3 found TSv3 to have the best overall diagnostic performance with the lowest negative likelihood ratio (NLR 0.02), followed by Afirma GSC (NLR 0.11). Both TSv3 and Afirma GSC achieved optimal results to exclude malignancy; however, both failed to achieve a higher performance to confirm or "rule in" a malignancy when compared to their predecessor Thyroseq v2 . Similarly, Lee et al. found there was no statistically significant difference in diagnostic performances between the Afirma GSC and TSv3 . Finally, Livhits et al. performed a randomized clinical trial by using Afirma GSC or TSv3 in routine clinical practice on a rotating monthly basis. They found that both Afirma GSC and TSv3 have a relatively similar specificity (80% and 85%, respectively), and both allowed approximately 49% of patients with indeterminate nodules to avoid diagnostic surgery .
Given the similar performance, it is no longer accurate to call Afirma a "rule out test" and Thyroseq a "rule in test" as they have been commonly described with earlier iterations of the testing platforms and in a recent review . Molecular genetic testing is a valuable tool in understanding patients' prognosis based on specific mutations detected in thyroid cancer. Various mutations are associated with increased tumor aggressiveness, metastatic lymph node spread, a tendency to de-differentiate, and/or reduced efficiency of radioiodine treatment. The main known genetic causes of thyroid cancer include point mutations in the BRAF, RAS, TERT promoter , RET , and TP53 genes and the fusion genes RET/PTC , PAX8/PPARγ , and NTRK . Molecular genetic testing of thyroid tissue in the preoperative and/or postoperative period is becoming more common, and therefore detection of genetic changes may serve as a prognostic factor that can help determine the extent of surgical treatment and the use of systemic targeted therapy. The characterization of molecular variants and fusions as BRAF -like, RAS -like, and non- BRAF non- RAS -like has helped to group molecular alterations in thyroid cancer that share similar risk of events such as extra-thyroidal extension and lymph node metastases . For example, a retrospective analysis by Tang et al. associating pathologic features with these molecular classes showed a statistically higher rate of T4 tumor size and N1b nodal metastases in BRAF -like mutated tumors (22%) compared to the other classes (≤ 6%), amongst other more aggressive findings . Afirma GSC, MPTX, and TSv3 have shown promise in predicting disease recurrence in thyroid cancers and Bethesda V/VI nodules based on the detection of low-risk vs high-risk genetic mutations. In thyroid nodules with Afirma GSC suspicious results, or thyroid nodules with BV or BVI cytology, Afirma Xpression Atlas (XA) can provide more granular molecular information. The analytical and clinical validation of XA, which identifies thyroid nodule molecular variants and fusions by whole transcriptome sequencing, was published in 2019 . In 2020, the panel was expanded to detect molecular alterations in 593 genes, allowing XA to report on 905 variants and 235 fusions. Afirma XA results may offer important prognostic insights; for example, nodules with a non- RAS and non- BRAF molecular profile have lower rates of lymph node metastasis and extrathyroidal extension . A large retrospective study by Hu et al. demonstrated that 44% of Bethesda III/IV Afirma GSC-S and most Bethesda V/VI nodules (87% BVI) had at least one genomic variant or fusion identified, which could optimize individual treatment decisions . The ability of Afirma XA to demonstrate improved clinical outcomes based on surgery and mutational status is yet to be determined as no randomized trials have been performed; however, the genomic insights provided by XA may predict tumor aggressiveness and provide important information regarding variants for targeted therapy . Labourier et al. found that in a systematic review of the literature, 70%-75% of malignant/Bethesda VI cytology would be expected to be positive for the oncogenic BRAF V600E substitution, with the second most frequent gene alteration being TERT promoter mutations (11%) .
High frequency of oncogenic BRAF mutations has important clinical implications, and multiple studies have shown that BRAF V600E correlates with aggressive features of thyroid cancer such as extrathyroidal extension, vascular invasion, larger thyroid nodule size, advanced staging, lymph node metastasis and recurrence . Additionally, TERT promoter mutations are among the most recognized markers associated with aggressive thyroid cancer phenotypes . When specifically evaluating the performance of TSv3 in thyroid nodules with Bethesda V (suspicious for malignancy) cytology, Skaugen et al. found that TSv3 had a sensitivity of 89.6% (95% CI, 82.4%-94.1%) and a specificity of 77.3% (95% CI, 56.6%-89.9%). Moreover, when TSv3-positive Bethesda V nodules were sorted into molecular risk groups (low, intermediate, high), disease recurrence was more commonly found in the high-risk group, whereas no patients in the low-risk group developed recurrence . Another study by Hescot et al. used TSv3 to determine if there were molecular prognostic factors associated with recurrence and overall survival in patients with poorly differentiated thyroid carcinomas (PDTCs). Of the 40 patients tested with TSv3, high-risk molecular signatures ( TERT , TP53 mutations) were found in 24 cases (60%), an intermediate-risk signature in 9 cases (22.5%) and a low-risk signature in 7 cases (17.5%), with potentially actionable mutations that may be amenable to targeted therapy identified in 10% of cases. Furthermore, the high molecular-risk signature was associated with distant disease metastasis (P = 0.007) and with worse overall survival (P = 0.01), whereas none of the patients with a low-risk molecular signature died due to thyroid cancer . It is important to note that there are no established guidelines addressing management decisions based on the detection of most genetic alterations in thyroid nodules, regardless of cytology category. In ITN, the most studied value is in the diagnosis of benignity or malignancy. The value of knowing the molecular alterations in BV and BVI thyroid nodules has yet to be investigated in prospective multi-center studies. Additionally, molecular test performance metrics are generally assessed independently of other clinically relevant factors such as family history of thyroid cancer, heritable syndromes, radiation exposure, and thyroid ultrasound features. One area of increasing interest is the identification of aggressive thyroid cancers that may be amenable to future systemic targeted therapies as needed, possibly in the neo-adjuvant setting. While the use of molecular testing to risk-stratify indeterminate thyroid nodules is encouraging, arguably the most exciting use of this technology is in the setting of advanced and aggressive thyroid disease, where identification of targetable mutations can have significant clinical impact . In differentiated thyroid cancer, the overall mortality is low; however, 15% of cases will be locally invasive, and in those with distant metastases that are radioiodine (RAI)-refractory, the 10-year overall survival is <50% . Conversely, the most aggressive subtypes of thyroid cancer, medullary, poorly differentiated, and anaplastic, have high disease-specific mortality. Especially in these thyroid cancer subsets with high mortality rates, there has been substantial expansion of the therapeutic armamentarium with tumor genome-directed therapies over the past decade .
Studies have identified several targetable (or potentially targetable) alterations in advanced thyroid cancer, including mutations in commonly detected genes such as BRAF V600E, RET , and PIK3CA , as well as gene fusions including RET , NTRK , and ALK . In addition to therapies targeting specific genetic alterations, immunotherapy shows significant promise in treating tumors with microsatellite instability, high tumor mutational burden (TMB), and high PD-L1 expression. With the possibility of identifying genomic alterations via NGS in advanced thyroid cancers, the study of neoadjuvant therapy for aggressive disease has just begun. Recently, a multidisciplinary, multi-institutional, multi-national consensus statement was jointly published by the American Head and Neck Society (AHNS) and the International Thyroid Oncology Group (ITOG) defining advanced thyroid cancer and its targeted treatment . The group advocates for molecular testing to be "performed in Clinical Laboratory Improvement Amendments (CLIA)-accredited laboratories (or their international equivalent), on appropriate specimens, using clinically validated procedures, which may include laboratory-developed tests or FDA-approved commercial assays" . With the support of high-quality evidence, the consensus recommends that "when somatic mutational testing is performed for thyroid cancer, multiplexed NGS-based panels are superior to multiple single-gene tests" and that "NGS panels that include assays for gene fusions are preferred given the ability to detect multiple mutations and fusions in one assay thereby conserving tissue and limiting expense" . Differentiated Thyroid Cancer (DTC): Accounting for roughly 95% of thyroid cancers, DTC arises from follicular thyroid cells and is often RAI-avid. This allows the vast majority of DTC to be treated with surgery alone for smaller tumors or surgery with RAI and levothyroxine suppression therapy for more advanced or aggressive disease. However, it is reported that 7–23% of patients with DTC will develop distant metastases, and two-thirds of patients with distant metastases become RAI-refractory . These patients have a poor prognosis, with an overall 10-year survival of <50% . Multicenter, randomized, double-blind, placebo-controlled, phase III studies led to FDA approval of the multi-kinase inhibitors (MKIs) sorafenib and lenvatinib for the treatment of RAI-refractory locally advanced (non-operative) or metastatic DTC . MKIs block activation of several key receptors that regulate thyroid cancer progression, including angiogenesis. While studies showed a progression-free survival (PFS) benefit in the treatment groups compared with the placebo groups , the non-specific targeting of these drugs means their clinical utility is limited by substantial toxicity profiles. In the last decade, recognition of important molecular drivers and signaling pathways has led to the development of molecular-targeted therapies, especially for advanced and RAI-refractory differentiated thyroid cancer. Presence of a BRAF V600E mutation, the most common driver mutation in the spectrum of follicular cell derived thyroid cancers, can confer susceptibility to selective RAF kinase inhibitors in some cancer lineages. The combination of dabrafenib ( BRAF inhibitor) and trametinib ( MEK inhibitor), which was initially FDA-approved in BRAF V600E-mutated ATC, has also been studied in BRAF -mutated PTC with high response rates (50% single-agent dabrafenib vs.
(50% single-agent dabrafenib vs. 54% combination, modified RECIST criteria) and a median progression-free survival of 11.4 vs. 15.1 months. This combination of drugs recently garnered approval for treatment of BRAF-mutated DTC. The FDA-approved drugs selpercatinib and pralsetinib target oncogenic RET gene fusions, detected in approximately 10% of PTC. Thyroid cancers harboring genetic rearrangements involving NTRK1/3 (~2% of PTC) can respond to treatment with TRK inhibitors, including the FDA-approved larotrectinib and entrectinib. ALK fusions are rarer still in well-differentiated thyroid cancers (<1% of PTC) but are identified more frequently in PDTC. ALK inhibitors are FDA-approved for solid tumors that harbor ALK fusions, and a few patients with thyroid cancer have been included in the reported clinical trials and/or case reports, although no ALK inhibitors are currently FDA-approved for DTC specifically. Therefore, ALK fusion testing is currently indicated for advanced DTC only in the context of either “off-label” treatment or clinical trials. Lastly, while microsatellite instability (MSI) and TMB in DTC are often low, MSI-high or TMB-high cancers may be eligible for treatment with pembrolizumab, a programmed death-1 (PD-1) inhibitor, given its tissue-agnostic approval for MSI-high cancers and the demonstrated responses of TMB-high solid tumors.

Anaplastic Thyroid Cancer (ATC), with a median overall survival of 4 months, is considered one of the most aggressive and lethal malignancies and typically presents at a median age of 65-70 years. With a 6-month OS of 35% and disease-specific mortality approaching 100%, ATC is responsible for over half of annual thyroid cancer-related deaths despite comprising only 1.5% of all thyroid cancers. These outcomes are despite aggressive multimodality treatment regimens including surgery (when feasible), traditional cytotoxic chemotherapy and radiation therapy. ATC is postulated to arise either de novo or from pre-existing DTC. The coexistence of BRAF-mutated ATC with PTC, described in several studies, suggests a common DTC origin for most of these tumors. ATC has a higher relative TMB than DTC, although it is still lower than that of many other solid malignancies. The mutational profile of ATC tends to include accumulated variations in tumor suppressor genes such as TP53 and PTEN; oncogenes such as the TERT promoter, RAS, BRAF, and PIK3CA; oncogene fusions such as NTRK, RET, and ALK; or mismatch repair defects. Given the aggressive nature of ATC, most often with surgically unresectable disease at presentation, and its resistance to radioactive iodine, chemotherapies, and radiation therapy, all patients with suspected ATC are recommended to undergo expeditious histological confirmation, staging, and molecular testing; if a targetable mutation is identified, treatment should include directed therapies against this actionable target. The most significant shift in the management of ATC to occur in decades was the aforementioned combinatorial use of BRAF/MEK inhibitors (dabrafenib/trametinib) in ATC patients harboring a BRAF V600E mutation. Due to the potential for long turnaround times for traditional NGS testing, some centers employ a rapid PCR assay to detect BRAF V600E in DNA isolated from paraffin blocks (48–72-hour turnaround) or use peripheral blood NGS (cell-free DNA), which has a sensitivity of 75%–90% and a turnaround time of 3–7 days.
These options may enable slightly earlier initiation of targeted therapies where they exist. Mutation-specific immunohistochemistry for BRAF V600E can also be useful in expeditiously identifying patients who might benefit from approved targeted therapy, but it requires substantial tissue via core needle biopsy, FNA cell block, or even surgical specimen due to the potential for false positives. When successful, BRAF-directed therapy can induce rapid and substantial disease regression and may eventually render previously inoperable disease amenable to surgical resection. For those patients with advanced-stage ATC who are able to undergo complete locoregional surgical resection, one study has shown some of the highest survival rates ever reported for this disease, with a 94% 1-year survival and a median OS not yet reached in a cohort of 20 patients (8 of 20 having stage IVC disease) who received BRAF-directed therapy followed by surgery.

Medullary Thyroid Cancer (MTC) arises from parafollicular C cells, which are neuroendocrine in origin, and accounts for about 2% of thyroid cancers. Although rare, MTC accounts for about 14% of annual deaths from thyroid cancer. MTC most often occurs sporadically (80%), with hereditary forms (20%) being associated with the multiple endocrine neoplasia (MEN) type 2 syndromes. These inherited forms of MTC are associated with genomic alterations of the RET proto-oncogene and are inherited in an autosomal dominant fashion. Patients diagnosed with MTC, regardless of disease stage, personal history of other endocrinologic disorder, or family history, should have genetic counseling and be tested for germline RET mutations. About 6% of MTC patients with no family history or other endocrinologic disorder suggestive of MEN are found to harbor a germline RET mutation, prompting counseling and testing of family members. Somatic RET mutations are also found in approximately 50% of patients with sporadic MTC. Somatic mutations in HRAS (~25%), KRAS, and rarely NRAS genes, which are canonically mutually exclusive with RET mutations, have also been identified in sporadic MTC. About 20% of sporadic MTC harbor neither RET nor RAS gene alterations. Patients with advanced sporadic MTC should be offered molecular testing, since somatic RET mutations have been shown to lead to more aggressive disease, including higher T- and N-stage, and to increase the rate of distant metastasis. Currently, two MKIs, vandetanib and cabozantinib, are approved by the U.S. FDA for the systemic treatment of MTC and show improvement in progression-free survival; however, both MKIs have a narrow therapeutic window, and off-target kinase inhibition causes significant toxicities. Additionally, MTC can acquire gatekeeper resistance mutations at RET codon V804, rendering these therapies ineffective. Recently, however, selective RET inhibitors have shown both promising efficacy and more favorable toxicity profiles. Selpercatinib (LOXO-292) is a selective RET kinase inhibitor potently effective against RET alterations, including gene fusions, oncogenic mutations, and even the V804 gatekeeper mutation. Early data from LIBRETTO-001, the phase I/II study of selpercatinib, showed that 56% of patients with RET-mutant MTC previously treated with vandetanib and/or cabozantinib achieved objective responses, with mostly grade 1 or 2 adverse events, prompting early approval by the FDA. Currently, an ongoing randomized trial is evaluating treatment-naïve patients with RET-mutant MTC, comparing selpercatinib with standard MKI therapy.
Pralsetinib (BLU-667; IC50 0.3–5 nM), another selective RET inhibitor, has recently been approved by the FDA for the treatment of patients with advanced or metastatic RET-mutant MTC. This approval was based on early data from the phase I/II trial (ARROW) of pralsetinib showing a 65% objective response rate in patients with RET-mutant tumors, including patients with MKI-resistant tumors and with known gatekeeper mutations. In this study, pralsetinib was well tolerated, with most treatment-related adverse events being low grade and reversible.

In summary, the use of molecular testing in the identification of therapeutic targets can have significant clinical impact. We are undoubtedly only seeing the beginning of this new frontier. Knowledge of molecular mutations, fusions, and gene expression profiles, especially for the most advanced and aggressive forms of thyroid cancer, will likely continue to drive drug discovery and development worldwide. Molecular testing of thyroid nodules and thyroid cancer has improved the diagnostic accuracy of indeterminate thyroid nodules and provides actionable information regarding tumor prognosis. Additionally, identifiable molecular variants and fusions inform clinicians of a patient's eligibility for targeted systemic therapies in the important subset of thyroid cancer patients with metastatic, progressive, radioiodine-refractory disease. Future research should focus on the clinical utility of molecular information in changing the clinical approach to patients with thyroid nodules, for example through prospective studies of the extent of surgery and of changes in outcomes such as tumor recurrence. Additionally, novel analyses to predict tumor behavior are warranted. Finally, the investigation of targeted therapies in the neo-adjuvant setting for aggressively presenting thyroid cancer is ongoing and may improve overall outcomes, for example with improved opportunities for acceptable surgical outcomes in previously unresectable tumors.

JP, JK, and EC wrote equal portions of the first draft of this review. All authors contributed to the article and approved the submitted version.
Towards a patient-centred approach in therapeutic patient education. A qualitative study exploring health care professionals’ practices and related representations
e28e8440-94c3-47d2-b612-9b076f24dd36
9999270
Patient Education as Topic[mh]
In recent years, a large body of literature has highlighted the difficulties encountered in implementing Therapeutic Patient Education (TPE). TPE is defined as a process that “should enable patients to acquire and maintain abilities that allow them to optimally manage their lives with their disease. It is therefore a continuous process, integrated in health care. It's patient-centred (…). It is designed to help patients and their families understand the disease and the treatment, cooperate with health care providers, live healthily, and maintain or improve their quality of life”. One of the challenges for TPE implementation is addressing the “patient-centred” dimension. TPE was frequently reported as focusing on Health Care Professionals' (HCPs') needs. The Health Care Professional (HCP) involved the patient insufficiently in information exchange and decision-making, or communicated biomedical aspects only, without consideration for lay knowledge and psychosocial factors, whereas collaborative and patient-centred approaches should be encouraged. Difficulties in integrating such practices into care routines were also reported. HCPs' representations might constitute a promising avenue for a better understanding of the difficulties in achieving the patient-centred dimension. While the representation-practice link is established at a theoretical level, only two studies in TPE have explored this link and measured the quality of practices, as a recent literature review showed. However, as long as what the HCP does when he/she educates is not specified, it remains impossible to draw lessons for practice. This review also highlighted that TPE practices were not constant. The extent to which representations contribute to variations would be interesting to explore. This study uses Social Representation Theory, in particular Abric's approach, to investigate the links between educational practices among HCPs and their representations. Representations are “a form of socially elaborated and shared knowledge, with a practical aim and contributing to the construction of a reality common to a social group” or “schemas or sets of cognitions about the subjective experience. (…) Other terms for this construct include explanatory model, mental model, narrative, perceptions and beliefs”. According to Abric, in a given situation one of the four components of the representation of the situation (i.e. representation of themselves, of the other, of the context and of the task) becomes predominant and leads to a behavior/practice. This study aims to explore the practice-representation links in TPE in two steps: providing an overview of actual TPE practices (including variations), and examining possible representations in relation to these. A qualitative approach, via individual interviews with HCPs, was used to describe their TPE practices and possible related representations. The research programme was approved by the Ethics Committee of our research institute (project 24/2012).

Participants
The inclusion criteria were to: (1) practise as an HCP, (2) currently implement educational practices with patients, (3) understand and speak French and (4) agree to be interviewed individually. Given the possible variety of TPE practices, any practice the participant referred to as TPE was considered as such. HCPs from Belgium and France were included to ensure diversity of the sample.
Potential participants were identified on the basis either of a list from a TPE training organization or of their designation as “patient educators” by TPE specialists from their institution. Potential participants had given their prior consent for their details to be shared for research purposes. No disagreements were mentioned. The research team phoned them and invited them to take part in the research. They were then free to accept or decline. All participants provided verbal informed consent. Interviews took place in an office at their workplace, at their request. In line with the data saturation principle, interviews were conducted until further interviews no longer added to the conceptual framework.

Data collection
Semi-structured interviews were conducted by the first author, on the basis of an interview guide. Interview themes were: an actual TPE practice, the representations regarding TPE, the representations at work in the reported practice, practice variations, elements linked with these variations, and socio-administrative items. Within the interviews, “explicitation interview” techniques were used to collect the practices. An “explicitation interview” is a technique designed to elicit verbalization of a past activity. It helped the HCP to move back into an educational session (i.e. a sequence of actions with patient(s)), and to focus on what he/she did. Regarding representations, the four components of the representation of the situation from Abric (i.e. representations of themselves as HCPs, of the patient, of the context and of the task), shown to be relevant in TPE, structured the interview guide. Subcategories of the educational task representation were based on the model of Deccache, which explains compliance with chronic disease treatments (health and health behavior objectives; the HCP-patient relationship; education goals). Links were built on the basis of the participants' rationales for their practices. A validation was carried out with the participant during the interview. Interviews were audio-taped and fully transcribed.

Data analysis
Data analysis took place in two steps:
- Step 1. Elaborate a typology of educational practices. The interactions of the HCP with the patient (what he/she did and what the patient did) were retraced for various contrasting practices. A common criterion for ranking practices regarding patient-centredness was subsequently identified, namely the power distribution between the HCP and the patient. Since this criterion is shared with the models of Szasz and Hollender, later supplemented by Botelho, the proposed classification drew inspiration from these models. Szasz and Hollender provided three models of the HCP-patient relationship: (1) Activity-Passivity; (2) Guidance-Cooperation; (3) Mutual Participation. Botelho added a fourth model: (4) Autonomism, “in which the patient is exerting greater control over and responsibility for health care” than the HCP (p. 212). A (sub)type was assigned to each interview on the basis of the (sub)type definitions.
- Step 2. Identify the predominant/decisive representation in the reported practice.
In case of doubt, attribution was carried out by two reviewers independently (SR, BG). Disagreements were resolved through discussion. Regular meetings between the authors were held to ensure the quality of the coding categories and to validate each step.
Of the forty-one HCPs contacted, thirty met the inclusion criteria. Three refused to participate in the research, four no longer had contact with patients and four others wanted to be interviewed in pairs. Twenty-six of the remaining thirty referred to an actual practice and were selected for in-depth analysis; four could not report any. They had either no TPE training or a 40-h training course. Of the remaining twenty-six, nineteen worked in Belgium. Twenty were women. Thirteen were nurses, seven medical doctors, three physiotherapists, two dietitians, and one a pharmacist. Experience in TPE was 12 years [3;26] on average. Training in TPE was diverse, ranging from no dedicated training (n = 12) to a 40-to-70-h training course (n = 3), diabetes-educator training (n = 3), a 300-h university diploma (n = 5) or a master's degree (n = 3). TPE was mainly one-to-one, even though nine HCPs performed both one-to-one and group sessions. While the most frequently reported practice concerned diabetes (n = 10), practices were varied (locomotor problems, multiple sclerosis, obesity, oncology, osteoporosis, pediatrics, post-stroke rehabilitation and respiratory problems). Interviews lasted 75 min on average. The presentation of the results is as follows: TPE practices and possible related representation(s), followed by variations in practices and possible related representation(s).

TPE practices and their related representation(s)
Beyond the four practice types of the models of Szasz and Hollender and Botelho, adopted during the analysis, nine subtypes were highlighted. Each practice subtype came with its decisive representation (see table); a further table provides an overview per practice subtype of the participants' socio-demographic characteristics.

Type 1: Activity-Passivity and Decisive Representation(s)
The HCP delivered theoretical knowledge, regardless of what the patient knew or wanted to know. Transmission was unidirectional. Three subtypes were noted: information only (1.1), instruction (1.2) and start of personalization (1.3). Practitioners' reflexivity was low. Many relaunch questions were necessary to prompt them to report an actual practice, a characteristic shared with the HCPs who could not report any. Regarding representations, task representations were often linked to a lack of knowledge of what TPE is. Most of these HCPs had no training courses dedicated to TPE.

Subtype 1.1 information only and their representation(s)
The HCP displayed knowledge or listed behaviors to be adopted, but did not systematically include the rationale. Information was limited to physical health. She was relatively happy to learn that she could have epistaxis, that she could bleed much longer. (Alba, home nurse) Regarding representations, two patterns were observed, depending on the HCP's experience of a patient-centred TPE and training profile. Without this experience, TPE representations prevailed. “TPE” was understood as either transmitting biomedical knowledge or facilitating the patient's health behaviors by carrying out administrative procedures on his/her behalf. Patient education is: I explain to you and I give you the shot. (Alba) With a training in TPE and this experience, representations of the context and of patients as not supportive of TPE prevailed.
The context gradually discouraged the HCP from implementing TPE.

Subtype 1.2 instruction and their representation(s)
The HCP transmitted knowledge to help the patient learn an action or understand a mechanism. When teaching a technical action, the HCP demonstrated it, had the patient do it, and corrected it, until it was done properly. Starting from this subtype, knowledge delivery used various sensory channels. Metaphors or imaginary clinical cases were also reported. I showed them how to put on the immobilization scarf and how to transfer the patient (from a chair to a physiotherapist's table). They did it with me. I corrected them two or three times. Then, they understood, they were able to do it again the next day. (Clement, physiotherapist, post-stroke rehabilitation) Regarding representations, representations of health behavior objectives and of the mechanism for achieving them (task representations) characterized these HCPs. So that the patient self-administers the treatment in day-to-day life, they transmitted knowledge or know-how “ready to be implemented”, as “knowledge leads to behavior”. I've won when I don't see them for years because they've understood how to cure themselves (…). It's ignorance in quotes that makes [it impossible]. (Lucas, physiotherapist)

Subtype 1.3 start of personalization and their representation(s)
The HCP used aspects of the patient's life context and lifestyle to communicate knowledge. Contextual information was, however, rarely collected for educational purposes. Regarding representations, life context was factored in thanks to long-term or home-based follow-up, which allowed it to be observed. Some HCPs reported being unsure of practising TPE or confused TPE with approaches such as pain management through medication.

Type 2. Guidance-Cooperation and Decisive Representation(s)
Prior to education, the HCP considered elements (knowledge, technical actions, lifestyle habits) provided by the patient, at the HCP's request. Lifestyle habits and knowledge were addressed differently. Habits were used to personalize education. Knowledge was checked for accuracy and corrected, if necessary. Learning methods were collaborative and even game-oriented. Two subtypes could be distinguished, depending on whether or not patients' representations were explored. Regarding representations, emotional or motivational aspects (task representation) might generate discomfort among these HCPs.

Subtype 2.1 transmission disregarding representations and their representation(s)
Rationales for health behavior were systematically set out. Regarding representations, two patterns were observed, depending on the HCPs' awareness that knowledge might not be sufficient to change behaviors (task representation). Without this awareness, the HCP communicated knowledge in the way he/she likes to learn (e.g. memos, handling, color codes), as “knowing leads to acting accordingly”. When patients did not implement what they learned, HCPs were unable to work out the reason why. “When you eat French fries, does it raise your glycaemia, your sugar level?” “Yes, Ma'am”. They always say: “yes”. “Why?” “Because they're greasy!” (…). 8 times out of 10 they're wrong. Even though I've just explained, and they've understood that there was sugar in potatoes. (Chloe, dietitian, diabetology) With this awareness, self-representation as unskilled in psychosocial factors (e.g. self-efficacy, perceived social support) was decisive. This might be reinforced by unsuccessful experience of training intended to remedy this.
Professor Z talked to us about metaphors. He really likes metaphors. But I don't know how to make them (…). He gave us an example. Coversyl®, the blood pressure medication, “is the rose of a watering can”. (…). I'm still trying to figure it out. (Helen, nurse, diabetology)

Subtype 2.2 transmission “with awareness of” representations and their representation(s)
After their collection, representations were confronted with scientific facts in order to be “corrected”. We have to try to reframe people's beliefs. (…) About milk, there are a lot of beliefs, more or less false. (Firmin, rheumatologist, osteoporosis) Negotiation was limited to dimensions (e.g. lifestyle habits, alternative treatment) that were not deleterious to the proposed treatment. Regarding representations, emotions and motivation were found to be relevant in TPE (task representations). However, the HCPs did not feel competent to address them (self-representation). A very ambiguous back and forth, about the disease (which is not my field) and “I'm screwed”. I delivered all the messages I had to (…). I left the room and told one of the assistants: “I feel bad, I'm afraid she'll do something stupid.” (Elsa, pharmacist, oncology) Context representations might increase or decrease this discomfort, depending on the representation of being part of an interdisciplinary team. If so, emotional/motivational aspects were seen as other HCPs' task (doctors, psychologists). If not, the HCP reported acting “like a wart” (Elsa), an interloper who is not part of the team.

Type 3. Mutual participation and Decisive Representation(s)
Education started from pre-existing knowledge, skills and representations. Active methods (e.g. reformulation, echoing, questions left open, silences) were used to support the patient's expression and mental elaboration. Motivation and psychosocial factors were part of education. Some HCPs developed strategies to implement Type 3 despite contexts that gave little room for personalization (e.g. standardized procedures, penitentiary settings). Strategies were: initiating personalized sessions after a standardized diagnosis or group sessions, meta-communicating on the context, … Three subtypes were noted. They constituted the shift from HCP-centred approaches (3.1) to patient-centred approaches (3.3). The patient's objectives gradually took precedence even if the HCP had their own. Regarding representations, self-representation was key in 3.2 and 3.3, but peripheral in 3.1.

Subtype 3.1 “coaching” based on pre-existing motivation and their representation(s)
Methods were “predefined” by the HCP. Pre-existing motivation could be assessed to estimate whether the conditions (e.g. internal motivation, Prochaska's “contemplation” stage) were in place to optimize the TPE success rate. Negotiation applied as long as the patient's life was not at risk. Metaphors tailored to the patient could be used. The forklift driver, I'm going to tell him: “Listen, your lungs are like the oil filter, you put in good quality oil, but you've never changed the oil filter, after a while it doesn't work anymore”. (Uranie, nurse, tobaccology) Regarding representations, the task representation that “a technique addresses a health problem” was decisive (e.g. emotional awareness for eating disorders, Prochaska model for smoking cessation). The Prochaska wheel is wonderful, isn't it? Based on his/her motivation, we act differently. (Uranie)
Subtype 3.2 life-project-based learning and their representation(s)
The HCP assisted the patient to gradually gain autonomy, including in making health choices. The HCP helped the patient's “life project/objectives” to emerge through interrogative methods. Education was structured based on the life project. Motivation was stimulated in this way. Knowledge and know-how transmission were not a priority unless the patient asked for it. Representations could be deconstructed by encouraging the patient to adopt dissonant behavior. “What does oxygen mean to you?” “A thread.” “A thread. But what will it do for you?” The guy says: “Dig my grave” (…) “You're at stage 4, that's true, but you might still want to do something… this month, this week, or this year, whenever you like…. What would it be?” “I'd really like to play the drums again!” “That's a great goal, let's go for it! What would be needed so you can play the drums again?” “I've got no strength left” “OK”. At that point, you really have to dig down, to rephrase: “How could you get more strength?” “I'd have to lift weights.” “Not weights, they're a bit heavy at the moment. (..) First, it would be walking.” “Yes, but I can't walk anymore!” “Why can't you walk anymore?” “Because I walk 2 metres and I'm exhausted!” (…) “Do you think it's a need for air or for oxygen?” “I don't know the difference!” “Do you want us to experience the difference?” “Yes!” So, I get my oxygen tank and say: “Here we go, we'll do two tests. One with oxygen, one without. We'll see when you're best”. (Daphne, nurse, COPD) Regarding representations, self-representation was central in subtypes 3.2 and 3.3. TPE values were perceived as matching those of the HCP. The freedom to determine life choices was a central value for subtype 3.2. TPE contributed to it through gradual empowerment (task representation). While values were perceived as defining the HCP, training equipped him/her. These HCPs completed the longest TPE training courses. I think it comes from one's value system: it would be very arrogant to claim I know what others need to do. (Robin, dietitian, addictology)

Subtype 3.3 “trust-based relationship” and their representation(s)
The prior health problem was the one the patient considered as such. His/her reading grid of a phenomenon was accepted as “true”. The HCP tried to get to the root of the problem by questioning. Health behavior objectives and possible learning were “co-decided” in light of what was possible for, and desired by, the patient. Those were her goals, the goals she felt she could achieve. I didn't come along and say: “You'd have to stop after 3-4 drinks”. (…) It was a discussion. (Lina, nurse, diabetology) Regarding representations, the specificity of subtype 3.3 is that trust was key to achieving a successful follow-up (task representation). Given the trusting relationship that has been established, if there was another major problem with diabetes (…) the fact that we know each other well would facilitate the follow-up. (Lina)

Type 4. Autonomism and Decisive Representation(s)
Education was put on hold. The HCP suggested health behaviors based on her/his feelings and representation of the patient's feelings. She/he discussed them with the patient and let her/him decide. The HCP stood alongside the patient. His blood test, it's true that's something I don't always bring up because I know it's going to tire him out, that he knows that (laughs) and I know he knows it. (Julie, general practitioner)
Regarding representations, task representation seemed to be decisive. Mobilizing the patient had failed; the HCP redefined the objectives as well-being and maintenance of the patient's connection to the health care system. I believe that by maintaining this dialogue and leaving the door open, maybe one day he'll be ready and he'll come back. (Julie)

Practice analysis highlighted a range of educational practices, differing in terms of HCP-patient power distribution, but also in other aspects, namely: communication mode, consideration for patients' representations, motivational approach, personalization, complexity of methods and learning contents, and practice reflexivity (see table). Specific representations were related to each (sub)type, which might shed light on this practice diversity. Regarding patient-centredness, mutual participation, the patient-centred approach in which the HCP still has a role to play, was not mainstreamed among our participants. Were these practice (sub)types immutable? Abric suggests practice variations should be considered. Given that a representation becomes predominant in a given situation, the decisive representation might vary and so might the related practice.

Variations in Practices and their Related Representations
Practices could vary within an HCP in three ways: within a subtype, between subtypes, and between technical care and education. Some decisive representations seemed to be related to these variations.

Variations within a subtype and decisive representations
All participants except Firmin, Hermione, Isaline, and Nathan reported such variations. They consisted of using different contents/methods while remaining in the same practice subtype. These variations were related to representations of the patient experiencing learning difficulties. Difficulties were perceived as being associated with patients' characteristics: eagerness to learn, intellectual level, ethnic origin, age, vehicular language mastery, illiteracy, social class, prior knowledge, psychiatric disorders and precariousness. Adaptations were made to enable learning: simplified vocabulary, information quantity adjustment, pictorial aids, content reordering, … Sometimes I only use pictures. I've a food plan in pictures, because I've illiterate persons. (Chloe) The availability of the HCP (in terms of physical condition and time available) was also associated with information quantity adjustment. If I've slept well, if I'm on top form, I'll explain everything (Laughs). (Alba)

Variations between Subtypes and Decisive representation(s)
Some HCPs (Belle, Firmin, Hermione, Julie, Lina, and Xaviere) reported variations between practice subtypes. These were related either to representations of the context or to representations of patients. Shifts from 1.1 to 2.2 aimed to compensate for the usual length of consultations; HCPs engaged in group education to go into further explanation. Here, we have to do things quickly. We tell them if their treatment is appropriate or not, it's a whole process: X-rays, blood tests… Once we've done all that, the time is almost up (…). There, we spend a whole afternoon with them so that they can ask any question they like. (Firmin) Shifts from 1.2 to 2.2 were linked to the availability of another HCP who provided this kind of education.
Once we realized it was a total misunderstanding, I suggested the nurse go to his home every day to observe what was going on, how he pricked his fingertips, how he understood his illness and how he administered his treatment. (Hermione, general practitioner) Shifts from 3.3 to 2.2 were linked to the prescriber's request to meet the objectives quickly, or to an influx of patients, preventing the HCP from taking the necessary time. When there are three patients in the waiting room, it's sometimes harder to take the time. (Lina) Shifts from 3.1 to 2.1 consisted of temporarily adopting a “directive style” to meet the patient's wishes. “I have to take my Lantus® when I go to bed, around midnight.” So, we need to make sure learning takes place because schedules are incompatible with normal nursing care (…) The question was: “What's essential right now?” (Xaviere, nurse, diabetology) Shifts from 4 to 3.1 were linked to the patient's possible remobilization (cf. Type 4 and their representations above).

Technical care versus education and decisive representation(s)
Some HCPs of (sub)types 1.2, 2.1, 3.3 and 4 (Garance, Hermione, Fanny, Helen, Lina, and Julie) reported practising education on some occasions and purely technical care on others. Various representation types could be at work:
- Representations of the patient as meeting the educational criteria (physical or intellectual abilities, motivation, vehicular language fluency): Sometimes we provide only care because people don't ask for education (…). When people start to know us better and ask questions, we start educating them. (Garance, nurse, diabetology)
- Representations of the context, such as the policy of the health care service regarding education or a medical prescription that specified “education”: On Thursdays I'm in charge of the diabetic foot clinic. We do very little education because it's basically wound care. It's therapeutic. Even though we should also provide education (…) On Fridays, I have a small office close to the doctors'. (…) If there is an issue with their diabetes, no matter what, the door is open. (Lina)
- HCPs' self-representations as being “on top form”, which is perceived as essential to educate: Care takes… I wouldn't say very little energy… but less energy than educating the patient. You need energy to repeat the same thing sometimes 3 to 4 times. Whereas I can visit patients with 40°C [104°F]. I tell them: “I'm not well today”. They leave me in peace. (Helen)
Subtype 1.1 information only and their representation(s) The HCP displayed knowledge or listed behaviors to be adopted, but did not systematically include the rationale. Information was limited to physical health. She was relatively happy to learn that she could have epistaxis, that she could bleed much longer. (Alba, home nurse) Regarding representations, two patterns were observed, depending on the HCP's experience of a patient-centred TPE and training profile. Without this experience, TPE representations prevailed. “TPE” was understood as either transmitting biomedical knowledge or facilitating the patient's health behaviors by carrying out administrative procedures on his/her behalf. Patient education is: I explain to you and I give you the shot. (Alba) With a training in TPE and this experience, representations of the context and of patients, as not supportive of TPE prevailed. The context gradually discouraged the HCP from implementing TPE. Subtype 1.2 instruction and their representation(s) The HCP transmitted knowledge to help the patient learn an action or understand a mechanism. When teaching a technical action, the HCP demonstrated it, had the patient do it, and corrected it, until it was done properly. Starting from this subtype, knowledge delivery used various sensory channels. Metaphor or imaginary clinical cases were also reported. I showed them how to put on the immobilization scarf and how to transfer the patient (from a chair to a physiotherapist's table). They did it with me. I corrected them two or three times. Then, they understood, they were able to do it again the next day. (Clement, physiotherapist, post-stroke rehabilitation) Regarding representations, representations of health behavior objectives and of the mechanism for achieving them (task representations) characterized these HCPs. So that the patient self-administers the treatment in day-to-day life, they transmitted knowledge or know-how “ready to be implemented”, as “knowledge leads to behavior”. I’ve won when I don't see them for years because they’ve understood how to cure themselves (…). It's ignorance in quotes that makes [it impossible]. (Lucas, physiotherapist) Subtype 1.3 start of personalization and their representation(s) The HCP used aspects of the patient's life context and lifestyle to communicate knowledge. Contextual information was, however, rarely collected for educational purposes. Regarding representations, life context was factored in, thanks to long-term or home-based follow-up, which allowed it to be observed. Some HCPs reported being unsure of practising TPE or confused TPE with approaches such as pain management through medication. Type 2. Guidance-Cooperation and Decisive Representation(s) Prior to education, the HCP considered elements (knowledge, technical actions, lifestyle habits) provided by the patient, at the HCP's request Lifestyle habits and knowledge were addressed differently. Habits were used to personalize education. Knowledge was checked for accuracy and corrected, if necessary. Learning methods were collaborative and even game-oriented. Two subtypes can be distinguished depending on whether or not they explored patients’ representations. Regarding representations, emotional or motivational aspects (task representation) might generate discomfort among these HCPs. Subtype 2.1 transmission disregarding representations and their representation(s) Rationales for health behavior were systematically set out. 
Regarding representations, two patterns were observed, depending on their awareness that knowledge might not be sufficient to change behaviors (task representation). Without this awareness, the HCP communicated knowledge in the way he/she likes to learn (e.g. memos, handling, color codes) as “knowing leads to acting accordingly”. When patients did not implement what they learned, HCPs were unable to work out the reason why. “When you eat French fries, does it raise your glycaemia, your sugar level?” “Yes, Ma’am”. They always say: “yes”. “Why?” “Because they’re greasy!” (…). 8 times out of 10 they’re wrong. Even though I’ve just explained, and they’ve understood that there was sugar in potatoes. (Chloe, dietitian, diabetology) With this awareness, self-representation as unskilled in psychosocial factors (e.g. self-efficacy, perceived social support) was decisive. This might be reinforced by unsuccessful experience of training intended to remedy this. Professor Z talked to us about metaphors. He really likes metaphors. But I don't know how to make them (…). He gave us an example. Coversyl®, the blood pressure medication, “is the rose of a watering can”. (…). I’m still trying to figure it out. (Helen, nurse, diabetology) Subtype 2.2 transmission “with awareness of” representations and their representation(s) After their collection, representations were confronted with scientific facts in order to be “corrected”. We have to try to reframe people's beliefs. (…) About milk, there are a lot of beliefs, more or less false. (Firmin, rheumatologist, osteoporosis) Negotiation was limited to dimensions (e.g. lifestyle habits, alternative treatment) that were not deleterious to the proposed treatment. Regarding representations, emotions and motivation were found to be relevant in TPE (task representations). However, the HCPs did not feel competent to address them (self-representation). A very ambiguous back and forth, about the disease (which is not my field) and “I’m screwed”. I delivered all the messages I had to (…). I left the room and told one of the assistants: “I feel bad, I’m afraid she’ll do something stupid.” (Elsa, pharmacist, oncology) Context representations might increase or decrease this discomfort, depending on the representation of being part of an interdisciplinary team. If so, emotional/motivational aspects were seen as other HCPs’ task (doctors, psychologists). If not, the HCP reported acting “like a wart” (Elsa), an interloper who is not part of the team. Type 3. Mutual participation and Decisive Representation(s) Education started from pre-existing knowledge, skills and representations. Active methods (e.g. reformulation, echoing, questions left opened, silences) were used to support the patient's expression and mental elaboration. Motivation and psychosocial factors were part of education. Some HCPs developed strategies to implement Type 3 despite contexts that gave little room for personalization (e.g. standardized procedure, penitentiary settings). Strategies were: initiating personalized sessions after a standardized diagnosis or group sessions, meta-communicating on the context, … Three subtypes were noted. They constituted the shift from HCP-centred approaches (3.1) to patient-centred approaches (3.3). The patient's objectives gradually took precedence even if the HCP had their own. Regarding representations, self-representation was key in 3.2 and 3.3, but peripheral in 3.1. 
Subtype 3.1 “coaching” based on pre-existing motivation and their representation(s) Methods were “predefined” by the HCP. Pre-existing motivation could be assessed to estimate whether the conditions (e.g. internal motivation, Prochaska's “contemplation” stage) were in place to optimize the TPE success rate. Negotiation applied as long as the vital prognosis was not engaged. Metaphors tailored to the patient can be used. The forklift driver, I’m going to tell him: “Listen, your lungs are like the oil filter, you put in good quality oil, but you’ve never changed the oil filter, after a while it doesn't work anymore”. (Uranie, nurse, tobaccology) Regarding representations, the task representation that “a technique addresses a health problem” was decisive (e.g. emotional awareness for eating disorders, Prochaska model for smoking cessation). The Prochaska wheel is wonderful, isn't it? Based on his/her motivation, we act differently. (Uranie) Subtype 3.2. Life-project-based learning and their representation(s) The HCP assisted the patient to gradually gain autonomy including in making health choices. The HCP helped the patient's “life project/objectives” to emerge through interrogative methods. Education was structured based on the life project. Motivation was stimulated in this way. Knowledge and know-how transmission were not a priority unless the patient asked for it. Representations could be deconstructed by encouraging the patient to adopt dissonant behavior. “What does oxygen mean to you?” “A thread.” “A thread. But what will it do for you?” The guy says: “Dig my grave” (…) “You’re at stage 4, that's true, but you might still want to do something… this month, this week, or this year, whenever you like…. What would it be?” “I’d really like to play the drums again!” “That's a great goal, let's go for it! What would be needed so you can play the drums again?” “I’ve got no strength left” “OK”. At that point, you really have to dig down, to rephrase: “How could you get more strength?” “I’d have to lift weights.” “Not weights, they’re a bit heavy at the moment. (..) First, it would be walking.” “Yes, but I can't walk anymore!” “Why can't you walk anymore?” “Because I walk 2 metres and I’m exhausted!” (…) “Do you think it's a need for air or for oxygen?” “I don't know the difference!” “Do you want us to experience the difference?” “Yes!” So, I get my oxygen tank and say: “Here we go, we’ll do two tests. One with oxygen, one without. We’ll see when you’re best”. (Daphne, nurse, COPD). Regarding representations, self-representation was central in subtypes 3.2 and 3.3. TPE values were perceived as matching those of the HCP. The freedom to determine life choices was a central value for subtype 3.2. TPE contributed to it through gradual empowerment (task representation). Whereas values were perceived as defining the HCP, the training equipped him/her. These HCPs completed the longest TPE training courses. I think it comes from one's value system: it would be very arrogant to claim I know what others need to do. (Robin, dietitian, addictology). Subtype 3.3. “Trust-based relationship” and their representation(s) The prior health problem was the one the patient considered as such. His/her reading grid of a phenomenon was accepted as “true”. The HCP tried to get to the root of the problem by questioning. Health behavior objectives and possible learning were “co-decided” in light of what was possible for, and desired by, the patient. Those were her goals, the goals she felt she could achieve. 
I didn't come along and say: “You’d have to stop after 3-4 drinks”. (…) It was a discussion. (Lina, nurse, diabetology) Regarding representation, the specificity of subtype 3.3 is that trust was key, to achieve a successful follow-up (task representation). Given the trusting relationship that has been established, if there was another major problem with diabetes (…) the fact that we know each other well would facilitate the follow-up. (Lina) Type 4. Autonomism and Decisive Representation(s) Education was put on hold. The HCP suggested health behaviors based on her/his feelings and representation of the patient's feelings. She/he discussed them with the patient and let her/him decide. The HCP stood alongside the patient. His blood test, it's true that's something I don't always bring up because I know it's going to tire him out, that he knows that (laughs) and I know he knows it. (Julie, general practitioner) Regarding representations, task representation seemed to be decisive. Mobilizing the patient failed; the HCP redefined the objectives as well-being and maintenance of the patient's connection to the health care system. I believe that by maintaining this dialogue and leaving the door open, maybe one day he’ll be ready and he’ll come back. (Julie) Practice analysis highlighted a range of educational practices, differing in terms of HCP-Patient power distribution, but also on other aspects, namely: communication mode, consideration for patients’ representations, motivational approach, personalization, complexity of methods and learning contents, and practice reflexivity (see ). Specific representations were related to each (sub)type which might shed light on this practice diversity. Regarding patient-centredness, mutual participation – the patient-centred approach in which the HCP still has a role to play – was not mainstreamed among our participants. Were these practice (sub)types immutable? Abric suggests practice variations should be considered. Given that a representation becomes predominant in a given situation, the decisive representation might vary and so might the related practice. The HCP delivered theoretical knowledge, regardless of what the patient knew or wanted to know. Transmission was unidirectional. Three subtypes were noted: information only (1.1), instruction (1.2) and start of personalization (1.3). Practitioners’ reflexivity was low. Many relaunch questions were necessary to prompt them to report an actual practice, a characteristic shared with the HCPs who could not report any. Regarding representations, task representations were often linked to a lack of knowledge of what TPE is. Most of these HCPs had no training courses dedicated to TPE. Subtype 1.1 information only and their representation(s) The HCP displayed knowledge or listed behaviors to be adopted, but did not systematically include the rationale. Information was limited to physical health. She was relatively happy to learn that she could have epistaxis, that she could bleed much longer. (Alba, home nurse) Regarding representations, two patterns were observed, depending on the HCP's experience of a patient-centred TPE and training profile. Without this experience, TPE representations prevailed. “TPE” was understood as either transmitting biomedical knowledge or facilitating the patient's health behaviors by carrying out administrative procedures on his/her behalf. Patient education is: I explain to you and I give you the shot. 
(Alba) With a training in TPE and this experience, representations of the context and of patients, as not supportive of TPE prevailed. The context gradually discouraged the HCP from implementing TPE. Subtype 1.2 instruction and their representation(s) The HCP transmitted knowledge to help the patient learn an action or understand a mechanism. When teaching a technical action, the HCP demonstrated it, had the patient do it, and corrected it, until it was done properly. Starting from this subtype, knowledge delivery used various sensory channels. Metaphor or imaginary clinical cases were also reported. I showed them how to put on the immobilization scarf and how to transfer the patient (from a chair to a physiotherapist's table). They did it with me. I corrected them two or three times. Then, they understood, they were able to do it again the next day. (Clement, physiotherapist, post-stroke rehabilitation) Regarding representations, representations of health behavior objectives and of the mechanism for achieving them (task representations) characterized these HCPs. So that the patient self-administers the treatment in day-to-day life, they transmitted knowledge or know-how “ready to be implemented”, as “knowledge leads to behavior”. I’ve won when I don't see them for years because they’ve understood how to cure themselves (…). It's ignorance in quotes that makes [it impossible]. (Lucas, physiotherapist) Subtype 1.3 start of personalization and their representation(s) The HCP used aspects of the patient's life context and lifestyle to communicate knowledge. Contextual information was, however, rarely collected for educational purposes. Regarding representations, life context was factored in, thanks to long-term or home-based follow-up, which allowed it to be observed. Some HCPs reported being unsure of practising TPE or confused TPE with approaches such as pain management through medication. The HCP displayed knowledge or listed behaviors to be adopted, but did not systematically include the rationale. Information was limited to physical health. She was relatively happy to learn that she could have epistaxis, that she could bleed much longer. (Alba, home nurse) Regarding representations, two patterns were observed, depending on the HCP's experience of a patient-centred TPE and training profile. Without this experience, TPE representations prevailed. “TPE” was understood as either transmitting biomedical knowledge or facilitating the patient's health behaviors by carrying out administrative procedures on his/her behalf. Patient education is: I explain to you and I give you the shot. (Alba) With a training in TPE and this experience, representations of the context and of patients, as not supportive of TPE prevailed. The context gradually discouraged the HCP from implementing TPE. The HCP transmitted knowledge to help the patient learn an action or understand a mechanism. When teaching a technical action, the HCP demonstrated it, had the patient do it, and corrected it, until it was done properly. Starting from this subtype, knowledge delivery used various sensory channels. Metaphor or imaginary clinical cases were also reported. I showed them how to put on the immobilization scarf and how to transfer the patient (from a chair to a physiotherapist's table). They did it with me. I corrected them two or three times. Then, they understood, they were able to do it again the next day. 
Prior to education, the HCP considered elements (knowledge, technical actions, lifestyle habits) provided by the patient, at the HCP's request. Lifestyle habits and knowledge were addressed differently. Habits were used to personalize education. Knowledge was checked for accuracy and corrected, if necessary. Learning methods were collaborative and even game-oriented. Two subtypes can be distinguished depending on whether or not they explored patients’ representations. Regarding representations, emotional or motivational aspects (task representation) might generate discomfort among these HCPs.

Subtype 2.1 transmission disregarding representations and their representation(s)
Rationales for health behavior were systematically set out. Regarding representations, two patterns were observed, depending on their awareness that knowledge might not be sufficient to change behaviors (task representation). Without this awareness, the HCP communicated knowledge in the way he/she likes to learn (e.g. memos, handling, color codes), as “knowing leads to acting accordingly”. When patients did not implement what they learned, HCPs were unable to work out the reason why. “When you eat French fries, does it raise your glycaemia, your sugar level?” “Yes, Ma’am”. They always say: “yes”. “Why?” “Because they’re greasy!” (…). 8 times out of 10 they’re wrong. Even though I’ve just explained, and they’ve understood that there was sugar in potatoes. (Chloe, dietitian, diabetology) With this awareness, self-representation as unskilled in psychosocial factors (e.g. self-efficacy, perceived social support) was decisive. This might be reinforced by unsuccessful experience of training intended to remedy this. Professor Z talked to us about metaphors. He really likes metaphors. But I don't know how to make them (…). He gave us an example. Coversyl®, the blood pressure medication, “is the rose of a watering can”. (…). I’m still trying to figure it out. (Helen, nurse, diabetology)

Subtype 2.2 transmission “with awareness of” representations and their representation(s)
After their collection, representations were confronted with scientific facts in order to be “corrected”. We have to try to reframe people's beliefs. (…) About milk, there are a lot of beliefs, more or less false. (Firmin, rheumatologist, osteoporosis) Negotiation was limited to dimensions (e.g. lifestyle habits, alternative treatment) that were not deleterious to the proposed treatment.
Regarding representations, emotions and motivation were found to be relevant in TPE (task representations). However, the HCPs did not feel competent to address them (self-representation). A very ambiguous back and forth, about the disease (which is not my field) and “I’m screwed”. I delivered all the messages I had to (…). I left the room and told one of the assistants: “I feel bad, I’m afraid she’ll do something stupid.” (Elsa, pharmacist, oncology) Context representations might increase or decrease this discomfort, depending on the representation of being part of an interdisciplinary team. If so, emotional/motivational aspects were seen as other HCPs’ task (doctors, psychologists). If not, the HCP reported acting “like a wart” (Elsa), an interloper who is not part of the team.

Education started from pre-existing knowledge, skills and representations. Active methods (e.g. reformulation, echoing, questions left open, silences) were used to support the patient's expression and mental elaboration. Motivation and psychosocial factors were part of education.
Some HCPs developed strategies to implement Type 3 despite contexts that gave little room for personalization (e.g. standardized procedures, penitentiary settings). Strategies included initiating personalized sessions after a standardized diagnosis or group sessions, meta-communicating on the context, … Three subtypes were noted. They constituted the shift from HCP-centred approaches (3.1) to patient-centred approaches (3.3). The patient's objectives gradually took precedence even if the HCP had their own. Regarding representations, self-representation was key in 3.2 and 3.3, but peripheral in 3.1.

Subtype 3.1 “coaching” based on pre-existing motivation and their representation(s)
Methods were “predefined” by the HCP. Pre-existing motivation could be assessed to estimate whether the conditions (e.g. internal motivation, Prochaska's “contemplation” stage) were in place to optimize the TPE success rate. Negotiation applied as long as the patient's life was not at risk. Metaphors tailored to the patient could be used. The forklift driver, I’m going to tell him: “Listen, your lungs are like the oil filter, you put in good quality oil, but you’ve never changed the oil filter, after a while it doesn't work anymore”. (Uranie, nurse, tobaccology) Regarding representations, the task representation that “a technique addresses a health problem” was decisive (e.g. emotional awareness for eating disorders, the Prochaska model for smoking cessation). The Prochaska wheel is wonderful, isn't it? Based on his/her motivation, we act differently. (Uranie)

Subtype 3.2. Life-project-based learning and their representation(s)
The HCP assisted the patient to gradually gain autonomy, including in making health choices. The HCP helped the patient's “life project/objectives” to emerge through interrogative methods. Education was structured based on the life project. Motivation was stimulated in this way. Knowledge and know-how transmission were not a priority unless the patient asked for it. Representations could be deconstructed by encouraging the patient to adopt dissonant behavior. “What does oxygen mean to you?” “A thread.” “A thread. But what will it do for you?” The guy says: “Dig my grave” (…) “You’re at stage 4, that's true, but you might still want to do something… this month, this week, or this year, whenever you like…. What would it be?” “I’d really like to play the drums again!” “That's a great goal, let's go for it! What would be needed so you can play the drums again?” “I’ve got no strength left” “OK”. At that point, you really have to dig down, to rephrase: “How could you get more strength?” “I’d have to lift weights.” “Not weights, they’re a bit heavy at the moment. (..) First, it would be walking.” “Yes, but I can't walk anymore!” “Why can't you walk anymore?” “Because I walk 2 metres and I’m exhausted!” (…) “Do you think it's a need for air or for oxygen?” “I don't know the difference!” “Do you want us to experience the difference?” “Yes!” So, I get my oxygen tank and say: “Here we go, we’ll do two tests. One with oxygen, one without. We’ll see when you’re best”. (Daphne, nurse, COPD) Regarding representations, self-representation was central in subtypes 3.2 and 3.3. TPE values were perceived as matching those of the HCP. The freedom to determine life choices was a central value for subtype 3.2. TPE contributed to it through gradual empowerment (task representation). While values were perceived as defining the HCP, the training equipped him/her. These HCPs completed the longest TPE training courses.
I think it comes from one's value system: it would be very arrogant to claim I know what others need to do. (Robin, dietitian, addictology)

Subtype 3.3. “Trust-based relationship” and their representation(s)
The prior health problem was the one the patient considered as such. His/her reading grid of a phenomenon was accepted as “true”. The HCP tried to get to the root of the problem by questioning. Health behavior objectives and possible learning were “co-decided” in light of what was possible for, and desired by, the patient. Those were her goals, the goals she felt she could achieve. I didn't come along and say: “You’d have to stop after 3-4 drinks”. (…) It was a discussion. (Lina, nurse, diabetology) Regarding representation, the specificity of subtype 3.3 is that trust was key to achieving a successful follow-up (task representation). Given the trusting relationship that has been established, if there was another major problem with diabetes (…) the fact that we know each other well would facilitate the follow-up. (Lina)
Practices could vary within an HCP in three ways: within a subtype, between subtypes, and between technical care and education. Some decisive representations seemed to be related to these variations.

Variations within a subtype and decisive representations
All participants except Firmin, Hermione, Isaline, and Nathan reported such variations. They consisted in using different contents/methods while remaining in the same practice subtype. These variations were related to representations of the patient experiencing learning difficulties.
Difficulties were perceived as being associated with patients’ characteristics: eagerness to learn, intellectual level, ethnic origin, age, vehicular language mastery, illiteracy, social class, prior knowledge, psychiatric disorders and precariousness. Adaptations were made to enable learning: simplified vocabulary, information quantity adjustment, pictorial aids, content reordering, … Sometimes I only use pictures. I’ve a food plan in pictures, because I’ve illiterate persons. (Chloe) The availability of the HCP (in terms of physical condition and time available) was also associated with information quantity adjustment. If I’ve slept well, if I’m on top form, I’ll explain everything (Laughs). (Alba)

Variations between Subtypes and Decisive representation(s)
Some HCPs (Belle, Firmin, Hermione, Julie, Lina, and Xaviere) reported variations between practice subtypes. These were related either to representations of the context or to representations of patients. Shifts from 1.1 to 2.2 aimed to compensate for the usual length of consultations; HCPs engaged in group education to go into further explanation. Here, we have to do things quickly. We tell them if their treatment is appropriate or not, it's a whole process: X-rays, blood tests… Once we’ve done all that, the time is almost up (…). There, we spend a whole afternoon with them so that they can ask any question they like. (Firmin) Shifts from 1.2 to 2.2 were linked to the availability of another HCP who provided this kind of education. Once we realized it was a total misunderstanding, I suggested the nurse go to his home every day to observe what was going on, how he pricked his fingertips, how he understood his illness and how he administered his treatment. (Hermione, general practitioner) Shifts from 3.3 to 2.2 were linked to the prescriber's request to meet the objectives quickly or to an influx of patients, preventing the HCP from taking the necessary time. When there are three patients in the waiting room, it's sometimes harder to take the time. (Lina) Shifts from 3.1 to 2.1 consisted of temporarily adopting a “directive style” to meet the patient's wishes. “I have to take my Lantus® when I go to bed, around midnight.” So, we need to make sure learning takes place because schedules are incompatible with normal nursing care (…) The question was: “What's essential right now?” (Xaviere, nurse, diabetology) Shifts from 4 to 3.1 were linked to the patient's possible remobilization (e.g. Type 4 and their representations).

Technical care versus education and decisive representation(s)
Some HCPs of (sub)types 1.2, 2.1, 3.3 and 4 (Garance, Hermione, Fanny, Helen, Lina, and Julie) reported practising education on some occasions and purely technical care on others. Various representation types could be at work: Representations of the patient as meeting the educational criteria (physical or intellectual abilities, motivation, vehicular language fluency); Sometimes we provide only care because people don't ask for education (…). When people start to know us better and ask questions, we start educating them. (Garance, nurse, diabetology) Representations of the context: the policy of the health care service regarding education or a medical prescription that specified “education”; On Thursdays I’m in charge of the diabetic foot clinic. We do very little education because it's basically wound care. It's therapeutic. Even though we should also provide education (…) On Fridays, I have a small office close to the doctors’.
(…) If there is an issue with their diabetes, no matter what, the door is open. (Lina) HCPs’ self-representations as being “on top form”, which is perceived as essential to educate. Care takes… I wouldn't say very little energy… but less energy than educating the patient. You need energy to repeat the same thing sometimes 3 to 4 times. Whereas I can visit patients with 40°C [104°F]. I tell them: “I’m not well today”. They leave me in peace. (Helen)
Main findings and existing literature
This research aimed to explore representation-practice interactions in TPE in order to understand the difficulties in shifting towards more patient-centred practices in a mutual participation perspective. Of the thirty HCPs interviewed, four were unable to report any actual educational practice. This trend is also mentioned in the literature by Vigil-Ripoche, who identifies the lack of reflexivity as a barrier to practice conceptualization (i.e. putting practices into words), which in turn impedes a shift towards more patient-centred practices. The typology of practices emerging from this study reflects the “power distribution in the HCP-Patient relationship”. At one extreme (Type 1), the power is held by the HCP, who tells the patient what to do; at the other end (Type 4), the power is left to the patient. While the typology of TPE practices is not, as such, novel, its originality lies in offering an empirical exploration of the theoretical models of Szasz and Hollender and Botelho. Besides the types, the proposed typology also includes many subtypes. According to Szasz and Hollender, the HCP should adapt his/her relationship to the patient in the light of the patient's health condition. They pointed to “mutual participation” as the appropriate type of relationship for chronic conditions, to “help the patient to help himself/herself”. Most of the practices reported were HCP-centred, however, and limited to knowledge transfer (18 out of 26 are at best type-2 practices). These results are consistent with the literature on representation-practice links in TPE. The training profile of the HCPs in TPE seems to play a decisive role in this low rate of patient-centred practices. In Type 3-Mutual participation, six out of seven had a university diploma, whereas those who had no specific training were mainly in Type 1.
Representations related to mutual participation (Subtypes 3.2 and 3.3) did not contribute much to understanding difficulties in implementing TPE: they were self-representations which may have been reinforced by a feedback loop. Representations related to other (sub)types helped more: a fragmented representation of what TPE is (Type 1), the representation that knowledge leads to behavior change (Subtype 2.1) or the self-representation as incompetent in approaches that go beyond transmitting knowledge (Subtype 2.2). Subtype 3.1 was therefore enlightening: the first stage of mutual participation was linked to task representations setting out a procedure to be used to manage a specific health problem. Context representations had a particular status: they were at work in practice variations only. Practices were not stable over time. Three kinds of variations were uncovered: variations within a subtype, variations between subtypes, and the presence/absence of educational practice. The variation phenomenon has previously been highlighted in the TPE literature on representation-practice links among HCPs. Previous research, however, concerned a single kind of variation: either variations within a subtype or variations between subtypes. Variations between educator and caregiver in TPE have also been highlighted in the past. The spectrum of variations observed in this research was hence more comprehensive. Variations seemed to be related to specific representations. Comparison with previous research is, however, challenging for two reasons: (1) the diversity of variations emerged from the present research and could not be systematically explored with all the participants; (2) only one piece of research analyzed actual practices and their related representations, and therefore allows a comparison. Karlsen analyzed variations between two (sub)types (standardized information transmission vs. personalized education). Although the number of types differs, both Karlsen and this research point to the importance of context and patient representations in variations between (sub)types. But were variations detrimental to, or supportive of, patient-centred approaches? Variations consisted primarily in either an adjustment to the patient's level of understanding (variations within a subtype) or the implementation of an education approach aiming at more mutual participation (variations between subtypes). In this latter case, the practice itself could be closer to mutual participation, or temporarily more directive in order to achieve co-decided objectives. As a result, part of the variation tended towards more patient-centred practices. There were, nevertheless, two notable exceptions: when the context was perceived as not supportive of TPE and when TPE was perceived as requiring more effort and specific implementation conditions (in the framework of HCP-centred practices).

Strengths and limitations
The two main contributions of this exploratory research were: (1) to propose a typology of actual educational practices and related representations; and (2) to highlight three types of practice variation, which also appear to be related to specific representations. It also has some limitations. Practices were self-reported and therefore subject to social desirability. Being qualitative in nature, the findings cannot be generalized. In addition, the study's cross-sectional design precludes causal inference.
Implications for research and clinical practice
This research offers new perspectives both for research and for TPE implementation. Regarding research, examining representation-practice links did help to better understand what hinders the deployment of patient-centred practices. Explanatory interviews were also shown to be a valuable technique for exploring TPE practices, since they help to solve problems of lack of reflexivity. Further research, on a larger scale, is needed to address the following questions: What is the prevalence of each (sub)type? What is the status of Type 4 (a step forward in power distribution, or a fallback position)? Data collection methods also need to be diversified (pair/group interviews, observation, document analysis) to go beyond self-reported practices. Regarding TPE implementation, both representations related to practices and those related to variations offer perspectives. Firstly, when the HCPs’ practices were not in line with mutual participation, neither were their decisive representation(s). However, the representation(s) at work differed from one subtype to another. There is therefore no panacea to achieve/maintain patient-centred approaches. The practice (sub)type needs to be borne in mind to provide accurate support (e.g. for Types 1 and 2, training on knowledge and representations regarding TPE; for Type 2, training on patients’ representations and motivational levers; for Type 3, intervision). Secondly, the variations towards less participatory approaches emphasized the importance of context representations. Addressing HCPs’ competencies is not enough; organizational contexts promoting patient-centred education are essential.
Incidence and Progression of Diabetic Retinopathy in American Indian and Alaska Native Individuals Served by the Indian Health Service, 2015-2019
63da2960-be04-4ae0-a2d2-088af9e37dce
9999279
Ophthalmology[mh]
Although the rate of increase has declined recently, diabetes prevalence in American Indian and Alaska Native individuals remains higher than in other race and ethnic groups in the US, with 14.7% of American Indian and Alaska Native individuals having been diagnosed with diabetes. The Centers for Disease Control and Prevention predicts that one-half of American Indian and Alaska Native individuals born in 2000 will develop diabetes some time in their lives. Additionally, an analysis of 1990 to 1998 Indian Health Service (IHS) outpatient data found that diabetes is being diagnosed at younger ages in American Indian and Alaska Native individuals. Longer duration could mean greater likelihood of diabetes-associated complications. In this vein, between 2015 and 2050, the number of people who are blind is projected to double, and diabetic retinopathy (DR) will likely be a major contributor because DR remains a leading cause of preventable blindness in US adults. In this context of disease burden, it is important to understand the prevalence and incidence of diabetes complications to allocate surveillance programs and specialty services appropriately. Recent publications suggest that diabetic eye disease prevalence has declined in American Indian and Alaska Native patients of the US IHS. However, estimates of DR incidence in American Indian and Alaska Native patients are not current; instead, they are based on data from before 1992. This study estimates recent cumulative incidence, incidence rates, and progression of DR in American Indian and Alaska Native patients served by this IHS primary care–based teleophthalmology program.

Setting and Study Population
This was a retrospective cohort study using deidentified medical record data obtained during routine clinical operations of the IHS teleophthalmology program at 75 primary care clinics distributed among 20 states. The IHS serves enrolled members of federally recognized tribes. The study was reviewed and approved by the IHS institutional review board at Phoenix Indian Medical Center under the exempt process. Written informed consent from participants was not required or obtained. Details regarding the teleophthalmology program’s origins, protocols, distribution, and outcomes have been previously described. Briefly, the program evaluates patients from participating primary care clinics. It is a validated American Telemedicine Association Category 3 program and its graders identify the Early Treatment Diabetic Retinopathy Study (ETDRS)–defined clinical levels of DR and diabetic macular edema (DME) severity. Graders are certified and licensed optometrists who render a diagnosis using standardized protocols. The program currently recommends that patients receive annual DR examinations. Before selecting the analytic cohort for this study, we defined a baseline period of January 1, 2015, to December 31, 2015, and a follow-up period of January 1, 2016, to December 31, 2019. Eligible patients had at least 1 IHS teleophthalmology examination with the program in both periods. Additionally, eligible patients were 20 years or older and had no evidence of DR or had mild nonproliferative DR (NPDR; ETDRS levels 10, 14, 15, 20) in the baseline period. Patients with severe/very severe NPDR (ETDRS levels 53 a-e), proliferative DR (PDR; ETDRS levels 61, 65, 71, 75, 81, 85), and/or any level of DME are referred out of the teleophthalmology program to specialty eye care; therefore, these patients were excluded.
Referral recommendations of patients with moderate NPDR (ETDRS levels 35, 43, 47) are dependent on risk factors; therefore, these patients were also excluded. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines.

Measures
Outcomes in the Follow-up Period
The IHS teleophthalmology program uses 2 configurations of commercially available technology for image acquisition to assess DR severity level. The first configuration uses a low-illumination, nonmydriatic fundus photography (NMFP) digital imaging system (Topcon NW6S [Topcon Medical Systems]) with a custom digital camera back (Megavision Retinal Image Capture). Three nonsimultaneous stereo-pair 45° images from different retinal regions and 2 nonsimultaneous stereo-pair 30° digital images of the optic disc and the macula from the retina of each eye are obtained, for an approximate total retinal coverage of 90° to 135°. One external image of each eye is also obtained. NMFP showed substantial agreement with ETDRS controls for diagnosis of DR severity level (unweighted κ = 0.81; 95% CI, 0.73-0.89). The second configuration is nonmydriatic ultra-widefield imaging (UWFI) scanning laser ophthalmoscopy (SLO) (Daytona [Optos]). The UWFI protocol includes nonsimultaneous stereo-pair 200° images from each eye, centered on the macula. Previous research has shown that UWFI agrees perfectly with ETDRS photography in 84% of cases and agrees within 1 level of severity in 91% of cases (unweighted κ = 0.79). UWFI is the dominant configuration this program uses. The grading outcomes were no evidence of DR, mild NPDR, moderate NPDR, severe/very severe NPDR, PDR, or ungradable. Level of DR at any 1 imaging encounter was defined by the more severely affected eye. If 1 eye was ungradable, the diagnosis for the other eye was used. If a patient received more than 1 teleophthalmology examination during the follow-up period, their maximum diagnosis was used in this analysis. This study measured incidence and progression as follows: (1) any increase in level of DR; (2) occurrence of a 2 or more (2+) step increase; and (3) DR severity level. For patients with no evidence of DR at baseline, any increase in level meant mild NPDR or worse was found at follow-up, and a 2+ step increase meant moderate NPDR or worse was found. For patients with mild NPDR at baseline, any increase in level meant moderate NPDR or worse was found, and a 2+ step increase meant severe NPDR or worse was found. Severity levels at follow-up included patients who regressed from mild NPDR to no DR, but regression was not explored.

Background Variables
The IHS teleophthalmology program records patient demographics (age, sex [self-reported]) and known DR risk factors in templates used by the imagers and graders, taking data from the IHS electronic medical record patient summary. Risk factors recorded include glycosylated hemoglobin A1c (HbA1c) level, diabetes therapy, duration of diabetes (but not diabetes type), hypertension, cardiovascular disease, hypercholesterolemia, peripheral neuropathy, and nephropathy. The program also records the clinic where the imaging occurred and whether UWFI or NMFP was used. We created measures indicating whether UWFI (vs NMFP) was used at baseline only, follow-up only, both examinations, or never. IHS administrative areas (derived from clinic addresses) are shown to describe the geographical distribution of the cohort.
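To make the grading rules above concrete, here is a minimal sketch (our illustration, not the program's actual software; the severity encoding and function names are hypothetical) of deriving a per-encounter level from the two eyes, the maximum diagnosis over follow-up, and the step-increase outcomes:

```python
from typing import Optional

# Ordered severity scale used in this analysis (ungradable handled separately).
SEVERITY = ["no DR", "mild NPDR", "moderate NPDR", "severe/very severe NPDR", "PDR"]
RANK = {level: i for i, level in enumerate(SEVERITY)}

def encounter_level(right_eye: Optional[str], left_eye: Optional[str]) -> Optional[str]:
    """Level for one encounter: the more severely affected eye;
    if one eye is ungradable (None here), use the fellow eye's diagnosis."""
    graded = [eye for eye in (right_eye, left_eye) if eye is not None]
    if not graded:
        return None  # both eyes ungradable
    return max(graded, key=RANK.__getitem__)

def follow_up_level(encounter_levels: list) -> str:
    """Maximum diagnosis across all follow-up encounters (2016-2019)."""
    return max(encounter_levels, key=RANK.__getitem__)

def steps(baseline: str, follow_up: str) -> int:
    """Severity steps gained; >=1 counts as any increase, >=2 as a 2+ step increase."""
    return RANK[follow_up] - RANK[baseline]

# Example: no DR at baseline, worst follow-up grade moderate NPDR -> 2+ step increase.
print(steps("no DR", follow_up_level(["mild NPDR", "moderate NPDR"])))  # prints 2
```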
Statistical Analysis
The analysis calculated descriptive statistics of the selected cohort’s baseline characteristics and the imaging modalities patients were examined with over time. The analysis also calculated descriptive statistics for the patients who did not have a follow-up examination for comparison with the selected cohort. We compared the groups with t tests and χ2 tests. The analysis next calculated cumulative incidence, cumulative progression, incidence rate, and progression rate. The rates equaled the number of new or worse cases of DR identified during the 2016 to 2019 period divided by total person-years (PY) at risk. PY contributed by a patient were truncated at the date of their examination that identified new or worsening DR. If the patient did not develop new DR or have worsening or progression of their DR, the PY they contributed were truncated at the date of their last examination in the follow-up period (≤4 years). To estimate net associations between background characteristics and outcomes, the analyses conducted separate multivariable robust Poisson regressions. The dependent variables were as follows: (1) any new DR in patients with no DR at baseline, (2) occurrence of a 2+ step increase in DR for patients with no DR at baseline, and (3) any progression of DR in patients with mild NPDR at baseline. Variables representing imaging modality assessed whether UWFI increased detection of worsening disease net of other factors. Analyses obtained the robust SEs to calculate the CIs and 2-sided P values. P values < .05 were considered statistically significant. A model for a 2+ step increase in DR from baseline mild NPDR was not estimated due to the small number of patients in this category. Descriptive statistics were performed using SAS software, version 9.4 (SAS Institute). Incidence rates and robust Poisson regressions were calculated using R software, version 4.1.2 (R Core Team).
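The person-years accounting described above can be sketched as follows (a toy example with hypothetical column names; it assumes PY accrue from the baseline examination date, which the text does not state explicitly):

```python
import pandas as pd

# Toy cohort; 'end_date' is the exam that identified new/worsening DR, or the
# last follow-up exam (<= 4 years) if no new or worsening DR was found.
cohort = pd.DataFrame({
    "baseline_exam":   pd.to_datetime(["2015-03-01", "2015-06-15", "2015-09-20"]),
    "end_date":        pd.to_datetime(["2017-04-10", "2019-11-02", "2016-08-30"]),
    "new_or_worse_dr": [1, 0, 1],  # 1 = new or worsening DR identified
})

# Person-years contributed by each patient, truncated at end_date.
cohort["py"] = (cohort["end_date"] - cohort["baseline_exam"]).dt.days / 365.25

rate = 1000 * cohort["new_or_worse_dr"].sum() / cohort["py"].sum()
print(f"{rate:.1f} cases per 1000 PY")
```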
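The multivariable models were fit in R; purely as an illustration of the robust ("modified") Poisson approach for a binary outcome, here is a sketch in Python with statsmodels, using a sandwich (HC1) covariance so that exponentiated coefficients can be read as risk ratios with robust CIs. The data and variable names are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data only (coefficients here carry no clinical meaning).
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "hba1c": rng.normal(8.3, 2.2, n),         # HbA1c level (%)
    "duration_gt15y": rng.integers(0, 2, n),  # diabetes duration > 15 y
})
y = rng.binomial(1, 0.18, n)                  # any new DR (0/1)

# Poisson GLM with robust (sandwich) standard errors for a binary outcome.
fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit(cov_type="HC1")
rr = np.exp(fit.params).rename("RR")          # exponentiated coefficients = risk ratios
ci = np.exp(fit.conf_int())                   # robust 95% CIs on the RR scale
ci.columns = ["2.5%", "97.5%"]
print(pd.concat([rr, ci], axis=1))
```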
Statistical Analysis

The analysis calculated descriptive statistics of the selected cohort's baseline characteristics and the imaging modalities patients were examined with over time. The analysis also calculated descriptive statistics for the patients who did not have a follow-up examination for comparison with the selected cohort. We compared the groups with t tests and χ2 tests. The analysis next calculated cumulative incidence, cumulative progression, incidence rate, and progression rate. The rates equaled the number of new or worse cases of DR identified during the 2016 to 2019 period divided by total person-years (PY) at risk. PY contributed by a patient were truncated at the date of the examination that identified new or worsening DR. If the patient did not develop new DR or have worsening/progression of their DR, the PY they contributed were truncated at the date of their last examination in the follow-up period (≤4 years). To estimate net associations between background characteristics and outcomes, the analyses conducted separate multivariable robust Poisson regressions. The dependent variables were as follows: (1) any new DR in patients with no DR at baseline, (2) occurrence of a 2+ step increase in DR for patients with no DR at baseline, and (3) any progression of DR in patients with mild NPDR at baseline. Variables representing imaging modality assessed whether UWFI increased detection of worsening disease net of other factors. Analyses obtained robust SEs to calculate the CIs and 2-sided P values. P values < .05 were considered statistically significant. A model for a 2+ step increase in DR from baseline mild NPDR was not estimated because of the small number of patients in this category. Descriptive statistics were performed using SAS software, version 9.4 (SAS Institute). Incidence rates and robust Poisson regressions were calculated using R software, version 4.1.2 (R Core Team).
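The study computed incidence rates and robust Poisson regressions in SAS and R; the sketch below restates the person-year bookkeeping and the regression idea in Python with statsmodels, using hypothetical column names, purely as an illustration of the method.

```python
# Illustrative analogue of the rate and regression calculations; the
# column names ("event", "years_at_risk", predictors) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def rate_per_1000_py(df):
    """Events per 1000 person-years. 'years_at_risk' is assumed already
    truncated at the exam finding new/worse DR, else at the last exam."""
    return 1000 * df["event"].sum() / df["years_at_risk"].sum()

# Example of the arithmetic: 1280 new DR cases at 69.6 per 1000 PY
# implies roughly 1280 / 0.0696 = 18,400 person-years at risk.

def robust_poisson_rr(df, outcome, predictors):
    """Poisson GLM with robust (sandwich) SEs on a binary outcome;
    exponentiated coefficients estimate risk ratios (RRs)."""
    X = sm.add_constant(df[predictors].astype(float))
    fit = sm.GLM(df[outcome], X,
                 family=sm.families.Poisson()).fit(cov_type="HC1")
    ci = np.exp(fit.conf_int())
    return pd.DataFrame({"RR": np.exp(fit.params),
                         "CI_low": ci[0], "CI_high": ci[1],
                         "p": fit.pvalues})
```

The robust (heteroskedasticity-consistent) covariance is what lets a Poisson model on a binary outcome yield valid CIs for risk ratios, which is presumably why this design was chosen over logistic regression's odds ratios.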
Patient Characteristics and Imaging Modality

The number of patients evaluated by the program in the baseline year who were 20 years or older and had no evidence of DR or mild NPDR at that examination was 13 694. Of these patients, 8374 (61.2%; mean [SD] age, 53.2 [12.2] years; 4775 females [57.0%]; 3599 males [43.0%]) had at least 1 examination during the follow-up period. Mean (SD) time from the baseline examination to the first follow-up examination was 20.7 (9.5) months. A total of 4581 of 8374 patients (54.7%) who were followed up had 2 or more examinations during the follow-up period. In 2015, the mean (SD) HbA1c level of the analyzed cohort was 8.3% (2.2%) (to convert HbA1c to proportion of total Hb, multiply by 0.01). The mean (SD) duration of diabetes was 8.6 (7.4) years, and 4401 of 8374 patients (52.6%) were managing their diabetes with oral medications only. Hypercholesterolemia (2731 [32.6%]) and hypertension (4559 [54.4%]) were the most common risk factors. A total of 53.4% of patients (2436 of 4559) in the analytic cohort with diagnosed hypertension were taking blood pressure–lowering medications. The Phoenix, Navajo, and Oklahoma City IHS areas imaged the most patients, with 2594 (31.0%), 2402 (28.7%), and 1437 (17.2%) patients imaged, respectively. A total of 5325 of 8374 patients (63.6%) were imaged with UWFI both times, 1839 (22.0%) were imaged with UWFI at follow-up only, and 734 (8.8%) were imaged with NMFP for both examinations. The selected cohort characteristics were not significantly different from those of patients who did not have a follow-up examination, except that proportionally fewer were missing information about diabetes duration and diabetes therapy, their mean HbA1c level was slightly higher (mean [SD], 8.3% [2.2%] vs 8.2% [2.3%]), and proportionally more had hypercholesterolemia (2731 of 8374 [32.6%] vs 1414 of 5320 [26.6%]), hypertension (4559 [54.4%] vs 2417 [45.4%]), or nephropathy (392 [4.7%] vs 212 [4.0%]).

Incidence and Progression

Of patients with no evidence of DR at baseline, 1280 of 7097 (18.0%) had some level of DR at follow-up, for an incidence rate of 69.6 cases per 1000 PY. Of the new DR found, 839 of 1280 cases (65.5%) were mild NPDR. Cumulative incidence of PDR was 0.1% (10 of 7097), for an incidence rate of 0.5 cases per 1000 PY. Of patients with no evidence of DR at baseline, 441 of 7097 (6.2%) had a 2+ step increase in DR over time (24.0 cases per 1000 PY). A total of 347 of 1277 patients (27.2%) with mild NPDR at baseline developed a more severe DR level in the follow-up period, for an incidence rate of 111.7 cases per 1000 PY. A 2+ step increase in DR occurred for 2.3% of these patients (30 of 1277). Regarding DR severity level, cumulative incidences of severe/very severe NPDR and PDR were 0.2% (2 of 1277) and 2.2% (28 of 1277), respectively, for incidence rates of 0.6 and 9.0 cases per 1000 PY, respectively.

Patient Characteristics and DR Outcomes

Characteristics associated with any DR incidence as well as occurrence of a 2+ step increase were longer diabetes duration (>15 y; any DR: risk ratio [RR], 2.0; 95% CI, 1.7-2.4; P < .001; 2+ step: RR, 3.2; 95% CI, 2.3-4.4; P < .001), higher HbA1c level (any DR: RR, 1.1; 95% CI, 1.1-1.2; P < .001; 2+ step: RR, 1.3; 95% CI, 1.2-1.3; P < .001), and diabetes therapy, particularly insulin use alone (any DR: RR, 2.1; 95% CI, 1.5-2.9; P < .001; 2+ step: RR, 4.5; 95% CI, 1.8-11.2; P = .001) or with oral medications (any DR: RR, 2.2; 95% CI, 1.6-3.0; P < .001; 2+ step: RR, 4.5; 95% CI, 1.8-11.1; P = .001). For example, compared with patients receiving diet therapy alone, patients taking both oral medications and insulin had 4.5 times the rate of a 2+ step increase in DR. Notable characteristics associated with any progression from mild NPDR were longer duration of diabetes (>15 y: RR, 1.8; 95% CI, 1.2-2.5; P = .002), higher HbA1c level (RR, 1.1; 95% CI, 1.0-1.1; P < .001), and presence of peripheral neuropathy (RR, 1.5; 95% CI, 1.2-2.0; P = .001). For comparison with other studies, we conducted several post hoc analyses. These found that of American Indian and Alaska Native patients diagnosed with diabetes before age 30 years and taking insulin alone or with oral medications, 36.9% (90 of 244) developed any new DR within 4 years. Of American Indian and Alaska Native patients diagnosed at 30 years or older and taking insulin, 28.5% (378 of 1324) and 0.1% (1 of 1324) developed any new DR and PDR, respectively. Additionally, a separate regression model with only hypertension and blood pressure medication (yes/no) found that patients with hypertension were 14% more likely to develop new DR and no more or less likely to progress than patients without hypertension.

UWFI and DR Outcomes

UWFI for follow-up or both examinations was associated with any DR incidence (RR, 1.2; 95% CI, 1.0-1.5; P = .04) and a 2+ step increase (RR, 1.9; 95% CI, 1.2-3.0; P = .006) in patients with no DR at baseline.
For example, compared with patients imaged with NMFP at both examinations, patients imaged with UWFI at follow-up had 2.2 times (95% CI, 1.4-3.5; P = .001) the rate of a 2+ step increase in DR. UWFI was also associated with DR progression when used for the follow-up examination in patients with mild NPDR at baseline.
Discussion

To prevent or reduce the damaging effects of diabetes complications in American Indian and Alaska Native individuals, the IHS implemented programs such as the Special Diabetes Program for Indians (SDPI) and this American Telemedicine Association Category 3 teleophthalmology program. The SDPI has increased access to diabetes treatment services and reduced hyperglycemia, blood lipid levels, and kidney failure. HbA1c level decreased from 9.0% in 1996 to 8.0% in 2020, low-density lipoprotein cholesterol level decreased from 118 mg/dL in 1998 to 89 mg/dL in 2020 (to convert cholesterol to millimoles per liter, multiply by 0.0259), and kidney failure decreased 54% between 1996 and 2013. The teleophthalmology program itself conducted 264 437 examinations of 120 075 patients between January 1, 2000, and October 31, 2021. Substantial changes in diabetes medications have occurred in the past 25 years as well. Coinciding with this expansion of diabetes programs and medications, the prevalence of diabetic eye disease in American Indian and Alaska Native individuals served by the IHS teleophthalmology program appears to have declined.
This article updates estimates of incidence and progression in this population. Eighteen percent of patients (1280 of 7097) with no evidence of DR in 2015 developed some level of DR during 2016 to 2019, 6.2% (441 of 7097) had a 2+ step increase, and 0.1% (10 of 7097) developed PDR. The estimates reported here are lower than previous estimates in American Indian and Alaska Native patients. Estimates from this study are also lower than or similar to estimates of DR incidence in Hispanic American individuals. The target populations and methods used in other US-based studies make comparisons to this study problematic. For comparison with the benchmark Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR), we conducted an additional analysis of DR incidence by age of diabetes diagnosis (before or after 30 years) plus whether the American Indian and Alaska Native patients were taking insulin. The percentages of incident any DR and PDR that we found were lower than those reported in WESDR articles with a similar follow-up period of approximately 4 years. The estimates from this study are more comparable with studies conducted after 2000 and outside the US. For example, in Hong Kong, incident any DR was 15.2% and sight-threatening DR was 0.03% in Chinese people surveilled with digital fundus photography. Approximately 2.2% of patients (28 of 1277) in this study who had mild NPDR at baseline progressed to PDR in the examined time frame, compared with 0.1% of patients (10 of 7097) who had no DR at baseline, which is consistent with a recent systematic analysis of international studies. The influence of most of the risk factors examined on the development of DR was as expected, with duration of diabetes, hyperglycemia, and therapeutic regimen being significant contributors. Additionally, UWFI at the follow-up examination detected more incident DR and more progression of DR, perhaps reflecting the identification of predominantly peripheral lesions (PPL). The lack of a net effect for hypertension in this study was inconsistent with prior research. A more refined measure of hypertension (such as actual blood pressure measurements) may be needed to understand its association with DR changes in this cohort; however, that information is not currently in the program's database. In this national, primary care–based program, UWF fluorescein angiography (UWF-FA) is not available to identify PPL or the extent of retinal capillary nonperfusion. Newly published results from the DRCR Retina Network found that, over 4 years, greater initial retinal nonperfusion and PPL on UWF-FA were statistically associated with worsening DR as measured by 2+ step progressions or DR treatment. Based on those findings, more incident DR and DR progression might have been found if UWF-FA had been used in this program.

Strengths and Limitations

A strength of this study is the geographical distribution of the American Indian and Alaska Native cohort and the large sample size. Previous studies focused on specific areas and had smaller sample sizes. However, the data are exclusively from the IHS, which has a user population of 2.56 million, representing approximately 25% of the total estimated 9.7 million American Indian and Alaska Native individuals in the US. Generalizations from this report should be restricted to American Indian and Alaska Native individuals served by the IHS. There are study limitations to acknowledge.
This study focused on DR, omitting DME, another leading cause of vision loss in people with diabetes. Thus, this study likely underestimated the overall burden of diabetic eye disease incidence for American Indian and Alaska Native patients. However, we believe that the underestimate is modest. First, a recent report found that DME prevalence in American Indian and Alaska Native patients was 3.0%, using clinical data from UWFI. Aiello and colleagues found that UWFI has low sensitivity for the detection of DME compared with spectral-domain optical coherence tomography, which suggests that DME prevalence estimates derived from UWFI may contain false positives. Second, a recent meta-analysis of European data showed that the annual incidence of DME was 0.4%. Omission of DME in this study may therefore have underestimated the incidence of diabetic eye diseases overall by approximately 1.6% to 2.0% (roughly the 0.4% annual incidence accumulated over the 4-year follow-up period). The percentage might be lower still because some patients with incident DME may have also had incident DR and were already counted in the estimate. Another potential limitation of this study is that 61.2% of the total patients evaluated by the program in the baseline period had an examination during the follow-up period; ie, 38.8% were not reexamined in 2016 to 2019. This follow-up rate is lower than in several studies but similar to or higher than in others. Some retrospective studies do not report a denominator, precluding comparison of this study's follow-up rate with theirs. To understand the implications of 38.8% attrition for the results, we compared the baseline characteristics of patients who were not reexamined with those who were and found that the groups were similar, except that the followed cohort was slightly less healthy. The estimates reported here likely are reasonable even with the attrition rate.
Conclusions

The results of this cohort study suggest that recent DR incidence and progression among American Indian and Alaska Native individuals served by the IHS are substantially lower than they were 30 or more years ago and are now comparable with estimates from non–American Indian and Alaska Native populations examined in the last 20 years. Further, these low rates support the viability of safely extending the follow-up interval for retinopathy assessment in IHS patients who have no evidence of DR or mild NPDR. This may be possible if the IHS patients also have no DME, have minimal risk factors, will be examined with UWFI, and their follow-up adherence is not jeopardized. Currently, the IHS teleophthalmology program recommends annual DR examinations, consistent with the American Academy of Ophthalmology Preferred Practice Patterns and those of other professional organizations, but a biennial frequency, as recommended by the American Diabetes Association, is well documented and might be appropriate for the IHS as well. Such a practice change, however, requires examination of adherence to the current recommendations. If a practice change extending follow-up were implemented, further research would be needed to determine whether the change affected vision outcomes and adherence rates.
Diabetic retinopathy screenings in West Virginia: an assessment of teleophthalmology implementation
Among working-age adults, diabetic retinopathy (DR) is the most frequent cause of blindness. Progression to eye pathology can be rapid, with nearly 100% of type I diabetes patients and more than 60% of type II diabetes patients presenting with DR within the first two decades of diagnosis. It has been estimated nationally that 28.5% and 4.4% of diabetic patients in the U.S. have DR and vision-threatening retinopathy, respectively. While this is certainly a national concern, with about 34.1 million American adults diagnosed with diabetes, West Virginia (WV) has the highest prevalence of diabetes (16.2% as of 2018). The state also faces unique challenges given its predominantly rural setting (over 37% of its population is designated as rural, in comparison to 14% of the total U.S. population). The rural challenges of WV are compounded by the state's notably high rates of poverty, unemployment, and low education. In hopes of circumventing some of these challenges, clinicians have turned to novel approaches like telemedicine in order to provide WV's diabetic population with improved care. Teleophthalmology is one such approach and serves as the foundation for the investigations of this study. Primary care offices may be more accessible to patients than those of specialists, especially in rural locations. Trained nurses and staff at these locations use cameras to acquire fundus photographs that can be uploaded for review by off-site specialists. Although there are limitations to the single-field, nonmydriatic fundus photography implemented at these primary care sites, these tools have allowed for detection of eye pathology in a variety of settings, and the approach has been shown to be a sensitive screening tool for retinal pathology such as DR and diabetic macular edema (DME). Hence, teleophthalmology systems have been deployed within the West Virginia University (WVU) Hospitals system. Using the U.S.-Food-and-Drug-Administration-approved Intelligent Retinal Imaging Systems (IRIS), primary care offices throughout the state have incorporated teleophthalmology into their clinical practice. Using data acquired via teleophthalmology, ophthalmologists of the WVU Eye Institute have been able to provide guidance across the state based on their assessments of images acquired at these remote locations. The aim of this study is to assess the successes and shortcomings of WV's teleophthalmology implementation by analyzing data on image gradeability and on concordance between photographic screenings and subsequent comprehensive eye exams in clinic. While studies have shown that teleophthalmology is effective in assessing retinal pathology and guiding appropriate referral decisions, we use this opportunity to assess the use of this technology specifically within WVU Medicine and its affiliates. Different screening modalities have been explored in the literature (nonmydriatic versus mydriatic screening, varying fields of view, artificial intelligence systems, and smartphone-based retinal photography). Given that nonmydriatic, 45-degree photography was used in screening our population, we were interested in comparing our findings to those observed and reported through other telehealth programs. With 20.9% of WV's population being over 65 years of age, we were also interested in whether age would play a role in the gradeability of images obtained during screening.
While diabetic retinopathy can be vision-threatening, proper management of diabetes and ophthalmic interventions like pan-retinal photocoagulation (PRP) and intravitreal anti-vascular endothelial growth factor (anti-VEGF) agents have been shown to be effective and have become the current standard of care in managing diabetic retinopathy at various stages of its progression. The relationship between hemoglobin A1c (HbA1c) level and the presence of hyperreflective spots on spectral-domain optical coherence tomography (an indicator of diabetic retinopathy progression) has been shown to be linear, with any HbA1c over 5.4% demonstrating a high likelihood of presenting with hyperreflective spots. Therefore, we have used the opportunity of this retrospective chart review to investigate this correlation and to determine how it might be reflected in the process and outcomes of this screening modality. Expansion and improved accuracy of screening modalities hold substantial promise as the burden of diabetes continues to increase across the country. However, the success of these screening programs in facilitating appropriate care for patients under suspicion for vision-threatening diabetic retinopathy relies heavily on patient compliance with their providers' recommendations. Numerous factors can affect patient compliance with care plans for diabetic retinopathy, including age, education, duration of diabetes, practical understanding of their condition, and understanding/communication of the purpose behind teleretinal screenings. Given the rural setting of WV, we also sought to explore how geographic boundaries might impact patient follow-up, which is essential to the ultimate success of these screening programs.

Methods

This retrospective medical chart review consisted of collecting data on diabetic patients 18 years and older who participated in the teleophthalmology program offered throughout the state of WV between January 2017 and June 2019. The WVU institutional review board approved the study protocol. The Volk Pictor (Volk Optical, Inc., Mentor, OH, USA) nonmydriatic cameras used by trained nurses and staff acquired 45-degree fundus images from patients at various primary care and endocrinology clinic settings. In these settings, patients waited in rooms with the lights turned off to maximize pupillary dilation without the administration of mydriatic drops. Staff would use the handheld fundus cameras to take photographs that were then uploaded and subsequently reviewed by retina specialists. Both eyes were photographed when possible, with the hope of acquiring at least one viable image per eye. The number of attempts made was contingent on the judgment of the trained staff acquiring the images and the tolerance demonstrated by the patients being screened for repeated attempts. Images were graded by a retina specialist at the WVU Eye Institute. These specialists included three WVU board-certified retina faculty and one vitreoretinal fellow—all patients were assigned to have their set of acquired images evaluated by one of these four specialists. Images were noted as gradable or ungradable, and the extent of DR (absent, mild, moderate, severe, or proliferative) and/or DME (absent, mild, moderate, or severe) was described in accordance with the International Classification of DR scale. Care plan recommendations and suspicion of other pathologies were also noted.
The results, with their accompanying care plan recommendations, were uploaded to the Epic electronic medical record (EMR) for the use of primary care physicians (PCPs) in their advising of diabetic patients in accordance with the American Academy of Ophthalmology's guidelines for DR follow-up (Fig.). Referral recommendations were made in accordance with those proposed by the International Council of Ophthalmology (ICO) and American Diabetes Association (ADA)—albeit with the decision to recommend referral for suspected DR of any severity. Recommendations could also be made on the basis of other ocular pathologies remarked upon by reviewing ophthalmologists (e.g., age-related macular degeneration, choroidal nevi, colobomas, hypertensive retinopathy, glaucomatous optic nerves). For the purpose of this study, we exclusively followed patients whose screening findings indicated suspicion for diabetic retinopathy of any severity in at least one eye.

Data collection

Lists of photography instances were generated, and these lists were used to investigate all photography orders recorded in the Epic EMR utilized by WVU Hospitals between January 2017 and June 2019. Photography orders that were unfulfilled (due to premature order placement by clinicians, for instance) were excluded from the study. Patient information was de-identified, and spreadsheets in Microsoft Excel were created to collect and organize the data. Each valid photography order was investigated in the following fashion. First, the IRIS results adjoined to the patient's chart for the photography order in question were accessed. The gradeability and presence of pathology were recorded (specifically noting DR as mild, moderate, severe, or proliferative and DME as mild, moderate, or severe). If the screening results indicated suspicion for pathology, further investigation was conducted. Date of birth, the time that had passed since the diabetes diagnosis, the diabetes classification (type 1 or type 2), and HbA1c within 3 months of the photography date were all collected. Patient receipt of their results (either through record of PCP communications or indications that patients had read their results via the patient-accessible WVU MyChart system) was recorded, and whether or not an appointment was subsequently set and maintained (within 12 months of the photography order date or prior to a future repeat screening with their PCP) was also noted. Using patients' home addresses, distances from the WVU Eye Institute to patients' hometowns were recorded using Google Maps driving estimates. The results of patients' dilated eye exams were recorded (noting severity as mild, moderate, severe, or proliferative for DR and absent or present for DME). Where feasible, these data were acquired from offices outside of WVU Medicine, either by viewing documentation that had already been uploaded to the Epic EMR by patients or their providers or by contacting these offices directly where references in PCP notes indicated completion of ophthalmic follow-up outside WVU Medicine and permission had been granted.
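A compact way to express the referral rule described above (any suspected DR or DME triggers referral, stricter than the baseline ICO/ADA thresholds) is sketched below; the function and labels are illustrative assumptions, not the program's actual decision software.

```python
# Illustrative referral helper for the rule described above; the handling
# of ungradable images follows the ICO/ADA suggestion discussed later.
def referral_recommendation(gradable: bool, dr: str, dme: str) -> str:
    if not gradable:
        return "refer: images could not be adequately obtained/assessed"
    if dr != "absent" or dme != "absent":
        return "refer: suspected DR/DME of any severity"
    return "routine: repeat teleretinal screening per PCP schedule"

print(referral_recommendation(True, "mild", "absent"))
# -> refer: suspected DR/DME of any severity
```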
Statistical analysis

Using the data review functions of Microsoft Excel, summations and calculations were performed with the data acquired from the 2,756 patients who were studied using teleophthalmology within our selected timeframe. The totals and percentages of each attribute of interest were calculated—gradeability and the totals and proportions of DR/DME severities in PCP screenings and subsequent dilated eye exams. Pearson's chi-squared tests were performed to compare the gradeability data found within different age ranges (18–49 years, 50–64 years, and ≥ 65 years) and the prevalence of DR within different HbA1c ranges (5.4–6.4%, 6.5–9.0%, and 9.1–14.0%). This method was also used to investigate the relationship between patient distance from the WVU Eye Institute and compliance to follow-up with dilated eye exams.
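As an illustration of the Pearson chi-squared comparisons described above, the test can be run on a 2×2 contingency table with scipy; the counts below are hypothetical stand-ins that merely mirror the reported 60% versus 43% follow-up proportions, not the study's raw data.

```python
# Hypothetical example of the chi-squared test used in this study;
# replace the counts with real group sizes to reproduce the analysis.
from scipy.stats import chi2_contingency

table = [[60, 40],   # within 25 miles: followed up, did not
         [43, 57]]   # beyond 25 miles: followed up, did not
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# The study's actual group sizes differ, so its p value differs too.
```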
Results

Through WVU Medicine's teleophthalmology screenings, 2,756 patients received screenings between January 2017 and June 2019 (first order date: 01/12/2017; last order date: 06/28/2019). Of these 2,756 patients, 2,327 photography results (84.43%) were deemed by retina specialists at the WVU Eye Institute to possess at least one gradable eye. Both eyes were deemed gradable in 1,940 patients (70.39%). Two hundred eighty-nine patients (12.4% of the patients with at least one gradable eye) had results noting some form of DR or DME. These patient cases were explored further, and it was found that 152 of these patients followed up in clinic within 12 months of their screening or prior to receiving another nonmydriatic screening with their PCP (124 within WVU Medicine, 28 with outside/external ophthalmologists). DR/DME was confirmed in 101 of these patients (Fig.). The confirmation of true positives with dilated eye exams enabled a calculation of the screening method's positive predictive value, which was 66.4% for the detection of true DR/DME pathology. Other pathology notes and specifiers were also recorded: 114 instances of age-related macular degeneration, 43 instances of hypertensive retinopathy, 60 instances of glaucomatous optic nerves, 17 instances of choroidal nevi, 3 instances of dot-and-blot hemorrhages, and 1 case of chorioretinal scar versus coloboma were noted throughout the data collection of screening results and subsequent dilated eye exam findings. The gradeability of the screening photographs varied by age in that patients aged 65 years and older were found to have statistically significantly fewer gradable eyes than patients younger than 65 years (63.9% versus 72.7%, respectively; p < 0.00001), and the mean age was 57.97 years (σ = 12.66) (Table). Breaking the under-65 group down further reveals statistically significant differences between the age ranges of 18–49 and 50–64 (75.6% versus 70.9%, respectively; p < 0.02), 18–49 and ≥ 65 (75.6% versus 63.9%, respectively; p < 0.000001), and 50–64 and ≥ 65 (70.9% versus 63.9%, respectively; p < 0.01) (Fig.). With 2,756 patients screened, 5,512 eyes were candidates for screening; 4,267 eyes (77.41%) were deemed gradable, and 1,245 eyes (22.59%) were deemed ungradable. No suspicion for DR was raised in 3,813 eyes (89.36%), and 4,119 eyes (96.53%) did not raise suspicion for DME. Some severity of DR was described in 451 eyes (10.6%), and 146 eyes (3.42%) showed some severity of DME. The majority of DR cases, 234 eyes (51.9%), were described as mild. Moderate DR was described in 161 eyes (35.7%), and 38 eyes (8.4%) were described as demonstrating severe DR. PDR was noted in 18 eyes (4.0%). Mild DME was described for 55 eyes (37.7%), moderate DME for 49 eyes (33.6%), and severe DME for 42 eyes (28.8%) (Fig.). Patient cases with DR/DME pathology were further investigated. Some note of an appointment being set was indicated in the EMR for 170 patients: 109 patients had record of an appointment being set within three months of their screening, 12 patients within six months, 17 patients within 12 months, and 16 patients beyond 12 months but prior to their next screening with their PCPs. It was found that some form of PCP follow-up occurred in 272 cases, or 94.1% of the patient cases in which DR/DME pathology was noted.
PCP follow-up was deemed to have occurred if there existed some recorded form of communication between the PCP and the patient in the EMR (e-mail, phone conversation, WVU MyChart messages, et cetera) or if it was indicated that the patient had viewed their results on WVU MyChart. Compliance with follow-up varied with the distance of the patient's hometown from the WVU Eye Institute: patients who resided within 25 miles demonstrated statistically significantly greater compliance with follow-up dilated eye exams than those residing farther than 25 miles away (60% versus 43%, respectively; p < 0.01) (Table). As mentioned previously, 28 of the 152 patients who followed up in clinic were found to have records available regarding their follow-up appointments for a dilated eye exam with an ophthalmologist outside WVU Medicine. Outside appointments made up 3% of the follow-up visits for those residing within 25 miles and 16% of the follow-up visits for those residing beyond 25 miles. Data collected from the follow-up exams revealed 187 eyes (61.5%) with DR and 67 eyes (22%) with DME. The majority of eyes with confirmed DR had mild DR (82 eyes, or 43.9%). Fifty-four eyes (28.9%) were diagnosed with moderate DR, 21 eyes (11.2%) with severe DR, and 30 eyes (16.0%) with PDR (Fig.). Regarding the diabetes status of patients under suspicion for DR/DME pathology on their initial screenings, 91% were diagnosed with type 2 diabetes and 9% with type 1 diabetes. The mean duration of diabetes (determined by calculating the time transpired between the date of the patient's photography and the earliest mention of a diabetes diagnosis or a historical account of such a diagnosis predating the EMR) was 6.8 years (σ = 5.3). The mean HbA1c was 8.9% (σ = 2.2) (Table). When the prevalence of DR/DME pathology (confirmed in clinic via dilated eye exam) was compared among patients falling within three ranges of HbA1c levels via Pearson's chi-squared tests, no statistically significant difference was found between the 5.4–6.4% range and the 6.5–9.0% range (p = 0.39). However, statistically significant differences were appreciated between the 5.4–6.4% range and the 9.1–14.0% range (p < 0.01) and between the 6.5–9.0% range and the 9.1–14.0% range (p < 0.01) (Fig.).
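Before turning to the discussion, note how the reported positive predictive value follows directly from the counts above; this tiny sketch is only a sanity check of the published figures.

```python
# PPV among screen-positive patients who completed a dilated eye exam.
# Sensitivity/specificity cannot be computed because screen-negative
# patients were not re-examined in clinic.
confirmed, followed_up = 101, 152
print(f"PPV = {confirmed / followed_up:.1%}")  # -> PPV = 66.4%
```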
Discussion

The success of an implemented teleophthalmology screening program is contingent on both the screening process and subsequent patient compliance with recommendations based on the screening results. Through this retrospective chart review concerning WV's teleophthalmology program, we have identified several areas of interest for improving our understanding of the screening process and its outcomes in this population. The first essential step of these programs is the acquisition of the fundus images. Accurate assessment is reliant on the successful attainment of gradable fundus photographs. Our investigation revealed that 84.43% of the 2,756 patients screened had at least one gradable eye, but both eyes were gradable in only 70.39% of cases. The proportion of images acquired and deemed gradable in our cohort was comparable to previous studies. For instance, Tarabishy et al. found that 95.1% of the 1,175 patients from whom they acquired 45-degree images with a nonmydriatic camera had gradable eyes. Benjamin et al., however, acquired 1,377 45-degree images via nonmydriatic fundus photography and found that 67.4% were gradable. Several factors may play a role in this variability of gradeability outcomes among studies. How photographers are trained, what equipment is utilized, and whether or not dilation is an option all need to be considered. Additionally, individual variation in photographers' thresholds for the number of attempts they make in acquiring images and in the number of attempts a patient may be willing to tolerate during their PCP visits may also influence these results. Time and resources at PCP offices are also likely to vary. While the equipment, guidelines, and absence of dilation were consistent among our screenings, these other factors are more elusive and may very well have impacted the outcomes we observed. Additional standardization and documentation of the imaging protocol would provide insight into the limitations of the current screening methodology. There is also the question of image grader variability. The concern of variability in deeming images gradable or ungradable could also be extrapolated to the image interpretations and the use of modifiers describing the extent of DR/DME observed. A limitation of this study was the use of single graders to evaluate the images, precluding the calculation of an inter-observer correlation. While we could not explore this aspect further in our own work, previous telemedicine investigations like those conducted by Liu et al. in 2019 have provided reassurance, concluding that, while uniform standards should be established to improve consensus on image gradeability, there is unlikely to be much variability among ophthalmologists when assessing diabetic retinopathy through these screening methods. When patients' screening results were found to raise suspicion for DR/DME, their cases were explored further to analyze concordance with subsequent dilated eye exam findings. However, in order to confirm the diagnosis, patients first needed to comply with the recommendation to see a specialist. We found that of the 289 patients who had DR/DME pathology noted on their screening results, only 152 (52.6%) complied with subsequent appointments for a dilated eye exam. This appears to be a problem among other teleophthalmology programs as well. The investigations reported by Bresnick et al. in 2020 noted the effectiveness of these screening modalities in identifying patients in need of further examination and possibly specialized care. However, they recognized that about half of their patients failed to keep their first ophthalmology appointments and have hence initiated a tracking/recall system to ensure that these at-risk patients do not miss this potentially crucial step in their vision care. We attempted to explore this aspect by investigating follow-up by PCPs. While we found that 94.1% of patients with noted DR/DME pathology on screening received some form of notice regarding their results, the notification method and content varied widely. Some patients received phone calls, e-mails, or WVU MyChart messages from their providers with explanations of their results and the appropriate next steps in their care. Others merely looked over the results themselves once uploaded and made viewable on WVU MyChart, possibly without any further explanation of what the results mean for their care. Some patients set appointments but failed to adhere.
Some never made appointments, and other charts contained notes suggesting that an outside ophthalmologist or optometrist was planned to be seen with regard to their vision care in general. The variability in this crucial step of teleophthalmology may have contributed to the lack of compliance we observed. Furthermore, we observed that follow-up recommendations varied by PCP. While the ICO/ADA guidelines suggest referral for any cases in which photographs cannot be adequately obtained or assessed, we noticed a discrepancy in how PCPs managed these results. As mentioned previously, some communication of the results was sparse, with some patients only seeing in WVU MyChart that their results were deemed ungradable. Other PCPs directly contacted their patients in some way, but sometimes repeat screening was chosen over referring patients to specialist care. While we chose to focus this study on the follow-up of positive screenings, this is indubitably concerning and warrants intervention for improved adherence to protocol, limiting the number of potential DR suspects who may be missing opportunities for diagnosis confirmation and subsequent care. When patients with suspected DR/DME did comply with follow-up, we found that 101 patients truly had DR/DME of various severities. While our study was limited in that we lacked the true- and false-negative data needed to explore sensitivity and specificity like previous studies (patients who screened negative did not report to clinic for confirmatory dilated eye exams), we were able to determine that the positive predictive value of our screening was 66.4%. One variable we anticipated having an impact on the gradeability results was patient age. As mentioned previously, a notably high proportion of WV's population is aged 65 years and older. The state of WV also demonstrates the greatest prevalence of diabetes. With these details in mind, it was suspected that age could influence the outcomes of this study. Through our investigations, we found that there was a statistically significant difference in image gradeability between patients aged 65 years and older and those younger than 65 years (63.9% versus 72.7%, respectively; p < 0.00001). We suspect that this may be related to other ophthalmic changes commonly associated with the aging eye. For instance, refractive status and cataract development could impact the clarity of the images obtained. Nonmydriatic cameras were utilized in our screenings, with attempts to maximize pupillary dilation achieved only by having patients wait in a dark room prior to screening. Given that pupillary diameter is known to decrease with age, this could have contributed to the significant difference in gradeability we observed among patient screenings in this population. Further research is required to determine if dilation in more elderly populations would substantially lower the rate of ungradable images. Interestingly, stratifying the data further into three age ranges elucidates a negative correlation between age and image gradeability. We found statistically significant differences between the age ranges of 18–49 and 50–64 (75.6% versus 70.9%, respectively; p < 0.02), 18–49 and ≥ 65 (75.6% versus 63.9%, respectively; p < 0.000001), and 50–64 and ≥ 65 (70.9% versus 63.9%, respectively; p < 0.01). Other relevant details were explored for patients with pathology noted on screening in order to compare to previously observed trends.
For instance, HbA1c severity has been shown to correlate with indicators of diabetic retinopathy severity and has served as a useful biomarker of chronic hyperglycemia, and blood glucose control has been shown to improve outcomes for retinopathy. With this in mind, we divided patients with suspected DR/DME pathology into three HbA1c categories: 5.4–6.4% to represent the prediabetes range (diabetic patients with presumably better glycemic control), 6.5–9.0% to represent a mid-range, and 9.1–14.0% to represent the most severe cases. While we did not find a statistically significant difference between the 5.4–6.4% and 6.5–9.0% ranges (p = 0.39), both of these ranges demonstrated a statistically significant difference when compared to the 9.1–14.0% range (p < 0.01 for each). Our mean HbA1c was 8.9% (σ = 2.2). These findings together appear to align with previous studies. The false-negative data were unfortunately not available for our study. Since we retrospectively studied a real-world application of teleophthalmology in which ophthalmic follow-up was not recommended for negative screenings, we were unable to confirm the true and false negatives. Previous studies, however, reveal data not dissimilar to those found in our study—demonstrating a greater proportion of absent or mild DR than more severe cases. Nevertheless, this does not make it possible to extrapolate true- and false-negative rates. Furthermore, our prevalence of DR/DME by screening is notably lower than expected when compared to pooled prevalence data reported in the literature. Globally, the prevalence of DR has been estimated to be 22.27%, with some studies estimating it as high as 34.6%. Our screening raised suspicion for DR/DME in only 12.4% of patients (with at least one gradable eye), and capturing an accurate prevalence of DR in our population is challenged further because only a subset of these patients maintained follow-up to confirm their diagnoses. However, variation in this prevalence data appears to be commonly reported among individual studies and population subgroups. These variations may be explained by aspects as technical as the differences in screening modalities or as fundamental as the patient demographics. Variables expected to influence the prevalence data include major risk factors, such as duration of diabetes and HbA1c. Our mean HbA1c and mean age, however, suggest these factors are less likely to be contributing to the lower-than-expected DR prevalence we observed, since they bear semblance to those of other studies. According to the findings reported by Sato et al., our mean duration of diabetes also suggests there was ample time for expected progression to PDR. It has also been reported that there is a significant difference in DR prevalence among different races, with a significantly higher DR prevalence in blacks and Hispanics (36.7% and 37.4%, respectively) compared to whites (24.8%). There also appears to be intra-ethnic variation. For instance, Yau et al. reported a significantly higher prevalence of DR in a U.S. Caucasian population compared to an Australian Caucasian population (35% versus 15.3%). Genetic and environmental risk factors may all play a role in disease progression and management, and these variations may render it difficult to assess whether the DR population of WV is being sufficiently addressed. However, they may also suggest that some variation is to be expected with the unique genetic and environmental makeup of a population.
Unlike past studies with greater representation of ethnic minorities, 93.1% of WV's population is white. Additionally, the state's rural setting and notably high rates of poverty, unemployment, and low education could impact the screening and subsequent follow-up on which this prevalence data relies. Interestingly, one study reported an unexpected lack of association between low socioeconomic status and higher grades of DR, which could be relevant to the socioeconomic impact in our WV population. Ultimately, it is difficult to pinpoint whether our lower-than-expected prevalence is due to false-negative screenings, ungradable images of patients with DR, or selection bias of our retrospective cohort. While we are unable to explore the true and false negatives, which is undoubtedly a valuable component in understanding the fundamentals of teleophthalmology, our findings seem to align with past findings while offering the value of context in the subsequent follow-up phase. Furthermore, we had adjusted the ICO/ADA guidelines in hopes of minimizing false negatives. While current recommendations do not necessarily require referral to a specialist for cases of suspected mild non-proliferative DR, this program recommended referrals for all cases of suspected DR on screening. Not only are these recommendations comparable to those followed in previous teleophthalmology studies in other settings, but they also granted some advantages relevant to patient care. For instance, the limited view and gradeability of our images may warrant concern for potentially missing false-negative moderate-to-severe cases of DR that require more immediate referrals as per the ICO/ADA guidelines. Given our awareness of the technological limitations and our later appreciation of the gradeability and limited-view concerns, it was important that even suspected mild cases of DR be investigated further to limit missed cases of moderate-to-severe DR. As our prevalence data revealed, a larger proportion of patients who followed up with ophthalmologists had confirmed cases of moderate, severe, and proliferative DR. We also found that of the population with exclusively mild DR suspected on at least one screening image, 48% completed follow-up, and 5% of these patients were noted to have moderate, severe, or proliferative DR and/or the presence of DME, providing perhaps some support for these recommendations in the given context. Still, 36% of these patients with suspected mild DR had at least one eye with mild DR exclusively, and 59% had bilateral absence of DR/DME. However, with only 56% of suspects for moderate or severe non-proliferative DR, PDR, and/or DME following up (52.6% of DR suspects across all severities), it is apparent that the referral process and patient adherence are important areas in need of improvement for this program and should be key points to consider for other hospital systems hoping to adopt similar programs. It is also paramount to consider that the technology involved in teleophthalmology is constantly evolving. Automation based on artificial intelligence has been proving its effectiveness in recent studies. Likewise, upgrades to imaging technologies have also been promising. We utilized handheld, nonmydriatic cameras to take 45-degree images, but newer systems could grant specialists improved field of view and resolution. Ultrawide field technology, for instance, has shown notable success.
According to the findings of Silva et al., ultrawide field imaging technology has been shown to reduce the number of ungradable eyes by 81% . Improved field of view is also important for accurate disease interpretation. In addition to retinopathy, these improvements have implications for the use of telemedicine in addressing other pathology as well. In our study, we noted an abundance of other ocular pathologies, including age-related macular degeneration, hypertensive retinopathy, glaucomatous optic nerves, and choroidal nevi, and improvements in imaging would certainly benefit their identification. Our teleophthalmology program hopes to upgrade the imaging technologies we utilize in order to improve on the outcomes observed in the current study and to enable comparisons in the future.
However, several important obstacles remain. Regardless of the improvements we achieve in our screening methods, the outcomes could fall short if patient compliance with follow-up does not improve. We suspected that the unique rural setting of West Virginia could play a role in this, and we found a statistically significant difference in follow-up between patients who resided within 25 miles of the WVU Eye Institute and those who resided beyond 25 miles (60% versus 43%, respectively, p < 0.01). A potential limitation of this study involves possible follow-up with external providers throughout the state. Some patients’ providers had uploaded documentation regarding outside care. For others, we managed to find documentation mentioning ophthalmologists outside WVU Medicine. With permission and when feasible, we acquired documentation regarding follow-up visits from these outside offices. Unfortunately, many patient follow-up appointments were likely still missed. Nevertheless, access to specialized care is a challenge for patients, and poor compliance with follow-up appointments may not be an uncommon issue among telemedicine programs. For instance, Peavey et al. found poorer follow-up among socioeconomically disadvantaged patients with milder DR severities in a predominantly rural population . As mentioned previously, Bresnick et al. noted similar drops in compliance and planned to implement systems to hasten the delivery of results, improve engagement with patients when explaining their results and the implications for their vision, and reduce the window between result delivery and referral placement . While we found numerous studies conducted in urban settings that shared common ground regarding socioeconomic obstacles , we believe our investigation of this statewide program in a rural setting is unique and possibly useful to other programs.
Teleophthalmology programs aiming to connect diabetic patients with specialist care through more accessible, feasible, efficient, and cost-effective screening approaches have the opportunity to improve outcomes for an ever-growing population of patients at risk of sight-threatening pathology. However, there are numerous obstacles to consider, one being the inherent geographic concern that is especially relevant to rural areas. New or current programs operating under similar circumstances might find a basis for comparison in our findings to set expectations and begin addressing the next steps that follow the screening: ascertaining that correctly identified patients adhere to follow-up.
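For the 25-mile follow-up comparison noted above (60% versus 43%, p < 0.01), a chi-square test of independence on a 2x2 contingency table is one standard way to obtain such a p-value. The sketch below uses hypothetical counts chosen only to mirror the reported rates; the study's actual group sizes and test choice are not stated here.

```python
# Sketch of the distance-vs-follow-up comparison (within vs beyond 25 miles).
# A chi-square test on a 2x2 contingency table is one standard approach;
# the counts below are hypothetical, chosen only to mirror the reported rates.
from scipy.stats import chi2_contingency

#                 followed up, did not follow up
table = [[120, 80],    # resided within 25 miles (~60% follow-up)
         [129, 171]]   # resided beyond 25 miles (~43% follow-up)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```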
Preferably, this can be accomplished within the hospital system or with external providers who can ensure the pipeline from screening to appropriate care is not broken. Working on methods to improve access to specialist care in WV and to optimize and standardize the process of scheduling appointments for those identified by our screening as needing dilated eye exams (standardizing PCP protocols/education and tracking these referrals and subsequent adherence) will be an important challenge to address as we seek to maximize the benefit of this teleophthalmology system and the quality of care it promotes.
The success of teleophthalmology is contingent on a variety of factors. Many of these factors, such as age and distance from specialist care, were explored in this real-world application of teleophthalmology. These factors may be especially impactful in a rural setting, but they may also be applicable to teleophthalmology programs in other settings. While the implementation of telehealth technologies has facilitated the expansion of effective screening, follow-up confirmation of suspected diagnoses and appropriate initiation of treatment may remain hindered. This was especially suggested by the negative relationship we noted between distance from specialists and follow-up compliance among our patient population. The screening methods and statewide implementation of the program thus far among participating PCP sites have enabled extensive screening and identification of pathology. Improvements in equipment may also be promising for enhancing the accuracy of these screening approaches and for addressing our image gradeability concerns in the aging population. However, in order to improve outcomes in DR/DME patients and diabetic patients at risk of developing these sight-threatening pathologies, providers and program developers should be cognizant of the limitations brought about by geography and lack of convenient access to specialist care.
Acceptability of patient-centered, multi-disciplinary medication therapy management recommendations: results from the INCREASE randomized study
a18be7e6-58e4-4e8f-970d-0ccca5ec43d6
9999619
Patient-Centered Care[mh]
Many prior studies have provided evidence that medication therapy management (MTM) can lead to improved health and economic outcomes [ – ]. MTM involves five core components: availability of a personal medication record, medication therapy review, development of a medication-related action plan, intervention and/or referral, and documentation and follow-up of medication changes or lack thereof [ – ]. Though most MTM services share these five basic elements, there is heterogeneity in how they are operationalized. Specifically, there is variability in how potentially inappropriate medications (PIMs) are identified, whether certain medications are targeted specifically, the types of recommendations made, and patients’ acceptance of the proposed changes from an MTM intervention. Additionally, patient and pharmacist engagement with prescribing clinicians varies, [ – ] though evidence shows that pharmacist-prescriber-clinician teams engaging together in MTM activities result in better medication optimization outcomes [ , – ]. It is important to characterize MTM-related services in collaborative practices in order to estimate their impact on patient health outcomes, especially for MTM services targeting vulnerable populations such as older adults receiving PIMs.
We recently completed the INtervention for Cognitive Reserve Enhancement in delaying the onset of Alzheimer's Symptomatic Expression (INCREASE) study, a randomized controlled trial in which we tested an MTM intervention that actively involved the patient, a board-certified geriatric pharmacy specialist (BCGP), and a non-pharmacist clinician . INCREASE was designed to evaluate the effect of the MTM intervention on changes in medication appropriateness and cognitive function; study data included comprehensive information on health history, medication use and experience with medication taking, as well as the process of implementing the MTM intervention. We previously reported on the successful implementation of the MTM intervention, which translated into improved medication appropriateness at the one-year follow-up . The current study characterizes the stepwise process of delivering the MTM intervention in the INCREASE trial with the goal of helping to fill a qualitative gap in the literature surrounding MTM interventions, specifically focused on patient-centered, multidisciplinary approaches. The specific approach described, including details of the process, provides a model for future evidence-based, multidisciplinary MTM interventions that may be implemented rationally in practice.
The objectives of the current manuscript are twofold: (1) describe the recommendations made by the study BCGPs using participant-reported medical and medication histories for all INCREASE participants, prior to randomization to either the MTM intervention (specific medication recommendations plus provision of educational materials on inappropriate medication use) or usual care (i.e., only provision of educational materials on inappropriate medication use), and (2) describe final recommendations for patients randomized to the MTM intervention. The second objective describes (a) revisions to the preliminary baseline MTM recommendations over the course of the intervention, and (b) participant response to the MTM recommendations following the intervention.
INCREASE study overview The INCREASE study was a randomized controlled trial enrolling community-dwelling adults 65 years and older who did not have dementia and were using at least one PIM as defined in the 2015 Beers Criteria (the most recent version at the time of the study) . Complete details of the INCREASE protocol and results are available elsewhere and briefly described below . After 1:1 randomization that was stratified based on baseline amyloid burden, participants randomized to the control group received usual care with educational pamphlets on medication appropriateness for older adults and risks associated with polypharmacy. In addition to educational materials, participants randomized to the MTM intervention met with the BCGP and a non-pharmacist study clinician (e.g., nurse practitioner, neurologist) to discuss the baseline recommendations. This meeting allowed for 1) participant education on risks, benefits, and alternatives to optimize medication use; and 2) the collection of additional relevant information, including participant beliefs, preferences, and treatment goals. During the MTM team meeting, final recommendations were formalized, and the details of any relevant revisions to the baseline recommendation were noted in the pre-specified data collection forms. The INCREASE study was approved by the University of Kentucky Institutional Review Board (IRB #43239) and all the study participants provided informed consent. The protocol for the study was registered on clinicaltrials.gov (NCT02849639) on 29/07/2016, in accordance with the relevant guidelines and regulations and with the Declaration of Helsinki. Study data were collected and managed using Research Electronic Data Capture (REDCap), a secure, web-based software platform designed to support data capture for research studies .
Baseline recommendations (all INCREASE study participants) Before randomization, comprehensive medication reviews were conducted by BCGPs for all participants using participant-reported medical conditions and information on dose, frequency, indication, duration of treatment, tolerability, and adverse drug reactions for all prescription medications, vitamins, and supplements. The BCGP medication review process involved 1) assessing the clinical appropriateness of each medication using the Beers Criteria and Medication Appropriateness Index (MAI); 2) evaluating potential drug-drug and drug-disease interactions in accord with the above, also taking into account prescription label information; and 3) assessing whether medication regimens followed relevant disease-specific evidence-based guidelines [ , , ]. Of note, blood laboratory work results, electronic medical records, and previous therapies (e.g., medication failures) were not available to BCGPs when devising baseline recommendations, but were available to the clinician member of the MTM team. Following randomization, the MTM recommendations were only shared with those participants randomized to the intervention group ( N = 46). Recommendations for the control group were recorded in the study database but not shared with those participants. During the INCREASE study period, the pharmacy team of two BCGPs utilized drug and health information resources (e.g., Lexicomp and UpToDate [Wolters Kluwer Health Inc. Riverwoods, IL]), Beers Criteria , relevant guidelines (e.g., Diabetes Standards of Care and Clinical Practice Guidelines for Hypertension ), and clinical judgement to justify their recommendations.
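As an aside on the allocation step described at the start of this overview, 1:1 randomization stratified by baseline amyloid burden is commonly implemented with permuted blocks within each stratum. The sketch below illustrates that approach; the block size and stratum labels are illustrative assumptions, as the trial's exact allocation mechanism is not detailed here.

```python
# Minimal sketch of 1:1 randomization stratified by amyloid burden,
# using permuted blocks of 4 within each stratum. The block size and
# stratum labels are assumptions for illustration, not trial specifics.
import random

def assign(stratum, state, block_size=4):
    """Draw the next arm for a participant in the given stratum."""
    if not state.setdefault(stratum, []):
        block = ["MTM", "control"] * (block_size // 2)
        random.shuffle(block)            # permuted block keeps arms balanced
        state[stratum] = block
    return state[stratum].pop()

state = {}
participants = [("p01", "amyloid-high"), ("p02", "amyloid-low"), ("p03", "amyloid-high")]
for pid, stratum in participants:
    print(pid, stratum, assign(stratum, state))
```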
Each recommendation was reviewed by both BCGPs and a consensus pharmacy recommendation was reached via discussion. Detailed information for each recommendation was then entered into a series of pre-specified study protocol data collection forms, allowing for systematic categorization of recommendations as one of: 1) medication discontinuation with or without tapering; 2) switch to a different medication; 3) dose adjustment (e.g., decrease dose, adjust dose for organ function/tolerability, or increase dose); 4) new medication initiation; 5) drug or disease monitoring recommendation (e.g., vital signs, falls risk, sedation); or 6) a non-pharmacologic recommendation (e.g., sleep hygiene, avoiding gastroesophageal reflux triggers, referral for diagnostic workup). Baseline recommendations were also categorized by pharmacologic class and by over-the-counter (OTC) or supplement status of the medication prompting a baseline MTM recommendation. A full schematic for medication categorization is available in the supplementary material (see Supplementary Table S ).
Final recommendations (MTM intervention group only) After 1:1 stratified randomization, study pharmacists met with the participant and study clinician to deliver the MTM intervention. During the intervention, the team gathered further information from the patient and discussed baseline recommendations together, in person, with additional context provided by the participant on their health status, needs, and preferences. Because health status and medication use in participants may have changed in the time between the baseline assessment and the initial MTM recommendation, comparison of baseline to final recommendations was limited to those baseline MTM recommendations that proposed medication changes at the time of the initial MTM study visit. The non-medication-related recommendations (see supplementary material for additional information) were discussed during the team MTM intervention, but they were not included in the present analysis. Participant responses to each final MTM recommendation for participants randomized to the MTM intervention were collected at the conclusion of the initial MTM intervention visit using a standardized form on which the participant selected his or her response to the recommendation as 1) willing to change, 2) refusing to change, 3) needing to confer with a primary care provider or other specialist (e.g., cardiologist), or 4) not applicable (e.g., the participant had already discontinued the medication, dose adjustment was no longer warranted per clinical judgement). In this manuscript we describe in detail the immediate participant response as recorded following the baseline intervention. The impact of the intervention on medication appropriateness is described in detail elsewhere .
Baseline characteristics Of the 104 participants screened, 90 were eligible and randomized in the INCREASE study. Of these, 46 participants were randomized to the MTM intervention group. The mean (SD) age at enrollment was 73.9 years (6.0). The majority of the participants reported female gender (64%) and white race (89%), with an average of 16.5 (2.8) years of education. The mean Charlson Comorbidity Index score was 1.9 (1.9), with participants reporting an average of 12.8 (4.8) total medications; 2.4 (1.4) medications per participant were identified as PIMs per the 2015 Beers Criteria. Supplementary Table S provides additional information on baseline characteristics for all the participants in the INCREASE study as well as for those randomized to the MTM intervention.
Baseline recommendations (all INCREASE study participants) A total of 602 pre-randomization recommendations were made across the 90 INCREASE participants, averaging 6.7 ± 3.3 MTM recommendations per participant and ranging from 1 to 17 baseline recommendations per participant (median [IQR] of 7 [4, 8.9]). Table shows the distribution of medication categories associated with baseline recommendations and the types of recommendations provided. The most common class of medications with recommendations was cardiometabolic agents ( N = 138, 23%), followed by medications for gastrointestinal conditions ( N = 102, 17%), pain management ( N = 87, 15%), anticholinergics ( N = 77, 13%), vitamins and supplements ( N = 76, 13%), neuropsychiatric agents ( N = 67, 11%), and other medications ( N = 55, 9%). Across all baseline recommendations, one-third ( N = 201, 33%) were prompted by use of PIMs available on the US market as over-the-counter (OTC) products without a prescription. The most frequent OTC medications included proton pump inhibitors, vitamins/supplements, antihistamines, OTC non-steroidal anti-inflammatories, aspirin, and H2 receptor antagonists. The most common type of baseline recommendation was continuation of therapy with dose adjustment (e.g., decrease pain medication dose, intensify antihypertensive medication dose) ( N = 170, 28%). The second most common was a therapeutic switch to a less risky pharmacotherapeutic alternative ( N = 166, 28%; e.g., de-escalate from a proton pump inhibitor to an H2 receptor antagonist ± calcium-based antacid; switch from a first-generation to a non-sedating second-generation antihistamine). Monitoring ( N = 101, 17%) and non-pharmacologic recommendations ( N = 76, 13%) accounted for about one-third of all baseline MTM recommendations. The most frequent monitoring recommendations involved objective testing (e.g., blood pressure, blood chemistry/organ function tests) and recording self-reported measures (e.g., dizziness, pain). Non-pharmacologic recommendations most frequently involved counseling for fall prevention strategies with and without physical therapy referral, dietary and lifestyle changes for gastrointestinal conditions, non-pharmacologic pain management, and sleep hygiene. Although recommendations to discontinue medications were relatively less frequent ( N = 46, 8%), the medications most commonly associated with a baseline MTM recommendation to discontinue included vitamins/supplements and medications with therapeutic duplication (e.g., a participant taking two separate antihistamines for seasonal allergies). All recommendations for initiation of a new medication ( N = 43, 7%) involved treating an unmet clinical need and/or initiating a preventative medication, most often a guideline-recommended statin or aspirin in the setting of cardiovascular risk factors.
Final recommendations (Intervention Group only) Following randomization, INCREASE participants who were assigned to MTM ( N = 46) met with the BCGP and a non-pharmacist clinician. There were 296 baseline recommendations across the MTM arm’s participants. Of these, 37 recommendations (12.5%) proposed at baseline did not relate directly to a medication change and were therefore excluded from the final recommendation analysis included in this manuscript. An account of these 37 excluded recommendations is provided in Supplementary Table S .
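Tabulations such as those above (e.g., dose adjustments: N = 170, 28% of 602) can be produced directly from the categorized data collection forms. A minimal sketch follows; the example records are hypothetical, not INCREASE data.

```python
# Sketch of tabulating recommendation types from categorized records,
# reproducing percentages like "N = 170, 28%" from the raw counts; the
# example records are hypothetical, not INCREASE data.
from collections import Counter

records = [
    {"id": 1, "type": "dose adjustment"},
    {"id": 2, "type": "therapeutic switch"},
    {"id": 3, "type": "monitoring"},
    {"id": 4, "type": "dose adjustment"},
]

counts = Counter(r["type"] for r in records)
total = sum(counts.values())
for rec_type, n in counts.most_common():
    print(f"{rec_type}: N = {n}, {100 * n / total:.0f}%")
```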
Finalized, unblinded MTM recommendations that were directly related to a medication change comprised 259 of the original 602 blinded baseline recommendations, averaging 5.6 (SD 2.3) MTM recommendations per participant. The distribution of final recommendations by medication category was as follows: cardiometabolic ( N = 58, 22%), pain management ( N = 42, 16%), vitamins and supplements ( N = 38, 15%), anticholinergics ( N = 32, 12%), gastrointestinal ( N = 32, 12%), neuropsychiatric ( N = 31, 12%), and other ( N = 26, 10%). The distribution by recommendation type was as follows: dose adjustment ( N = 98, 38%), switch to preferred agent ( N = 92, 36%), drug and disease monitoring ( N = 30, 12%), discontinuation ( N = 26, 10%), and initiation of a new medication ( N = 13, 5%). Table shows the results of the patient-pharmacist-clinician team MTM interventions after randomization. Less than half of the baseline recommendations were revised through the team discussion and deliberation process ( N = 104, 40%). Baseline recommendations were least likely to be revised for vitamins/supplements and cardiometabolic medications, or when a dose adjustment or new initiation was recommended. Conversely, baseline recommendations were most likely to be revised when they involved GI therapy and pain management medications, or when medication monitoring or discontinuation was recommended. The most frequent reasons for revisions were missing information relevant to the participant’s medical history (e.g., a missing diagnosis of Barrett’s esophagus warranting proton pump inhibitor use) and/or missing medication information (e.g., previous failure or intolerability of a guideline-preferred pharmacotherapeutic agent). Upon receiving the finalized MTM recommendations, participants responded about half the time that they were willing to make the changes proposed ( N = 118, 46%), often needed to confer with a primary care provider or other clinical specialist ( N = 99, 38%) before making a decision, but rarely refused to make the proposed changes ( N = 15, 6%). In some cases ( N = 27, 10%), the recommendation was no longer clinically relevant and participant responses were recorded as not applicable. Lack of applicability arose from medication use having been appropriately modified since baseline medication use information was collected ( N = 11), or from the proposed medication change no longer being clinically relevant given additional information from the participant and/or MTM team discussion ( N = 16). A full account of these 27 recommendations is provided in Supplementary Table S . Participant willingness to adopt recommended MTM changes was highest for vitamins/supplements and anticholinergic agents, and for recommendations involving a pharmacotherapeutic switch. Participants most often responded that they needed to confer with a primary care provider or other specialist when the MTM recommendations included psychiatric, GI, and cardiometabolic medications, or involved dose adjustments or medication switches. Participant refusal to adopt final recommended changes ( N = 15, 6%) was low across all medication categories and recommendation types in the INCREASE trial MTM intervention. Refusal was highest among recommendations involving vitamins and supplements ( N = 4) or pain management ( N = 4), as well as for recommendations involving medication discontinuation ( N = 5).
This study describes MTM recommendations for participants enrolled in the INCREASE trial. The most common medication categories flagged at baseline included 1) cardiometabolic medications, 2) gastrointestinal medications, 3) pain management medications, 4) anticholinergics, and 5) vitamins/supplements. The most common types of recommendations made at baseline were 1) dose adjustments and 2) switches to more appropriate therapeutic alternatives. Notably, BCGP recommendations were not strictly medication related. In this study, many MTM recommendations did not directly involve a medication change, but rather addressed other potential medical problems (e.g., disease monitoring, referral for diagnostic workup or physical therapy, addition of non-pharmacologic therapies). Each of the top five medication categories identified in the analysis of baseline recommendations included at least some OTC medication options, and one-third of baseline MTM recommendations involved a medication available OTC. OTC products are available without a prescription and were frequently identified as PIMs (13% of all baseline recommendations and 15% of final recommendations). Thus, our study points to the importance of educating patients on the risk–benefit profile of OTCs and the role of pharmacists in OTC stewardship.
Among the final MTM recommendations analyzed, 40% underwent revision compared to the baseline MTM recommendation provided. This reflects the potential for several factors to influence recommendations as more information is gathered in a multidisciplinary MTM intervention. Notably, input from the patient on previous therapies, medication tolerability, feasibility/adherence, and condition severity may help inform the MTM team’s final decision-making process. Our results demonstrate that engaging the patient in a team-based intervention may result in patient-motivated revisions to baseline recommendations. This comparison of pre-intervention recommendations to final recommendations after team deliberation has not been discussed in previous literature.
Participant responses indicated willingness to make recommended changes about half of the time and a need to confer with a primary care provider or other clinical specialist about one-third of the time. This was interpreted as generally positive, since participants were most often willing either to accept the final recommendation as specified or to further engage with another healthcare provider to seek additional medical advice. While participant refusal to change was generally low, our findings suggest that patients may be less willing to adopt MTM recommendations for certain categories of medications or for certain recommendation types. Previous literature has addressed acceptability of MTM recommendations [ – ]; however, the recommendation type and medication category have not been described in relation to participant willingness to make changes. Further research is needed to determine if willingness to adhere long-term to recommendations is impacted by the type of recommendation and medication in question.
Though extensive medication and medical histories were collected from participants, the baseline recommendations were limited to self-reported information before randomization, and complete clinically relevant information was not always available to the BCGP at baseline (e.g., renal function from an electronic medical record). This finding indicates that pharmacists engaged in MTM processes need access to relevant clinical information and an opportunity for direct engagement with the prescribers and the patient, who have first-hand knowledge of such clinical variables. In addition, chart review may not capture all information necessary to make a patient-centered MTM recommendation, which has not been reviewed in previous literature [ – ]. As health status, medication use, and tolerability change over time, there is a need to routinely review previous recommendations and adjust them as needed to reflect the patient’s current needs.
There are several limitations to this study. The 2015 Beers Criteria was used in the study, as it was the latest version at the time. During the INCREASE trial, updated Beers Criteria were published by the American Geriatrics Society in 2019 . As an example, the 2015 Beers Criteria recommended caution in aspirin use for primary cardiovascular event prevention among adults aged ≥ 80 years. In contrast, the 2019 Beers Criteria expanded the recommendation to caution in aspirin use for both primary cardiovascular event and colorectal cancer prevention for adults aged ≥ 70 years. This may limit generalizability, as medical treatments, guidelines, and prescribing patterns evolve over time in response to scientific evidence. Another limitation of this descriptive study is that the INCREASE participant experiences may not be generalizable to populations with different distributions of demographic and health characteristics. Similarly, local prescribing practices and the use of PIMs observed in the INCREASE trial may not be representative of the entire US population today. Additionally, the number of study pharmacists and prescribing clinicians was small. The ability for multiple pharmacists to independently review and adjudicate the categorization of MTM recommendations would strengthen future studies. Though this study adds to the body of descriptive literature on baseline MTM trial recommendations, further studies in diverse populations are needed to identify culturally appropriate MTM strategies, as well as to allow a more detailed examination of prescribing inequities that might influence MTM outcomes over time.
When teams gather to critically evaluate an individual’s medication use process (i.e., diagnostician/prescriber, dispenser, and medication user), open dialogue may facilitate transparency in strategic medication use decisions. Negotiation of an evidence-based approach to medication use should be tailored to the individual’s unique combination of diseases, medications, clinical status, and, very importantly, personal preference, which may have its roots in cultural, social, racial, and ethnic diversity. It is important to note that not all MTM interventions are equivalent. The INCREASE trial modeled its intervention on a foundation of multidisciplinary team interaction with active participant engagement. This is often beyond the scope of traditional, community pharmacy-based MTM models in practice today.
The present results suggest that medical advice from a patient-centered team with multiple healthcare perspectives is appealing to patients and may elicit stronger patient acceptance of MTM recommendations. Further studies characterizing patient responses to different MTM models are needed to determine whether qualities such as mode of delivery and multidisciplinary involvement impact long-term recommendation adherence and/or influence patient outcomes. Multidisciplinary interventions such as the pharmacist-clinician-patient MTM team used in the INCREASE study may hold promise for improving health-related outcomes among community-dwelling persons. Thorough characterization of MTM interventions is needed to specifically describe the nuances of MTM approaches for making recommendations. It is also critical for guiding future endeavors in the area of MTM science. The present data demonstrate that the recommendations suggested by patient-centered multidisciplinary healthcare teams can be dynamic and complex, and that participant responses may vary depending on the medication targeted and the type of recommendation proposed.
Additional file 1: Supplementary Table S1. Medication categorization schematic for medications prompting baseline medication recommendations in the INCREASE trial. Supplementary Table S2. Baseline characteristics of all INCREASE trial participants, and those randomized to the MTM intervention. Supplementary Table S3. Full account of baseline recommendations for the MTM intervention arm that were excluded from final recommendation analysis (N=37). Supplementary Table S4. Full account of final recommendations designated as not applicable (N=27)
Utility of P63 in Differentiating Giant Cell Tumor from Other Giant Cell-Containing Lesions
568ab7dd-9e39-4d5d-bc48-58ea4013a182
9999691
Anatomy[mh]
Morphology in correlation with clinical and radiological findings is the cornerstone of the diagnosis of primary bone tumors. The giant cell-rich tumors of the bone are morphologically distinct entities that share the presence of multinucleated osteoclast-like giant cells ( ). With the advent of minimally invasive procedures, the material obtained for the initial diagnosis of primary bone tumors is often limited and poses a diagnostic dilemma. Though routine morphology is sufficient in most cases, immunohistochemistry (IHC) helps to resolve the diagnostic difficulties that are especially encountered in small biopsies with atypical morphology and ambiguous imaging. Until the advent of anti-histone antibodies, there was no well-established diagnostic marker for giant cell tumor of the bone (GCTB). Studies have shown conflicting results regarding overexpression of p63 by IHC and molecular methods in the stromal cells of GCTB ( ). In this article we have assessed the expression of p63 in giant cell-containing lesions of the bone and determined its utility in differentiating GCTB from other giant cell-containing lesions of the bone (GCLBs).
The study included non-consecutive, histologically verified cases of various GCLBs for which paraffin blocks were available for IHC. The clinical features, location and imaging findings were retrieved from the medical records. The diagnosis was made on 42 curettage specimens, 6 open biopsies and 5 resected specimens. The hematoxylin and eosin-stained sections of all the cases were reviewed along with the clinical, imaging and other relevant laboratory findings to confirm the original diagnosis. The appropriate paraffin block was selected for IHC after examining the representative hematoxylin and eosin-stained sections. Decalcified sections and areas of hemorrhage and necrosis were excluded. IHC was performed on 3-4 µm thick sections using a mouse monoclonal antibody against p63 (pre-diluted, ready-to-use antibody, Biogenex). The percentage of nuclear positivity was assessed in the non-giant cell component after counting a minimum of 500 nuclei in the hot spots. The intensity of staining was evaluated as weak (+1), moderate (+2) or strong (+3). Moderate to strong intensity nuclear staining in >1% of the cells was considered positive. Scoring was applied by two pathologists independently and the average of the two scores was taken into account. IHC was performed in batches, and slides with a positive control were included in every batch. Statistical analysis was performed using the Mann-Whitney U test, and a p-value of <0.05 was considered significant. A receiver operating characteristic (ROC) curve analysis was done to determine the cut-off value of p63 positivity in order to predict the diagnosis of GCTB. Both tests were done using SPSS software version 20.
Of the 53 cases studied, the majority were GCTBs (23), followed by 12 cases of chondroblastoma (CBL). The other GCLBs studied included 6 aneurysmal bone cysts (ABC), 3 cases of non-ossifying fibroma (NOF), 2 cases each of brown tumor of hyperparathyroidism (BTH), giant cell lesion of small bones (GCLSB) and chondromyxoid fibroma (CMF), and 1 case each of giant cell-rich reparative granuloma (GCRG), osteoblastoma and telangiectatic osteosarcoma. Regarding the 23 GCTBs, the age of the patients ranged from 14 to 69 years with a mean age of 30.18 years. There was a slight male predominance with a M:F ratio of 1.3:1.
The patients presented with pain and swelling, involving the distal femur and proximal tibia in 18 patients, the distal radius in 2 patients, and the base of the proximal phalanx of the right ring finger, the left third metacarpal and the proximal humerus in one case each. The plain radiographs of GCTB involving various sites are illustrated in (A-D). The duration of the symptoms ranged from one month to 18 months. On histopathology, all showed a characteristic biphasic pattern with spatial arrangement of the osteoclast giant cells amidst the mononuclear cells, as shown in (E and F). The nuclei of the mononuclear cells resembled those of the giant cells, which were large and had 40 to 50 nuclei. There was no clustering of giant cells. Osteoid formation was not seen. Aneurysmal bone cyst-like changes were noted in 7 cases. However, benign fibrous histiocytoma-like areas were not seen in any of the cases.
Regarding the 12 CBLs, the age of the patients ranged from 12 to 35 years with a mean age of 18.1 years and a M:F ratio of 1.4:1. The majority were located in the distal femur (4 cases), followed by the proximal tibia (3 cases) and the proximal femur (2 cases). One case each was located in the distal fibula, calcaneum and manubrium sterni. The duration of the symptoms ranged from 2.5 months to 3 years. On histopathology, the osteoclast-like giant cells were randomly distributed. The mononuclear cells were uniform, round to polygonal, with well-defined cytoplasmic borders and longitudinal nuclear grooves. Pink hyaline cartilage and pericellular lace-like chicken-wire calcifications were also noted. Aneurysmal bone cyst-like changes were noted in 2 cases. The plain radiographs and histopathological findings of CBL are illustrated in (A-F).
The mean age of the ABC patients was 21 years, and the lesions were primarily located in the humerus (3 cases), the vertebral bodies (2 cases) and the proximal femur (1 case). On microscopy, there were blood-filled cystic spaces separated by fibrous septae containing osteoclast-like giant cells and proliferating fibroblasts, along with reactive woven bone rimmed by osteoblasts. The three NOF patients presented with a lytic lesion in the tibia and femur. The two cases of giant cell lesion of the small bones (fourth metacarpal and middle phalanx of the right middle finger) are now considered solid ABC, whereas the term GCRG of the jaw (1 case) is still retained as such in the recent World Health Organization classification of soft tissue and bone ( ). The giant cells showed clustering, with fewer nuclei, as opposed to the uniform distribution of the giant cells in GCTB. Both cases of CMF were located in the left tibia. The two cases of BTH were located in the mandible and left tibia. These patients had elevated serum calcium and parathormone levels and were later found to have parathyroid adenomas. The imaging and histopathological findings of various giant cell-containing lesions are shown in (A, B, D, E, G, H, J and K). A single case of osteoblastoma was located in the L4 vertebral body, and a case of the telangiectatic variant of osteosarcoma involved the left occipital bone.
All the GCTBs showed strong nuclear positivity in the stromal cells, as depicted in (G and H). The percentage of cells displaying p63 immunostaining ranged from 50.5% to 71%, except for one case located in the distal femur that had a positivity of 14%. None of the cases showed any evidence of nuclear staining in the multinucleate giant cells.
All the other GCLBs, except one case each of CBL and BTH, showed p63 staining in the non-giant cell component/stromal cells. Of the 11 cases of CBL that were positive for p63, 9 had weak to moderate intensity staining in less than 50% of the cells, as shown in (G and H). The mean p63 labeling in GCTB (56.2%) was much higher compared to CBL (28.3%), ABC (15.2%), NOF (24.5%), GCLSB (11%), BTH (6.8%) and CMF (12.3%). A single case each of osteoblastoma, GCRG and telangiectatic osteosarcoma showed nuclear staining in 52.5%, 45% and 34.5% of the cells, respectively. The p63 positivity was significantly higher in patients with GCTB compared to non-GCTB, as analyzed by the Mann-Whitney U test (U=46.5, p<0.001). ROC analysis showed a cut-off value of 49.75 for p63, with a sensitivity of 95% and a specificity of 90% for diagnosing GCTB and an area under the curve (AUC) of 93.3%, p<0.001. The staining of p63 in CMF, ABC, GCLSB and BTH is shown in 3C, 3F, 3I and 3L respectively. The location and distribution of p63-positive staining cells in GCTB and various GCLBs are provided in .
GCLBs are a heterogeneous group of tumors and tumor-like lesions of the bone with a wide range of differential diagnoses. Definite diagnosis is challenging in the setting of limited sampling and unusual age or location at presentation. The morphology of the mononuclear cells gives a clue to the diagnosis. However, the key diagnostic component may be underrepresented in a biopsy. Secondary changes like ABC, which frequently accompanies GCTB or CBL, may obscure the original morphology and overshadow the underlying primary tumor in biopsy specimens ( , ). This study showed p63 expression in all cases (23/23) of GCTB. All cases except one showed more than 50% nuclear positivity. The intensity of the staining was strong and was limited to the mononuclear cells. Similar to our study, Hammas, Dickson and Linden also reported overexpression of p63 in all GCTB ( , , ). De la Roza, Paula and Lee reported p63 overexpression in 86.9%, 82% and 81% of cases respectively ( ). Yanagisawa reported higher mean p63 positivity for recurrent GCTB (73.6%) compared to non-recurrent cases (29.1%) ( ). However, its usefulness as a prognostic marker in recurrence has not been evaluated in our study. Studies have shown variable expression of p63 in CBL, ranging from 30% to 83.3% ( ). Dickson found expression of p63 in 30% of cases, with a mild to moderate staining intensity in 7-75% of the cells ( ). Although De la Roza observed p63 expression in 10 out of 12 cases, the intensity of staining was weak to moderate except in one case ( ). In contrast to the strong nuclear staining observed in GCTB, weak to moderate intensity staining involving less than 50% of cells was seen in 9 of the 11 cases of CBL that showed p63 positivity. The rate of p63 positivity in ABC was much higher compared to the findings of Hammas (40%), Lee (20%), Dickson (28.6%), Paula (51%) and De la Roza (62.5%) ( , ). Although GCTB affects a relatively older population, there is often considerable overlap between the clinical features of GCTB and CBL. GCTB has also been documented in children and adolescents with biological behavior similar to that seen in adults, except for a marked female predominance. The presence of an open physis does not prevent the tumor from involving the epiphyseal cartilage ( ).
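Returning to the statistics reported earlier in this section, the Mann-Whitney U comparison of p63 labeling and a ROC-derived cut-off near 50% can be reproduced with standard libraries. The study used SPSS, so the Python sketch below is only illustrative: the p63 percentages are hypothetical, and Youden's J is an assumed criterion for selecting the cut-off, as the paper does not state which criterion was applied.

```python
# Sketch of the Mann-Whitney comparison and ROC cut-off selection for
# p63 labeling (% positive stromal cells). Values below are hypothetical;
# Youden's J (sensitivity + specificity - 1) is an assumed cut-off criterion.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_curve, auc

gctb = np.array([50.5, 55.0, 62.0, 71.0, 58.5, 14.0])    # GCTB p63 %
other = np.array([28.3, 15.2, 24.5, 11.0, 6.8, 12.3])    # other GCLB p63 %

u, p = mannwhitneyu(gctb, other, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")

labels = np.r_[np.ones(len(gctb)), np.zeros(len(other))]  # 1 = GCTB
scores = np.r_[gctb, other]
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                               # Youden's J
print(f"AUC = {auc(fpr, tpr):.3f}, cut-off = {thresholds[best]:.2f}%")
```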
On the other hand, CBL in adults more frequently involves the flat bones and the short bones of the hands and feet, with more aggressive behavior compared to children ( ). As both tumors are located in the epiphyseal region, the absence of a chondroid matrix often causes confusion. To differentiate the above entities, Lee recommends the use of S100 along with p63: strong nuclear p63 staining with weak S100 in the mononuclear cells favors GCTB over CBL ( ). Akpalo reported DOG1 as a highly sensitive and specific marker for CBL ( ). The other giant cell-containing lesions, like ABC, NOF, GCLSB, and BTH, showed positivity for p63 in all cases, but the percentage of positivity and the intensity of staining were significantly lower than those of GCTB, involving less than 50% of the cells. Expression of p63 in most of the GCLBs may lower its specificity as a diagnostic marker. Hence, a 50% cut-off value can be used to improve the specificity and reliably distinguish GCTB from other GCLBs, after taking into consideration the age and location of the tumors. A similar suggestion was also made in the Paula study ( ).
The morphology of GCRG closely resembles BTH. A careful clinical history of hyperparathyroidism helps in differentiating these two entities. All the other studies except De la Roza have shown negative immunostaining for p63 in all cases of central giant cell granuloma (CGCG), reflecting a pathogenesis different from GCTB ( , , ). The latter showed p63 positivity in all four cases of CGCG ( ). The single case of CGCG in our study was also positive for p63, but the proportion of cells stained was less than 50%. The number of cases of GCRG, osteoblastoma and telangiectatic osteosarcoma included was very low, and this is a limitation of this study.
There is disagreement amongst the various authors regarding the utility of p63 as a diagnostic marker in GCTB. De la Roza showed no difference in p63 positivity by immunostaining among giant-cell-rich lesions such as GCTB and CBL ( ). Our results were consistent with the reports of Hammas, Lee, Paulo et al and Dickson, and we suggest its use as a diagnostic marker provided a cut-off value of 50% is applied ( , , , ). However, Dickson and Lee considered 5% and 10% of cells, respectively, as the cut-off ( , ). On the other hand, De la Roza considered any nuclear staining of p63 as positive ( ). The discrepancies in staining may be attributed to the antibody clones and antigen retrieval methods. Gene expression profiling has also substantiated the above findings, with overexpression of p63 in the majority of GCTBs and only a minor fraction of other GCLBs ( , ). Recent studies have identified H3 histone family member 3A (H3F3A) (G34W/V/R/L) mutations in the majority of GCTBs and H3 histone family member 3B (H3F3B) (K36M) mutations in nearly all CBLs, but these are absent in other GCLBs. IHC using mutation-specific H3G34W and H3K36M antibodies is highly specific for GCTB and CBL respectively and can be used as a diagnostic tool in limited biopsies ( ). The presence of alternate H3F3A mutations on Sanger sequencing further enhances the diagnostic yield in the subset of GCTB negative for H3G34W on IHC ( ). The majority of primary ABCs harbor clonal rearrangements of the USP6 gene locus. Cases without the USP6 gene rearrangement hint at the presence of morphologically undetected components of GCTB or CBL in small biopsies ( ).
However, these novel diagnostic techniques require expertise, standardization and validation, which are not feasible in settings with limited resources, and they are presently not widely available. It is also important to differentiate GCTB from other GCLBs, as denosumab has specific therapeutic implications for GCTB and radiofrequency ablation for CBL; these can be used as treatment options alternative to surgical resection ( , ). Though p63 expression can be seen to a variable extent in all GCLBs of the bone, the percentage of positivity in GCTB is significantly higher than in other GCLBs. Hence, p63 staining by IHC with a cut-off of 50% can be used as an additional marker to differentiate GCTB from other GCLBs of bone. The authors have no conflict of interest.
Detection of ALK Gene Rearrangements in Non-Small Cell Lung Cancer by Immunocytochemistry and Fluorescence in Situ Hybridization on Cytologic Samples
4298edec-74d4-45a0-be4a-508e7cc67785
9999692
Anatomy[mh]
The treatment for non-small cell lung carcinoma (NSCLC) has become personalized with the advancements in molecular pathology and the identification of specific therapeutic target molecules ( ). A variety of molecular abnormalities have been recognized in lung cancer, including mutations in Kirsten rat sarcoma viral oncogene homolog (KRAS), epidermal growth factor receptor (EGFR), BRAF, MEK, HER2 and phosphatidylinositol 3-kinase (PI3K) pathway oncogenes. ALK, ROS1 and RET show structural rearrangements that provide novel therapeutic targets. MET and fibroblast growth factor receptor 1 (FGFR1) amplification is noted in adenocarcinoma and SCC, respectively ( ). EGFR mutations are seen in around 32.3% of lung adenocarcinoma cases ( ). EGFR mutations, such as point mutations in exon 21 and exon 19 deletions, are associated with a dramatic therapeutic response to EGFR tyrosine kinase inhibitors (TKI) ( ). The molecular methods used for detection of EGFR mutations include Sanger sequencing (SS), next generation sequencing (NGS) and polymerase chain reaction-based methods ( ). Anaplastic lymphoma kinase (ALK) is a tyrosine kinase receptor encoded by the ALK gene. ALK gene rearrangements are seen in 1.9-6.8% of NSCLC cases ( ). The most common genetic rearrangement involves echinoderm microtubule associated protein-like 4 ( EML4 ) and ALK, leading to formation of the EML4-ALK fusion gene, which encodes a chimeric protein with intrinsic tyrosine kinase activity. However, fusion genes involving other partners have also been detected. Identification of the ALK gene rearrangement is a mandatory diagnostic test for NSCLC patients ( , ). This is mainly owing to the availability of effective ALK inhibitors such as crizotinib, alectinib, and ceritinib, which lead to a good therapeutic response and better five-year survival rates as compared to standard chemotherapy regimens ( ). Currently, three methods are available for detecting ALK gene rearrangements: fluorescence in situ hybridization (FISH), immunohistochemistry (IHC) and real-time PCR (RT-PCR). FISH has been considered the gold standard method for detecting ALK gene-rearranged NSCLC. However, recent guidelines recommend that IHC, using FDA-approved antibodies, is an equivalent alternative for ALK testing ( ). Although ALK testing is frequently performed on histopathological tissues, testing using cytologic samples is sparsely documented. The present study was undertaken to detect ALK gene rearrangements by using immunocytochemistry (ICC) and the FISH technique on cell-blocks, in cases diagnosed as lung adenocarcinoma on cytology samples. This was a one-year prospective study performed on a total of 50 lung adenocarcinoma (ADC) cases diagnosed on fine needle aspiration cytology (FNAC) or pleural fluid cytology. The study was approved by the Institute Ethics Committee (NK/4423/MD). The objectives were to detect ALK gene rearrangements by immunocytochemical (ICC) staining using the D5F3 clone and the FISH technique on cell-blocks, and to compare the clinicopathologic characteristics of the ALK-positive and ALK-negative cases. Direct and/or sediment smears were prepared from cytologic samples (both air-dried and 95% ethanol-fixed) and the rest of the cytologic material was rinsed into a glass tube containing 1 ml of 1% ammonium oxalate for cell-block preparation. The air-dried smears were stained with May-Grünwald Giemsa (MGG) and the wet-fixed smears with haematoxylin and eosin (H&E) and/or Papanicolaou stain.
The cell-blocks were prepared using an already standardized plasma clot method. ICC was performed on the cell-blocks, wherever needed, to subtype the tumors using TTF1, p40, CK7 and Napsin A. ALK IHC Using D5F3 Clone The Ventana ALK (D5F3) CDx assay was used for the detection of ALK protein expression as a surrogate marker for ALK gene rearrangements. The strength of cytoplasmic granular positivity was graded as 3+ (strong positivity in >90% tumor cells); 2+ (moderate cytoplasmic granular positivity in 20-90% tumor cells); 1+ (faint cytoplasmic positivity in <20% cells); and 0 (negative for cytoplasmic positivity). ALK Gene Rearrangement by FISH FISH was performed for detection of ALK gene rearrangements in the ICC-positive cases and an equal number of randomly selected ICC-negative cases, using the Vysis ALK break apart FISH probe kit (Abbott Molecular). Fluorescence signals (ALK 5' probe (Spectrum Green) and ALK 3' probe (Spectrum Orange)) were recorded after viewing under a fluorescence microscope (Olympus WX63 epi-illumination fluorescence microscope). At least 50 tumor cell nuclei were evaluated for each case, and positivity was taken as nuclei showing split signals or deleted signals (presence of a single orange signal) ( ). A tumor was interpreted as negative if fewer than 5 of 50 tumor cells (<5/50 or <10%) were positive, positive if more than 25 of 50 tumor cells (>25/50 or >50%) were positive, and equivocal if 5-25 cells (10-50%) were positive. For equivocal cases, a second unbiased evaluation of the slide by another cytopathologist was performed, following which the first and second cell counts were added together and a final percentage was calculated over 100 cells. If the average percentage of positive cells was <15% (<15 positive nuclei/100 evaluated tumor nuclei), the sample was interpreted as negative; if it was >15% (>15 positive nuclei/100 tumor cell nuclei evaluated), the sample was interpreted as positive. A code sketch of this two-step scoring logic is given below. Statistical Analysis For data analysis, SPSS (version 22.0) software was used. The Shapiro-Wilk test was applied to check the normality of continuous data such as age. For normally distributed data, mean and SD were reported. Categorical variables such as gender, smoking status, pathologic diagnoses, and stage were reported as frequency and percentage. The independent t-test was used to compare the means of normally distributed quantitative variables between the two groups (ALK IHC positive and negative). The chi-square/Fisher's exact test was applied to find any association between categorical variables and the study groups. A p value of <0.05 was taken as significant.
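As a minimal illustration of the two-step FISH interpretation algorithm just described, the sketch below encodes the single-reader thresholds and the second-reader resolution of equivocal cases; the function names and example counts are ours, not part of the study:

from typing import Optional

def score_single_reader(positive_nuclei: int) -> str:
    # One reader evaluates 50 tumor nuclei: <5 negative, >25 positive,
    # 5-25 equivocal, per the criteria described in the methods above
    if positive_nuclei < 5:
        return "negative"
    if positive_nuclei > 25:
        return "positive"
    return "equivocal"

def score_case(reader1: int, reader2: Optional[int] = None) -> str:
    # Equivocal cases get a second blinded count; the two counts are pooled
    # over 100 nuclei and compared against the 15% cut-off
    first = score_single_reader(reader1)
    if first != "equivocal":
        return first
    if reader2 is None:
        raise ValueError("an equivocal case requires a second reader's count")
    total_positive = reader1 + reader2  # out of 100 evaluated nuclei
    return "positive" if total_positive > 15 else "negative"

# Example: 10/50 is equivocal; a second count of 8/50 pools to 18/100 (18%),
# which exceeds 15%, so the case is reported as positive
print(score_case(10, 8))

Note that the text leaves the handling of a pooled count of exactly 15% unspecified ("<15%" negative, ">15%" positive); the sketch treats it as negative.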
A total of 50 primary lung adenocarcinomas, diagnosed on the basis of cytomorphology and an appropriate panel of immunocytochemical markers (TTF1, p40, Napsin A and CK7), were included in the study. Of these, 17 cases were reported as adenocarcinoma and 10 as NSCLC, favouring adenocarcinoma, on FNAC. Another 23 cases were reported as metastatic lung adenocarcinoma in pleural effusion samples ( A-D). The age of the patients ranged from 28-82 years, with the mean age being 57.5 years (standard deviation=11.1). The male:female ratio was 1.6:1, with 31 males and 19 females. The lesions were more common in the right lung (n=36) than the left lung (n=14), and the upper lobe was more commonly involved as compared to the lower lobe of the lung. The majority of the cases (n=23; 48%) in the present study were in TNM stage IV, with 14 cases having evidence of extra-thoracic metastatic disease, mostly to the central nervous system and bone. Detection of ALK Gene Rearrangements by ICC Seven (14%) cases showed cytoplasmic granular positivity for the ALK antibody (D5F3 clone). Based on the staining intensity, 5 (71.4%) cases were categorized as 3+ ALK positive and two (28.6%) cases were categorized as 2+ ( A-D). Clinicopathologic parameters were compared between the ALK-positive and ALK-negative groups on ICC. ALK gene rearrangements were more frequently seen in females (4/7 (57.1%) cases being ALK positive) as compared to males (3/7 (42.9%) cases being ALK positive); however, this difference was not found to be statistically significant (p=0.08).
The difference in mean age between the ALK-positive group (56 years) and the ALK-negative group (59 years) was also not statistically significant (p=0.6). Out of 48 cases with known smoking status, 32 (66.6%) were smokers, and 2/32 (6.25%) of these were ALK positive. Among smokers, 26 (81.2%) were males and 6 (18.7%) were females. Among the 16 (33.3%) non-smokers, 5/16 (31.25%) were ALK positive. Non-smoking status was significantly associated with ALK gene rearrangements (p=0.03). Furthermore, ALK positivity was more commonly detected in female non-smokers; however, this was not statistically significant (p=0.08). Pleural effusion was noted in 20 (41.7%) cases; however, the presence of pleural effusion was not found to be significantly associated with ALK gene rearrangements (p=0.10). ALK positivity was seen more commonly in cases reported as adenocarcinoma than in cases reported as NSCLC, favouring adenocarcinoma; however, this was not statistically significant (p=0.36) ( ). Furthermore, ALK gene rearrangements were seen in cases having focal solid and acinar (n=5; 71.4%) and papillary (n=2; 28.6%) architecture ( E,F). In addition, EGFR gene mutational analysis was performed by real-time polymerase chain reaction in 46 (92%) cases, and no case in the ALK-positive group showed known mutations in exons 18, 19, 20 and 21 of the EGFR gene, reiterating the mutually exclusive existence of these genetic alterations. Two of the seven ALK-positive cases received crizotinib or ceritinib; one patient had progression of the disease and the other showed a partial response. Detection of ALK Gene Rearrangements by FISH FISH testing for ALK gene rearrangements using the Vysis ALK break-apart FISH probe kit (Abbott Molecular) was performed in a total of 14 randomly selected cases (7 ALK ICC-positive and 7 ALK ICC-negative cases). ALK rearrangements could be detected by the FISH technique in 5/7 (71.4%) of the cases that were positive on ALK ICC. Among the ALK-FISH-positive cases, the mean percentage of ALK-FISH-positive rearranged nuclei was 79.25% (range 67-91%). FISH-positive cases showed a split signal pattern (n=3) or a combined 3' deletion and split signal pattern (n=2) ( A-D). In addition, all 7 ALK ICC-negative cases were negative on ALK testing by FISH, indicating a good concordance between ICC and FISH ( A-C).
ALK gene rearrangements are seen in 1.9-6.8% of NSCLC cases ( ). The most common ALK gene rearrangement in NSCLC is a paracentric inversion on the short arm of chromosome 2, juxtaposing the 5' end of the EML4 (echinoderm microtubule associated protein-like 4) gene to the 3' end of the ALK gene ( ). This leads to the formation of the EML4-ALK fusion gene, encoding a chimeric protein with intrinsic tyrosine kinase activity. In addition, other break-apart and fusion partners may also be involved in ALK rearrangements. There are four methods for detecting these genetic rearrangements: immunohistochemical staining (IHC), fluorescence in situ hybridization (FISH), reverse transcriptase-PCR (RT-PCR) and next generation sequencing (NGS). All of these methods have their own advantages and disadvantages. FISH is considered the gold standard for detecting ALK rearrangements; however, well-validated IHC has been accepted as an equivalent alternative ( , ). NGS can detect all kinds of fusions, whereas FISH and IHC provide no fusion specification and RT-PCR provides information only regarding the EML4-ALK fusion ( ). In the present study, detection of ALK gene rearrangements in lung adenocarcinoma cases was carried out using ICC on cell-blocks. The mean age of the patients in our study is similar to that observed in previous studies ( ). Lung adenocarcinoma is more common in smokers than in non-smokers. Similarly, in our study 66.6% of patients were smokers and 33.3% were non-smokers. However, ALK gene rearrangements were seen more frequently in non-smokers (31.25%), which correlates well with some previous studies ( ).
The tumors were more common in the right lung (n=36) in the present study, which is similar to a previous study ( , ). The presence of pleural effusion was higher in patients with ALK gene rearrangement, as seen in previous studies; however, this was not statistically significant ( , ). There was no statistically significant association between lung cancer stage and ALK gene rearrangements, which is in agreement with previous studies ( ). A thorough comparison of the present study with previously published studies for detection of ALK rearrangements is shown in ( ). The prevalence of ALK rearrangements using FISH and IHC observed in the present study is in concordance with other studies, wherein the prevalence ranged from 3 to 14.9% and 4 to 15.4% for FISH and IHC, respectively. The concordance rates of ALK IHC and ALK-FISH in the published literature are variable and range from 75-100% ( ). The concordance rate between ICC and FISH in the present study was 66.7%. Higher ALK positivity rates with immunocytochemistry can be explained by the fact that ALK IHC detects ALK protein expression but not the genetic changes. Similarly, lower positivity rates of ALK-FISH on cell-blocks can be due to the presence of a yet unknown type of ALK rearrangement, or genetic abnormalities other than ALK rearrangements, which may be missed on FISH. As FISH is considered the gold standard test, the results may indicate that ALK-IHC had false-positive results. Similarly, a few authors have found that FISH can miss a good number of patients with ALK-EML4 rearrangements who might benefit from targeted ALK therapy, so they strongly recommended ALK-IHC ( , ). When analysed with FISH alone, their cohort had 4 (7.8%) positive cases, whereas the true incidence was 7 (13.7%) cases ( ). This can be because of extremely minimal splitting of red and green signals giving false-negative results. However, rare ALK translocations that do not cause overexpression of ALK protein may lead to negative IHC and positive FISH results. ALK gene rearrangements in lung adenocarcinoma are more commonly seen in females, non-smokers and patients having pleural effusions. Among the architectural patterns, ALK gene rearrangements were common in cases having focal solid, acinar and papillary architecture. Immunocytochemistry on cell-blocks using the D5F3 clone is a highly sensitive and specific method for detection of ALK gene rearrangements in lung adenocarcinoma, with a greater number of ALK-positive cases being detected on ICC as compared to ALK-FISH. The authors declare no conflict of interest. None
An Unusual Nodular Tumour of the Penile Shaft with Clinicopathologic and Immunohistochemical Correlation
2b36ca1a-2055-4f61-8f7c-fd5b3163f90e
9999694
Anatomy[mh]
Granular cell tumour is an uncommon, benign tumour of nerve sheath origin. Most are acquired and present as a solitary skin-coloured nodule, less than 2 cm in size ( ). Usually, these tumours are seen in middle age. The most common locations are the upper aerodigestive tract, and the skin and subcutaneous tissue. Granular cell tumour of the penis is very rare. In this article, we report the case of a granular cell tumour of the penis with a clinical suspicion of an indurated epidermal inclusion cyst. A 49-year-old immunocompetent male presented with the complaint of a single, non-itchy nodule on the shaft of the penis for one month. On clinical evaluation, the nodule was on the shaft of the penis, 1.5 cm in diameter and firm, with no ulceration. No definite punctum was visible. There was no discharge from the lesion or penile urethra. No inguinal lymph nodes were palpable. The patient did not have any history of sexual exposure. No comorbidities were present. Excision of the nodule was done with the clinical diagnosis of epidermal inclusion cyst, and the specimen was sent for histopathological examination. A clinical photograph of the lesion was not taken at that time, as the clinical suspicion was of an indurated epidermal inclusion cyst and a diagnosis of granular cell tumour was not considered prior to the excision of the nodule. Microscopic examination of 5 µm haematoxylin and eosin (H&E)-stained sections showed unremarkable mucosal epithelium. The subepithelial connective tissue showed an unencapsulated tumour comprised of sheets of large, polygonal cells with a central vesicular nucleus, variably conspicuous nucleoli, and abundant coarsely granular eosinophilic cytoplasm ( ). Large cytoplasmic granules surrounded by a clear halo (pustulo-ovoid bodies of Milian) were frequently seen ( ). The tumour cells surrounded an occasional peripheral small nerve. No high nuclear-cytoplasmic ratio, significant pleomorphism, spindling, necrosis, apoptosis, or mitoses were evident. Immunohistochemical stains for S100P, Inhibin, CD68, SMA, Myogenin, HMB45, GFAP, and Bcl2 were carried out. On immunohistochemistry, the tumour cells showed diffuse positivity for S100 protein ( A) and CD68 ( B), focal positivity for Inhibin ( C) and Bcl2 ( D), and were negative for SMA, Myogenin, HMB45, and GFAP. A diagnosis of benign granular cell tumour of the penile shaft was made. The patient was completely asymptomatic after surgery and had an uneventful recovery. He has been on follow-up for three years with no evidence of recurrence. Granular cell tumour or Abrikossoff tumour is an uncommon, benign tumour of nerve sheath origin. Most tumours are acquired and present as solitary skin-coloured nodules, less than 2 cm in size ( ). About 10% are seen as multiple lesions ( ). Usually, these tumours are seen in middle age. The most common locations are the upper aerodigestive tract, and the skin and subcutaneous tissue ( ). Granular cell tumour of the penis is very rare ( , ). Similar tumours, called congenital epulis, are seen on the anterior alveolar ridge in neonates ( ). The overlying epithelium often shows pseudoepitheliomatous hyperplasia, which may be misdiagnosed as squamous cell carcinoma. Often, small nerves are seen in and around the tumour. Abundant granular cytoplasm is present.
The cytoplasmic lysosomal macro-inclusions, or pustulo-ovoid bodies of Milian (POB), are an easily recognizable component of granular cell tumour and appear to represent the heterogeneity of the lysosomes, giving the appearance of large granules that have partially detached from the adjacent cytoplasm ( ). No well-established criteria for malignancy have been described for this tumour. However, tumour size greater than 5 cm, vascular invasion, necrosis, increased mitoses, apoptotic cells, cell spindling and rapid growth have been reported in malignant lesions ( ). Histologically, these tumours need to be differentiated from melanocytic neoplasms, leiomyosarcoma, atypical fibroxanthoma, dermatofibroma with granular cell change, and adult-type rhabdomyoma. The absence of melanin pigment or any epithelial component, together with negative HMB45, helped to rule out melanocytic neoplasms. Leiomyosarcoma with an epithelioid morphology shows necrosis, atypical mitoses, and epithelioid cell morphology, and stains on immunohistochemistry for SMA and Myogenin, which are negative in granular cell tumours. Dermatofibromas usually have an admixture of fibroblastic, myofibroblastic, and histiocytic cells in a storiform pattern, with inflammatory cells, foam cells, and giant cells. Some of them may demonstrate granular cell change. On immunohistochemistry, dermatofibromas are negative for S100P. Adult-type rhabdomyoma has the typical histological finding of large polyhedral cells with abundant eosinophilic and granular cytoplasm. Cross striations may be appreciated, and these tumours are positive for SMA, Desmin, and Myogenin, and negative for S100P on immunohistochemical stains. Immunohistochemical studies favour a Schwann cell origin. On immunohistochemical examination, granular cell tumours usually express S100 protein, CD68, microphthalmia transcription factor (MITF), inhibin-α, and NSE ( ). This case highlights an uncommon soft tissue tumour of the penis, of uncertain histogenesis but proposed to have a neural origin, that can clinically mimic an epidermal inclusion cyst and other entities. Most granular cell tumours are benign and solitary, and occur in the head and neck region in middle age. They may mimic malignancy clinically. The characteristic abundant cytoplasm, low-grade nuclear features, and cytoplasmic lysosomal macro-inclusions (POB of Milian), together with relevant immunohistochemistry, help to diagnose these unusual neoplasms. These tumours are treated by surgical excision and rarely recur. The possible occurrence of this rare tumour at an unusual site should be borne in mind and confirmed by relevant immunohistochemical stains, which help to establish the diagnosis and rule out other mimics. The authors declare no conflict of interest.
Calreticulin Immunohistochemistry in Myeloproliferative Neoplasms - Evolution of a New Cost-Effective Diagnostic Tool: A Retrospective Study with Histological and Molecular Correlation
36db9129-fe61-47e8-a566-3b994e11621e
9999704
Anatomy[mh]
Myeloproliferative neoplasms (MPNs) are a heterogeneous group of diseases with a diverse clinical presentation and a myriad of morphologies. These intriguing disorders have always proven to be a challenge for haematologists and hematopathologists. Recent studies have aimed at classifying MPNs based on molecular alterations, as there is increasing evidence that molecular or chromosomal alterations correlate better with clinical presentation, response to therapies, and prognosis than conventional morphological classification ( , ). A significant number of gene mutations have been identified in MPNs, with JAK2 and MPL recognized as the major ones by the World Health Organization (WHO) in 2007 ( ). The JAK2V617F mutation has been found to occur in almost 95% of Polycythemia Vera (PV) ( ). However, JAK2V617F has been found in only 50-60% of Primary Myelofibrosis (PMF) and Essential Thrombocythaemia (ET) ( ). A significant diagnostic gap, comprising the many cases of MPN that do not harbor any of these mutations, was recently filled by the discovery of Calreticulin ( CALR ) mutations in MPNs ( , ). CALR gene mutations are predominantly found in patients with essential thrombocythemia or primary myelofibrosis and are considered to be mutually exclusive with JAK2 and MPL . In spite of the mutational diversity, all the respective mutations have been shown to function via activation of the JAK-STAT pathway ( , ). With regard to diagnostics, the identification of CALR mutations is confirmatory for a diagnosis of MPN in JAK2 and MPL wild-type patients presenting with thrombocytosis. Furthermore, its presence has also been shown to carry significant prognostic implications in patients with confirmed MPN. CALR -mutated PMF patients were younger than their JAK2 -mutated counterparts and displayed a higher platelet count, lower leukocyte count and longer survival ( ). Patients with CALR -mutated PMF were also less likely to display anaemia or require transfusion ( ). Many studies demonstrated that ET with CALR mutations had a higher platelet count than ET with the JAK2 mutation, and a lower incidence of thrombosis ( ). Also, JAK2 -mutated ET has a 29% cumulative risk of progressing to PV, whereas polycythemic transformation was not observed in CALR -mutated ET. The CALR gene is located on the short arm of chromosome 19 ( ). The most commonly reported pathogenic mutations in CALR occur in exon 9 and include 52 base pair (bp), 34 bp, and 19 bp deletions and a 5 bp insertion ( , ). All these insertions or deletions ultimately result in a frameshift mutation. As a result, the new reading frame codes for a characteristic protein C-terminus that is the same across the last 36 amino acids irrespective of the underlying mutation ( ). It is important to understand that, irrespective of the particular pathogenic mutation, the final result is a common protein epitope consisting of a similar sequence of amino acids. Mutation-specific immunohistochemistry directed against the characteristic C-terminus of mutated Calreticulin could therefore serve as a cost-effective diagnostic screening tool providing faster results. Vannucchi et al. ( ), using a rabbit polyclonal antibody, and Stein et al. ( ), using a mouse monoclonal antibody, have previously demonstrated excellent sensitivity and specificity for the diagnosis of all the different types of Calreticulin mutations.
The mouse monoclonal anti-mutant Calreticulin antibody (clone CAL2), the same as that used by Stein et al., has recently become commercially available. To date, there are limited studies ( ) on the use of Calreticulin immunohistochemistry as a diagnostic tool for myeloproliferative neoplasms. According to the current WHO update ( ), the gold standard for mutational analysis in MPN is prioritised as initial analysis for the JAK2V617F mutation, followed, if negative, by CALR mutation analysis. A bone marrow (BM) biopsy is mandatory for the diagnosis of MPN. It has been proposed that, if immunostaining for CAL2IHC in BM biopsies is validated, it can be conveniently used for identifying patients harbouring CALR mutations. Furthermore, considering its feasibility in any routine histopathology laboratory and its lower cost compared with molecular tests, initial testing with CAL2IHC may obviate an unnecessary molecular JAK2V617F analysis, thereby reducing healthcare charges for the patient. Therefore, we aimed to test the sensitivity and specificity of CAL2IHC in a diagnostic surgical pathology laboratory, to aid the identification of pathogenic Calreticulin mutations in the routine clinical setting. Subjects with myeloproliferative neoplasms fulfilling the inclusion criteria from January 2014 to November 2016 were retrieved by keyword search of the electronic databases of the Pathology and Haematology departments of our institution. The inclusion criteria for the study were: a) subjects with a biopsy-proven diagnosis of myeloproliferative neoplasm, b) the availability of adequate bone marrow trephine biopsies for immunohistochemistry, and c) blood/tissue available and subjected to CALR / JAK2V617F / JAK2 exon 12 mutational analysis. Following selection, all formalin-fixed and paraffin-embedded sections were stained in the General Pathology department with the CAL2 antibody (clone CAL2, catalogue DIA-CAL100; Dianova, Germany) at a dilution of 1:20, according to protocol T40 on the Ventana BenchMark automated immunostainer, following the steps specified in the protocol. The investigators were blinded to the mutation status when examining the slides. To confirm the pathological evaluation, all biopsies were reviewed by 2 pathologists (SR and MTM). Positive CAL2IHC staining was defined as cytoplasmic staining of megakaryocytes of any intensity (grade 1-3+). If a tissue section contained more than 50 megakaryocytes, a total of 50 megakaryocytes was counted. From this count, the number of megakaryocytes staining positively for CAL2IHC (1+ to 3+ staining intensity) was counted separately, and the total percentage of CAL2-positive megakaryocytes was calculated as (CAL2-positive megakaryocytes / 50) × 100. If a section contained fewer than 50 megakaryocytes, the total number of megakaryocytes present in the section was counted, and the percentage of CAL2-positive megakaryocytes was calculated accordingly as (CAL2-positive megakaryocytes / total megakaryocytes counted) × 100. The intensity of cytoplasmic staining for CAL2IHC in megakaryocytes was graded from 1+ (weak positivity) to 3+ (strong positivity). CALR gene deletions and insertions were tested by capillary electrophoresis (GeneScan analysis), and the positive cases were confirmed by bidirectional Sanger sequencing to identify the type of mutation, using published protocols ( , ).
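The counting rule just described, and the validation against molecular testing reported in the next section, reduce to a simple computation. The following sketch illustrates both; the function names are ours, and the cohort breakdown is reconstructed from the results reported below (20 concordant IHC/mutation positives, one IHC-negative/mutation-positive PMF case, and two concordant-negative PV cases):

def percent_cal2_positive(positive: int, total_megakaryocytes: int) -> float:
    # If a section contains more than 50 megakaryocytes, only 50 are counted,
    # so the denominator is capped at 50, as described in the methods above
    denominator = min(total_megakaryocytes, 50)
    return positive / denominator * 100

def sensitivity_specificity(cases):
    # cases: list of (ihc_positive, mutation_positive) pairs;
    # the molecular result is treated as the gold standard
    tp = sum(1 for ihc, mut in cases if ihc and mut)
    fn = sum(1 for ihc, mut in cases if not ihc and mut)
    tn = sum(1 for ihc, mut in cases if not ihc and not mut)
    fp = sum(1 for ihc, mut in cases if ihc and not mut)
    return tp / (tp + fn) * 100, tn / (tn + fp) * 100

# 20 concordant positives, 1 false negative, 2 concordant negatives
cohort = [(True, True)] * 20 + [(False, True)] + [(False, False)] * 2
sens, spec = sensitivity_specificity(cohort)
print(percent_cal2_positive(30, 80))  # 30 positive of 50 counted -> 60.0
print(f"sensitivity {sens:.1f}%, specificity {spec:.1f}%")  # 95.2%, 100.0%

Under these assumed counts the output matches the figures reported in the results; the helper functions themselves are an illustrative reading of the methods, not code from the study.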
The sensitivity and specificity of CAL2IHC positivity in patients with MPN were calculated and the results compared with the gold-standard molecular analysis for validation as a rapid diagnostic tool. Clinical information on the patients was collected, and histological evaluation and review of the CAL2-positive bone marrow trephine biopsies were also done for confirmation of the diagnosis. The parameters evaluated were as follows: a) cellularity, b) erythroid hypoplasia, c) megakaryocyte hyperplasia, d) presence of giant hyperlobulated cells, e) presence of small to intermediate size megakaryocytes, f) nuclear abnormality of megakaryocytes, g) clustering and paratrabecular location of megakaryocytes, h) reticulin fibrosis (WHO 2008 grading), i) vascular proliferation, and j) osteosclerosis. All procedures performed in the current study were approved by the Institutional Review Board (IRB Min no. 10587, date 29/3/17) in accordance with the 1964 Helsinki Declaration and its later amendments. Formal written informed consent was not required, with a waiver by the institutional review board committee. A total of 23 subjects with adequate bone marrow trephine biopsies and peripheral blood samples available for mutational analysis were included in the study. There were 19 male and 4 female patients, aged 30-65 years. The cohort included 15 patients with a diagnosis of primary myelofibrosis, 6 with essential thrombocythaemia, and 2 with Polycythemia Vera. Detailed clinicopathological features are described in . Analysis of CAL2 Immunohistochemistry (IHC) The histopathological diagnosis, CAL2 IHC results, and correlation with the mutational analysis are described in . All the patients in our cohort had undergone mutational analysis. All 6 cases of ET ( A-D) in our study showed cytoplasmic staining for CAL2IHC, displaying a mean of 69% (20-100%) positive megakaryocyte staining and showing complete concordance with molecular analysis. Only 1 case showed 1+ positivity, whereas 3 cases showed 2+ positivity and 2 cases showed 3+ positivity. 14/15 cases of PMF ( A-D) showed (1-3+) staining for CAL2 IHC ( B), with a mean of 62% (25-90%) of megakaryocytes showing positive staining. 5 cases showed 1+ positivity, 6 cases showed 2+ positivity, and 3 cases showed 3+ positivity. One case was negative for CAL2 IHC but positive for the Calreticulin mutation, and remained negative even on repeated immunohistochemistry preparations. Two cases of Polycythemia Vera ( A,B) were negative for CAL2IHC, concordant with negative Calreticulin mutation analysis. One of these two patients was positive for the JAK2 exon 12 mutation, and the other was positive for the JAK2V617F mutation. CAL2IHC had a sensitivity of 95.2% and a specificity of 100% for the diagnosis of Calreticulin-positive MPN. Histopathological Analysis Histological findings in the bone marrow trephine biopsies of the 21 CALR-mutation-positive cases (15 PMF/6 ET) were reviewed. Upon evaluation of the 15 cases with PMF, 9/15 (60%) showed increased cellularity, 10/15 (66.7%) had granulocytic hyperplasia, and 11/15 (73%) had megakaryocyte hyperplasia, with predominantly small megakaryocytes in 12/15 (80%) patients. All cases showed clustering and a paratrabecular location of megakaryocytes with grade 3 reticulin fibrosis. 13/15 (86%) showed vascular proliferation and 14/15 (93.3%) showed osteosclerosis. All 6 cases with ET showed increased bone marrow cellularity and megakaryocyte hyperplasia with giant hyperlobulated megakaryocytes.
4/6 (66.7%) cases had 1+ reticulin fibrosis and the remaining 2 cases showed grade 1 to focal grade 2+ reticulin fibrosis. None of the cases demonstrated any evidence of vascular proliferation or osteosclerosis. The detection of the Calreticulin mutation has been proven over time to carry prognostic value. The importance of its detection also lies in the confirmation of a diagnosis of MPN. On many occasions, both clinically and histologically, it becomes extremely difficult even to separate a benign reactive phenomenon from MPN. Recently, a number of reports have highlighted the concurrent presence of multiple MPN-related mutations ( JAK2V617F , MPL exon 10 or CALR ) with concurrent BCR-ABL -positive Chronic Myeloid Leukemia ( , ). These reports show that a complex admixture of different clonal populations of cells with varying mutations can exist together in the same patient. The postulations for such a phenomenon are as follows: firstly, a particular clone may, by gradual evolution, progress to show distinct mutations ( ); secondly, two independent clones may be present in different proportions right from the beginning of disease manifestation. Targeted therapy driven towards a particular clonal population (mostly Ph+ve CML) may suppress the former population and facilitate the emergence of the other, relatively masked, clonal ( CALR -mutated) population ( , ).
There are a few reports showing that further research with detailed molecular studies is warranted to uncover such hidden anomalies in patients with atypical presentations of MPN. Considering such complex situations, mutational analysis often serves as an important confirmatory marker in the diagnosis of a neoplasm ( ). Currently, molecular testing is the gold standard for identification of the CALR mutation ( ). Recently, the use of CAL2IHC has been reported to serve as an effective marker for detection of Calreticulin mutations. Currently, there is only limited reported data on the sensitivity/specificity of this novel antibody and its effectiveness as a diagnostic tool ( ) ( ). Our study aimed to validate the utility of CAL2IHC in routine diagnostics, which in the long run could possibly serve as a surrogate diagnostic tool for molecular studies. Among the 6 reported studies, the first was done by Vannucchi et al. ( ), in which a novel polyclonal antibody was developed against all the different CALR mutations. The antibody was found to be extremely effective, with 100% sensitivity and specificity. There was predominant cytoplasmic staining of megakaryocytes and weaker, faint staining of erythroids. This was postulated to be due to overexpression of the CALR mutant protein in megakaryocytes. Our study did not show any positive staining of the erythroids or myeloids, and demonstrated crisp cytoplasmic positive staining of megakaryocytes, therefore concurring with the proposed postulation of overexpression of mutant Calreticulin in megakaryocytes. Subsequently, the largest study was performed by Stein et al. ( ), in which a monoclonal antibody was tried on 173 subjects. The subjects included 155 patients with MPN, and the results of immunohistochemistry were compared to the gold standard of Sanger sequencing for the CALR mutation. A high sensitivity and specificity of 100% was quoted in the study. A study on 38 subjects by Nomani et al. ( ) also showed a sensitivity and specificity of 100%. A recent study by Andrici et al. ( ) showed a slightly lower sensitivity of 91%. Our study similarly showed a sensitivity of 95.2% and a specificity of 100%. One patient in our study with a diagnosis of PMF was consistently negative for CAL2IHC even after repeated immunohistochemical staining. This observation was also noted by Andrici et al. ( ), where a case of PMF was persistently negative for CAL2IHC. Although the possibility of a true negative could not be predicted accurately, it was postulated that in end-stage cases of PMF, the extensive fibrosis could mask the neoplastic clonal population (with CAL2 IHC-positive staining) of megakaryocytes. In such a situation, only the non-neoplastic population of (CAL2 IHC-negative) megakaryocytes may remain relatively exposed and visible. Hence, based on this observation, the biopsy could be falsely interpreted as negative for the Calreticulin mutation. This observation could certainly apply to our case, as there was extensive fibrosis with a paucity of megakaryocytes in the trephine biopsy. The patient was eventually lost to follow-up and a repeat biopsy could not be performed. The other possibility was a false-positive result on the mutational analysis. This was difficult for us to evaluate further, as there was insufficient tissue for a repeat molecular analysis. If we consider this to be a true negative, it becomes imperative to realise that a negative CAL2IHC may not always predict negativity for the CALR mutation. This fact is justified well by Andrici et al. ( ) and also by our study.
The next part of our study focused on the morphometric assessment of CAL2IHC positivity in megakaryocytes. Our study showed 69% positive megakaryocyte staining in ET and 62% in PMF cases. Both cases of PV were negative for CAL2IHC. Mózes et al. ( ) also performed manual and automated morphometric analyses and correlated them with the CALR mutation load. 45.7% (±2.6) of the megakaryocytes demonstrated moderate to strong CALR expression by manual analysis, and 68.5% (±1.28) by automated analysis. It was also shown that the percentage of megakaryocytes with moderate to strong staining had a positive correlation with higher CALR mutation loads. Our study demonstrates a higher proportion of megakaryocytes with moderate to strong CAL2IHC staining (83% with 2-3+ intensity in cases of ET, and 60% with 2-3+ intensity in cases of PMF). We could not do a detailed mutational load analysis due to financial constraints. It remains to be discovered in a larger-scale study whether or not the proportion or staining intensity of megakaryocyte staining could indeed indicate a higher mutation load and therefore be prognostically significant. Molecular analysis from peripheral blood is non-invasive and indeed provides more accurate results than IHC on bone marrow trephine biopsies. However, the cost of molecular detection via bidirectional Sanger sequencing is higher than the cost of a single immunohistochemical marker and, most importantly, requires a high level of technical expertise. Therefore, the need of the hour is a cost-effective, sensitive and specific diagnostic test that may aid in substituting the need for molecular diagnostics. This becomes extremely important in centers where a setup for extensive molecular testing is not available for routine diagnostics. A novel approach to the stepwise diagnosis of MPN has recently been proposed by Vannucchi et al. ( ), where, instead of the stepwise mutational analysis starting with the JAK2 mutation, CAL2IHC can be done first. If the CAL2IHC is positive, it essentially excludes the positivity of JAK2 , MPL and other mutations ( ). It also becomes important to understand, from a different perspective, that the current WHO update ( ) mandates the histopathological analysis of a bone marrow trephine biopsy as a major criterion for the diagnosis of MPN. So, needless to say, it becomes feasible, time-saving and cost-effective for both patient and clinician to perform immunohistochemistry, with faster, accurate results. Hence, in small health care centers, molecular mutational analysis can be considered a secondary supporting diagnostic test for discrepant cases instead of a mandatory primary test. Whether it stands the test of time to completely substitute the present gold standard of molecular testing is yet to be seen. In summary, we conclude that CAL2IHC is rapid, cost-effective and highly specific for detecting the CALR mutation, and is an effective diagnostic tool for the diagnosis of MPN. Our study had limitations that could not be eliminated due to financial constraints. Firstly, our sample size was limited to 23 patients, with a selection bias (primarily based on cases which had a sample available for molecular analysis). A larger sample size with a varying population could have highlighted the specificity more accurately. Secondly, CAL2IHC was not performed on normal/non-MPN subjects. Due to limited resources and infrequent molecular testing of patients, our study was primarily focused on JAK2 -negative and CALR -positive MPN.
Thirdly, our cohort of PMF did not include cases of prefibrotic-stage PMF, which histologically can very often be a close mimicker of ET. Finally, a detailed gene sequence analysis could not be performed to locate the exact base pair deletion in the CALR mutation. This could have helped us to understand the specificity of CAL2IHC better, as it is reported to be positive in all the different types of CALR mutations. The authors declare no conflicts of interest.
Role of Immunohistochemistry in the Differential Diagnosis of Pediatric Renal Tumors: Expression of Cyclin D1, Beta-Catenin , PDGFR-Alpha, and PTEN
07ac7b1d-66f9-4bf6-a400-5797f8a39f02
9999706
Anatomy[mh]
Wilms tumor (WT) is the most common genitourinary tumor of children, typically seen between 2 and 4 years of age. Having triphasic blastemal, epithelial, and stromal components, it should be considered in the differential diagnosis with other renal tumors. Clear cell sarcoma (CCS) is one of the mesenchymal tumors of the kidney and is most frequently seen in the third year of life. Histologically, epithelioid cells with round to oval nuclei form nests and cords. Malignant rhabdoid tumor of the kidney is a highly aggressive tumor seen among children under 10 years. It is composed of rhabdoid cells with eosinophilic nucleoli and cytoplasm, large round nuclei, and paranuclear inclusions. These cells are epithelioid, round, and polygonal in appearance and show a solid and trabecular growth pattern. Mesoblastic nephroma, classified into two groups, classical and cellular, is one of the mesenchymal tumors with low malignant potential and is seen among children younger than 3 years. Histologically, the tumor cells form fascicles composed of spindle cells and may resemble infantile fibromatosis. Although clinical and radiological findings may be helpful in the differential diagnosis, all of the morphological findings of pediatric renal tumors may overlap with the various subtypes of Wilms tumor. Bi- or triphasic Wilms tumor may be mistaken for other pediatric renal tumors in tru-cut biopsy material, as may monophasic Wilms tumor in nephrectomy material. Wilms tumor with only a pure blastemal component may be confused with Ewing sarcoma and neuroblastoma; components showing rhabdoid differentiation may be confused with malignant rhabdoid tumor; and the stromal component may be confused with clear cell sarcoma and mesoblastic nephroma. The stroma of Wilms tumor may have an appearance similar to renal clear cell sarcoma (CCS), particularly after pre-operative chemotherapy. Cyclin D1, PTEN, Beta-catenin, and PDGF-alpha belong to pathways that play a role in the pathogenesis of the tumors mentioned above. Thus, an immunohistochemical work-up may help to differentiate tumors with a similar appearance. Recently, the YWHAE-FAM22 rearrangement, which is seen in high-grade endometrial stromal sarcoma, was reported in CCS cases, and this rearrangement resulted in upregulation of Cyclin D1. This immunohistochemical marker is recommended for the differential diagnosis of tumors resembling CCS. MicroRNAs are involved in various biological processes such as growth, development, and metabolism. It has been shown that microRNAs have a substantial role in the pathogenesis of renal diseases; above all, a solid role in the progression of Wilms tumor has been demonstrated. Some studies showed that dysregulation of microRNAs triggers activation of the phosphatase and tensin homologue (PTEN) / phosphoinositide 3-kinase (PI3K) / protein kinase B (Akt) signaling pathway. This pathway has been shown to play a role in the pathogenesis of Wilms tumor; PTEN positivity correlated positively with clinical stage and negatively with metastasis to lymph nodes. The WNT/beta-catenin signalling pathway has a role in processes such as embryonic growth, tumorigenesis, cell proliferation, differentiation, migration, and apoptosis. The Wilms tumor protein is a transcription factor that is negatively correlated with the WNT/beta-catenin pathway. Several studies have shown that the WNT/beta-catenin signaling pathway is activated in Wilms tumor.
PDGF is an angiogenic factor formed of PDGF-A and B chains, coded by two different genes. PDGF was produced by normal kidney and Wilms tumor cells in vitro. Studies on Wilms tumor have indicated that the PDGF-A and PDGF-alpha receptors are expressed in the epithelial component. In our study, we evaluated the staining features of immunohistochemical markers, namely Cyclin D1, PTEN, Beta-catenin, and PDGFR-alpha, in morphologically overlapping tumors and compared our results with recent articles. The surgical pathology database of the Department of Pathology, Istanbul University - Cerrahpasa Faculty of Medicine was searched for pediatric renal tumors between the years 2000 and 2018. A total of 36 cases, comprising 16 WT (all post-chemotherapy resections), 10 CCS, 3 cellular mesoblastic nephroma (CeMN), 2 classical mesoblastic nephroma (CMN), 2 malignant rhabdoid tumor, one Ewing sarcoma, one diffuse large B-cell lymphoma (DLBCL), and one malignant solitary fibrous tumor (MSFT), were included in the study. All cases were diagnosed by one pediatric and one renal pathologist. Verbal informed consent was obtained from the patients. All tissues were fixed in 10% formalin and embedded in paraffin. Four tumor tissue microarray (TMA) blocks were constructed, containing representative 4-micron thick sections, and processed as previously described. Each component of Wilms tumor was sampled on the TMA blocks. Sections were deparaffinized and rehydrated through a series of decreasing alcohol concentrations. Samples were kept in 10 mmol/L buffered citrate solution for 30 minutes at 36 °C. PTEN (Roche, SP218), Beta-catenin (Roche, 14), PDGFR-alpha (Thermo Scientific, Ab-1), and Cyclin D1 (Roche, SP4-R) immunohistochemical markers were employed with an automatic device (BenchMark XT IHC/ISH Staining Module, Ventana Medical Systems Inc., Tucson, AZ, USA), according to the manufacturer's instructions. Staining intensity was graded as weak, moderate, or strong, whereas the extent of staining was graded according to the percentage of stained cells. Non-staining and weak staining below 5% were considered negative. Data Analysis All data have been presented as mean or median or in numbers and percentages. Statistical comparisons and tests for survival analyses were not performed due to the low number of subjects. The mean age of the 10 CCS cases was 12.5 years (1-53 years) and the female/male ratio was 3/7. The mean age of the 16 WT cases was 5.18 years (1-15 years) and the female/male ratio was 9/7. The mean age of the cellular mesoblastic nephroma cases was 19 months (9 months - 3 years) and the female/male ratio was 2/1. The mean age of the classical/congenital mesoblastic nephroma cases was 2 years and both patients were male. The mean age of the 2 malignant rhabdoid tumor cases was 18 months (12 months - 2 years) and the female/male ratio was 1/1. The Ewing sarcoma patient was female and 36 years old. The diffuse large B-cell lymphoma patient was female and 8 years old. The malignant solitary fibrous tumor patient was male and 4 years old. The immunohistochemical staining features are summarized in . Cyclin D-1 All 10 CCS cases stained with Cyclin D-1. Staining extent varied between 10-90% ( A). The intensity of staining was weak to strong.
Seven out of 11 WT cases containing an epithelial component stained moderately with an extent of 10-20%. Fourteen cases that had a blastemal component ( B) and all 16 WT cases containing a stromal component ( C) showed immunonegativity. Two of the 3 CeMN cases were not stained, and the remaining one showed moderate staining with an extent of 40% ( D). This case was re-evaluated on H&E slides; the morphological features of the cells and the expansive growth pattern led us to consider it as clear cell sarcoma. Two CMN and two malignant rhabdoid tumor cases showed immunonegativity. One Ewing sarcoma, one MSFT, and the DLBCL cases were not stained. Beta-Catenin A cytoplasmic staining pattern was considered positive. Three of the 10 CCS cases were negative, and 7 cases showed weak to moderate staining with an extent of 30-80% ( A). Among the 16 WT cases, all 11 cases containing an epithelial component showed cytoplasmic, weak to moderate immunopositivity with an extent of 40-80%. One of the 14 cases with a blastemal component was negative and the remaining cases showed weak to moderate staining with 10-80% extent ( B). Of the 16 Wilms tumors containing a stromal component, one case showing rhabdoid features stained strongly with an extent of 80%. Four of the cases were negative. The remaining cases showed weak to moderate staining with 10-80% extent. One Ewing sarcoma and one DLBCL were negative. Two CMN cases were negative. Three CeMN cases showed moderate staining with 60-80% extent. Two malignant rhabdoid tumor cases showed weak to moderate staining with an extent of 10-60%. One MSFT case showed strong staining with 90% extent. PTEN All 10 CCS cases were negative with PTEN. Among the 16 Wilms tumors, 11 cases that had an epithelial component showed weak staining with 100% extent; one case containing a blastemal component was negative, and the remaining cases showed weak staining with an extent of 100%. One of the 16 cases that contained a stromal component was negative and 5 showed moderate staining with 30-100% extent; two of these 5 cases were composed of rhabdoid areas. The WT case showing negativity in the blastemal component was Stage 3, and the WT case showing negativity in the stromal component was Stage 2. The other cases stained weakly with an extent of 100%. One of the two cases showing anaplasia in stromal cells had weak staining with 100% extent and the other was negative. All three CeMN cases showed weak positivity with an extent of 100%. Two congenital/classical type mesoblastic nephroma cases were negative. Two malignant rhabdoid tumor cases and one MSFT case stained weakly with 100% extent. One Ewing sarcoma was negative. One DLBCL showed moderate staining with an extent of 100%. PDGFR-Alpha One of the 10 CCS cases was negative, and 9 cases showed weak to moderate staining with an extent of 20-100%. In two out of 11 cases with epithelial components, the PDGFR stain could not be evaluated due to technical reasons. The remaining 9 cases showed weak to moderate staining with 40-80% extent. Twelve cases with a blastemal component stained weak to moderate, with an extent of 30-80%. Two cases with rhabdoid features were negative. In fourteen cases containing stromal components, weak staining was observed with an extent of 20-80%. Three CeMN cases showed weak to moderate staining with an extent of 50-90%. One Ewing sarcoma and 2 CMN cases were negative. Two malignant rhabdoid tumor cases showed weak to moderate staining with 10-60% extent.
The MSFT case showed moderate staining with an extent of 80%. The DLBCL case showed moderate staining with an extent of 90%.
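For readers who wish to work with these counts programmatically, the short sketch below reassembles a few of the per-marker results reported above into a tidy table. The positive/evaluated counts are taken from the Results text; the DataFrame layout and column names are our own assumptions for illustration, not the study's actual summary table.

```python
# Illustrative tabulation of selected staining results reported above;
# values come from the Results text, the structure is assumed.
import pandas as pd

rows = [
    # (tumor or WT component, marker, cases positive, cases evaluated)
    ("CCS",            "Cyclin D-1",  10, 10),
    ("WT, epithelial", "Cyclin D-1",   7, 11),
    ("WT, blastemal",  "Cyclin D-1",   0, 14),
    ("WT, stromal",    "Cyclin D-1",   0, 16),
    ("CCS",            "PTEN",         0, 10),
    ("CCS",            "PDGFR-alpha",  9, 10),
]
df = pd.DataFrame(rows, columns=["tumor_component", "marker",
                                 "n_positive", "n_evaluated"])
df["fraction_positive"] = (df["n_positive"] / df["n_evaluated"]).round(2)
print(df.to_string(index=False))
```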
Wilms tumor, clear cell sarcoma, atypical teratoid rhabdoid tumor, and mesoblastic nephroma are pediatric renal tumors; less frequently, Ewing sarcoma has also been reported in this localization. We investigated the role of the following immunohistochemical markers in the differential diagnosis:

Cyclin D-1
Although there are studies suggesting that immunohistochemical markers can be helpful in these tumors, immunohistochemistry is of limited value in the differential diagnosis. Cyclin D-1 is an immunohistochemical marker that has been studied in pediatric renal tumors and has recently been proposed as a sensitive marker for clear cell sarcomas. Jet Aw et al., Mirkovic et al., and Uddin et al. reported immunopositivity in their CCS series of 8, 14, and 19 cases, respectively. In our study, immunopositivity with cyclin D-1 was observed in all 10 CCS cases. In the study of Jet Aw et al., cyclin D-1 was immunonegative in the blastemal and stromal components of 8 Wilms tumors, whereas the epithelial components showed immunopositivity. Mirkovic et al. reported focal positivity in the blastemal component in 18 out of 20 WT cases; the epithelial component showed immunopositivity in most of these cases. Uddin et al. reported that one of 9 WT cases showed weak positivity in the blastemal component and 7 were positive in the epithelial component. In our study, staining intensity was weak to strong with an extent of 10-90% in the epithelial components, while the blastemal and stromal components were negative. Dysregulation of genes of the G1-S phase of the cell cycle in Wilms tumor has been reported previously; this finding explains the cyclin D-1 immunoexpression in the epithelial component of Wilms tumors. In most studies, staining of the blastemal and stromal components of Wilms tumor was not observed, which is compatible with our findings. Cyclin D1 may therefore be recommended as an immunohistochemical marker in the differential diagnosis of CCS and Wilms tumor. Morphologically, classic mesoblastic nephroma consists of a uniform, fibromatosis-like proliferation of fusiform cells with a fascicular appearance, and it might be confused with the stromal component of Wilms tumor and with CCS. Jet Aw et al., Mirkovic et al., and Uddin et al. have reported Cyclin D-1 positivity in classical mesoblastic nephroma cases. In our study, 2 out of 5 mesoblastic nephroma cases were classical and 3 were cellular; only one of the CeMN cases showed diffuse nuclear positivity with Cyclin D-1. Cyclin D-1 is not a helpful immunohistochemical marker in the differential diagnosis of CCS from mesoblastic nephromas, as varying rates of positivity and negativity have been reported. Jet Aw et al. reported patchy immunopositivity in their 6-case series, Mirkovic et al. reported focal positivity in 4 rhabdoid tumor cases, and Uddin et al. reported moderate staining in 3 of their 4 cases. In our study, cyclin D-1 was negative in the 2 malignant rhabdoid tumors. Due to the variable staining characteristics of Cyclin D1 in malignant rhabdoid tumors, it cannot be recommended as an immunohistochemical marker in this differential diagnosis.
While cyclin D-1 showed diffuse and strong immunopositivity in 3 of the 5 Ewing sarcoma cases of Mirkovic et al. and in 3 of the 4 cases of Uddin et al., the one Ewing sarcoma in our study was immunonegative. Since the number of cases was limited and different staining characteristics have been reported in the literature, the role of Cyclin D-1 in the differential diagnosis of Ewing sarcoma from other tumors could not be fully determined. Studies have reported diffuse and strong staining with cyclin D-1 in neuroblastoma cases. Neuroblastoma cases were not included in our study because we could not find any neuroblastoma located in the kidney in our archive. However, in our study, negative staining with Cyclin D-1 was detected in the malignant solitary fibrous tumor and diffuse large B-cell lymphoma cases, which are very rare in the kidney. Cyclin D-1 is a useful immunohistochemical marker due to its strong and diffuse positivity in renal CCS cases, and it might be used to differentiate CCS from the blastemal and stromal components of Wilms tumor.

Beta-Catenin
The catenin beta-1 (CTNNB1) gene encodes the beta-catenin protein, and mutation of this gene primarily affects the WNT-signaling pathway. As a result, the beta-catenin protein is stabilized and its transcriptional activity is increased. Aberrant WNT/beta-catenin signaling leads to developmental malformations and associated malignancies, and the WNT/beta-catenin pathway is frequently activated in Wilms tumors. In the English literature, nuclear positivity has been reported in the blastemal and stromal components of Wilms tumor. In our study, beta-catenin showed cytoplasmic positivity in all stromal, blastemal, and epithelial components of the WT cases. Among the cases that had stromal components, one case with rhabdoid areas showed strong immunopositivity with an extent of 80%. Although nuclear positivity was not detected in our cases, cytoplasmic staining was shown, which suggests that the WNT/beta-catenin pathway might have been activated in Wilms tumors. In addition, cytoplasmic immunopositivity was observed in 7 CCS cases; this signaling pathway has not been studied in CCS cases before. While beta-catenin was negative in classical mesoblastic nephroma, cytoplasmic positivity was detected in the cellular type. In contrast to our study, Demellawy et al. indicated that their classical mesoblastic nephroma and mixed mesoblastic nephroma cases showed cytoplasmic staining while the cellular mesoblastic nephroma cases were immunonegative. We showed that 2 renal malignant rhabdoid tumors were weakly to moderately immunopositive with a staining percentage of 10-60%. Contrary to our findings, Saito et al. reported immunonegativity in 6 cases of malignant rhabdoid tumors, 3 of which were located in the kidney. In our study, the Ewing sarcoma and DLBCL cases were immunonegative. Our findings suggest that this pathway is activated in WT, CCS, CeMN, and rhabdoid tumor; however, the use of immunohistochemistry in the differential diagnosis is limited.

PTEN
MicroRNAs (miRNAs) play a role in the development and progression of cancer as oncogenes or tumor suppressor genes. MiR-21 has been reported to be overexpressed in almost all solid tumors studied and to play a role in the pathogenesis of renal diseases. MiR-21 negatively regulates multiple target genes, such as PTEN. PTEN, in particular, suppresses oncogenic signaling pathways. In their study on 41 cases of Wilms tumor, Cui et al. reported a negative correlation between MiR-21 and PTEN levels.
Low PTEN protein levels have been shown to correlate with a poor prognosis and late clinical stage. Liu et al. performed PTEN immunohistochemistry on 46 WT cases and reported that the tumor did not stain as strongly as the surrounding normal tissue. In our study, we observed weak immunopositivity in the blastemal and epithelial components of the WT cases. In 5 of the 16 cases with a stromal component, moderate positivity was found with an extent of 30-100%. Negativity and a significant loss of expression were observed in the anaplastic component. In the literature, loss of expression has generally been associated with a poor prognosis in Wilms tumor; according to our findings, negativity and significant loss of expression in the areas of anaplasia might likewise be related to a poor prognosis. In a study conducted by Little et al., the PTEN mutation was evaluated by PCR and only 2 of 12 CCS cases were found to be mutated. We observed immunonegativity in all 10 CCS cases; to the best of our knowledge, there is no other study on this subject in the documented English literature. In addition, negativity was detected in our classical mesoblastic nephroma and Ewing sarcoma cases. Immunohistochemical and molecular studies in large case series are needed to establish an association with prognosis.

PDGFR-Alpha
PDGF is an angiogenic factor and is encoded by two different genes, corresponding to its A and B chains. The receptor tyrosine kinases KIT, PDGFR-alpha, and EGFR are involved in cell growth, malignant transformation, and regulation. Overexpression of PDGFR-alpha has been identified in colon, breast, lung, ovarian, and pancreatic carcinomas. Wetli et al. investigated exon 12, 14, and 18 mutations by sequence analysis in 209 Wilms tumor cases and did not detect a PDGFR-alpha mutation; they concluded that PDGFR-alpha immunostaining was not reliable. The epithelial, stromal, and blastemal components of the 16 WT cases in our study showed immunopositivity with varying intensity and extent; negativity was found only in the rhabdoid component. There are no studies in the literature regarding the PDGFR-alpha mechanism in renal tumors other than WT. In our study, 9 cases of CCS, the 3 CeMN, one rhabdoid tumor, the MSFT, and the DLBCL showed immunopositivity, while the Ewing sarcoma and classic MN cases were negative. The use of this immunohistochemical marker in the differential diagnosis is limited, but the role of PDGFR-alpha in the pathogenesis of renal tumors could be investigated.
Immunohistochemically, Cyclin D-1 can be used to differentiate renal clear cell sarcoma from other renal tumors. Loss of PTEN expression might be associated with a poor prognosis in Wilms tumors, while its role and efficacy in the differential diagnosis from other renal tumors is limited. Although the role of immunohistochemistry is limited, the beta-catenin pathway appears to be activated in Wilms tumor, CCS, CeMN, and rhabdoid tumor. The use of PDGFR-alpha as an immunohistochemical marker is limited, but its mechanism in renal tumors has not yet been investigated and could be a subject for future studies.

The authors declare that they have no conflict of interest. This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.