accession_id | pmid | introduction | methods | results | discussion | conclusion | front | body | back | license | retracted | last_updated | citation | package_file
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PMC3016568 | 21224965 | INTRODUCTION
Sepsis syndrome, a systemic response to infection, can produce devastating outcomes even in previously normal individuals. Advances in the treatment of sepsis have prompted attempts to elucidate the ideal plan of management in septic patients. Some investigators have advocated the use of low-dose corticosteroid therapy,[ 1 ] while others have favoured activated protein C, given as soon as the patient arrives in hospital.[ 2 ] In the past, some researchers also used high-dose corticosteroids to suppress the inflammatory response in sepsis, but this was not found to be beneficial.[ 3 ] Although the outcome of septic patients has improved over the past few decades with advances in medical science, controversy persists regarding the use of some treatment modalities such as activated protein C and low-dose corticosteroid therapy. In this article, we briefly discuss the pathophysiology of sepsis, the role of these two agents in sepsis management and the research done on these two agents up to the recent past. | CONCLUSION
In this era of modern anaesthesia, sepsis remains a challenging and complex disease, despite advances in conventional critical care. Conventional therapies used in the management of sepsis are not up to the mark; therefore, the search continues for newer modalities. Activated protein C was introduced with new hopes in sepsis management. Although APC proved its worth in earlier studies, data from recent studies have put a question mark on its efficacy. On the other hand, the current use of low-dose corticosteroid therapy is somewhat ambiguous, as it has produced its effect mainly in patients who have developed adrenocortical insufficiency. On the basis of past studies and recommendations, the following can be offered:
Long-duration low-dose hydrocortisone therapy, tapered over a period of at least three to five days, is recommended for septic shock patients not responding to adequate fluid and vasopressor therapy. Adult patients with sepsis-induced organ dysfunction and a clinical assessment of high risk of death (APACHE II ≥25 or multiple organ failure) should receive rhAPC if there are no contraindications. Adult patients with severe sepsis and a low risk of death (APACHE II <20 or one organ failure) should not be given rhAPC. The effects in patients with more than one organ failure but APACHE II <25 are unclear, and in these circumstances one may use the clinical assessment of the risk of death and the number of organ failures to support the decision. | Despite advances in modern medicine, sepsis remains a complex syndrome associated with significant morbidity and mortality. Multiple organ failure associated with sepsis leads to high mortality and morbidity, and mortality of about 28 – 50% has been reported in patients with sepsis. The number of sepsis patients is increasing, placing a considerable burden on healthcare facilities. Factors leading to a rise in the incidence of sepsis include (1) improvement of diagnostic procedures, (2) an increase in the number of immunocompromised patients under treatment for autoimmune diseases, carcinomas and organ transplantation, (3) advances in intensive care procedures, (4) nosocomial infections and (5) extensive use of antibiotics. With a better understanding of sepsis, various modalities to modify the pathophysiological response of septic patients have been developed. Activated protein C and low-dose corticosteroid therapy have been tried in patients, with variable results. | DEFINITION OF SEPSIS AND SEVERE SEPSIS
In 1992, the American College of Chest Physicians / Society of Critical Care Medicine defined sepsis under the following heads:[ 4 ]
SIRS (Systemic inflammatory response syndrome): Altered pathophysiology without positive blood culture.
Sepsis: SIRS induced by infection.
Severe sepsis: Sepsis with dysfunction of at least one organ or organ system.
Septic shock: Severe sepsis with hypotension.
In 2001, the International Sepsis Definition Conference developed another staging system, designated by the acronym PIRO.[ 5 ]
P: Pre-existing co-morbid conditions that reduce survival.
I: Insult or infection.
R: Response to the infectious challenge.
O: Organ dysfunction or organ failure.
PATHOPHYSIOLOGY OF SEPSIS
Normal haemostasis exists as a finely tuned balance between the coagulation cascade and fibrinolysis: blood typically remains liquid, allowing free flow in the vessels, yet clots appropriately to control bleeding. In sepsis there is total disruption of this fine tuning, shifting the balance toward increased coagulation. Endotoxins released from the cell wall of gram-negative bacteria cause activation of factor X, generation of thrombin (factor IIa) and deposition of fibrin (clot) in the microvasculature. Akasa and others[ 6 ] injected endotoxin into rats and observed microthrombi in the hepatic circulation within five minutes. With continuous exposure to endotoxin, fibrin clots are deposited throughout the microvasculature of the body, resulting in focal areas of hypoperfusion and tissue necrosis, thus causing multiple organ dysfunction.
In normal circumstances, whenever there is excessive coagulation, tissue plasminogen activator generates plasmin from plasminogen, which then lyses the fibrin clots. In sepsis this compensatory mechanism is impaired. Inflammatory cytokines and the thrombin released in sepsis inhibit fibrinolysis in two ways. First, they activate platelets and the endothelium to release plasminogen activator inhibitor; second, thrombin activates the thrombin-activatable fibrinolysis inhibitor. Both these factors inhibit the formation and activation of plasmin and thus impair the fibrinolytic system.[ 7 ]
In normal individuals, the formation of thrombin is regulated by the anticoagulant systems of the body, such as protein C, antithrombin and the tissue factor pathway inhibitor. Protein C is converted into activated protein C by the thrombin-thrombomodulin complex on the endothelial cell surface.[ 8 ] Activated protein C then inactivates factor Va and factor VIIIa, the key factors in the formation of thrombin. In sepsis, however, endothelial injury decreases the level of thrombomodulin on the endothelial surface, so the conversion of protein C to activated protein C is also reduced.[ 9 – 11 ]
MANAGEMENT OF SEVERE SEPSIS
A. Standard therapy
Initial resuscitation: Aggressive fluid resuscitation with either natural/artificial colloids or crystalloids is required during the first six hours. The goals of initial resuscitation of sepsis-induced hypoperfusion should include (1) central venous pressure 8–12 mm Hg and (2) mean arterial pressure (MAP) ≥65 mm Hg.
Early diagnosis and antibiotic treatment: Intravenous antibiotic therapy should be started as early as possible. Appropriate cultures should be obtained before antimicrobial therapy is initiated, provided such cultures do not significantly delay antibiotic administration. A specific anatomical diagnosis of infection requiring consideration for emergent source control should be sought and diagnosed or excluded as rapidly as possible.
Treatment of underlying cause: This may require surgical intervention, such as drainage of an abscess, laparotomy and so on.
Vasopressors: Norepinephrine or dopamine should be used as the first-choice vasopressor agent to correct hypotension in septic shock. Low-dose dopamine is not recommended for renal protection.
Mechanical ventilation: Invasive or noninvasive ventilatory support may be required in patients whose pulmonary function is compromised and who are not able to maintain optimal oxygen saturation.
B. Therapy directed to revert / inhibit the pathophysiological response in sepsis
Human recombinant activated protein C
Low-dose corticosteroid therapy
Human recombinant activated protein C (rhAPC)
To date, the Food and Drug Administration has approved only one drug for therapeutic intervention in patients with sepsis: human recombinant activated protein C (drotrecogin alfa).
Pharmacokinetics
Protein C, a major physiological anticoagulant, is an endogenous human protein encoded by the PROC gene, with the ability to modulate both inflammation and coagulation. Protein C is made in the liver and circulates as a plasma zymogen (an inactive precursor of a protease). This vitamin K-dependent serine protease is activated to APC on the endothelial surface by the thrombin-thrombomodulin complex.[ 12 ] Activated protein C demonstrates a biphasic half-life (t1/2), with a t1/2α of 13 minutes and a t1/2β of 1.63 hours. Activated protein C is inactivated by endogenous plasma protease inhibitors. Owing to this short half-life and metabolism, the drug is rapidly eliminated after stopping the infusion. The volume of distribution (Vd) is comparable to the extracellular volume in healthy adults (16 – 20 L).
Chemical formula: C1786H2779N509O519S29
Molecular weight: 55,000 g/mol
Bioavailability: 100% (intravenous administration only)
Metabolism: Endogenous plasma protease inhibitors
Half-life: Less than 2 hours.
Pharmacodynamics
Activated protein C has three mechanisms of action:
As an anti-inflammatory agent: It exerts an anti-inflammatory effect through indirect inhibition of tumour necrosis factor-alpha, by blocking leucocyte adhesion to selectins and by limiting thrombin-induced inflammatory responses within the vascular endothelium.
As an anticoagulant agent: It inhibits factor Va and factor VIIIa, thus limiting thrombin formation.
As a pro-fibrinolytic agent: It inhibits plasminogen activator inhibitor-1 and decreases activation of the thrombin-activatable fibrinolysis inhibitor.
Dosage regimen of activated protein C
Drotrecogin alfa is a lyophilised powder that must be reconstituted prior to dilution. It is stable in 0.9% normal saline (NS) at a concentration of 100 – 200 mcg/ml. It must be administered through a dedicated intravenous line, although 0.9% normal saline, Ringer's lactate or dextrose solution may run concurrently through the same line.[ 13 ] It is given as multiple infusions over a total duration of 96 hours, provided no single infusion exceeds 12 hours. The dosage is calculated using the formula:
mg of drotrecogin = patient weight (kg) × 24 mcg/kg/hour × hours of infusion / 1000
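As a worked illustration of this formula, the following minimal sketch computes the total dose per infusion bag; the function name and the 70 kg / 12-hour example are illustrative, not from the article:

```python
def drotrecogin_dose_mg(weight_kg: float, infusion_hours: float,
                        rate_mcg_kg_hr: float = 24.0) -> float:
    """Total drotrecogin alfa (mg) for one infusion, per the formula:
    weight (kg) x 24 mcg/kg/hour x hours of infusion / 1000."""
    return weight_kg * rate_mcg_kg_hr * infusion_hours / 1000.0

# Example: a 70 kg patient and a 12-hour bag (the maximum single-infusion
# duration mentioned above): 70 * 24 * 12 / 1000 = 20.16 mg.
print(drotrecogin_dose_mg(70, 12))  # 20.16
```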
ADVERSE EFFECTS AND CONTRAINDICATIONS
As rhAPC is a potent anticoagulant, its major adverse effect is an increased risk of bleeding, and it is therefore contraindicated in patients at increased risk of bleeding. The contraindications are:
Active internal bleeding
Recent haemorrhagic stroke (within three months)
Recent intra-cranial or intra-spinal surgery or severe head trauma (within two months)
Trauma with an increased risk of life-threatening bleeding
Presence of an epidural catheter
Intra-cranial neoplasm or mass lesion, or evidence of cerebral herniation
MONITORING PARAMETERS
If there is any evidence of bleeding, periodic haemoglobin, haematocrit, coagulation profile and complete blood picture should be obtained. As drotrecogin alfa itself prolongs the APTT, the APTT is not a reliable marker of the coagulation profile.
PRECAUTIONS
For percutaneous procedures, the drug should be stopped two hours prior to the procedure and can be restarted one hour afterwards. For elective surgical procedures, the drug should be withheld 12 hours before and 12 hours after the procedure. The drug should be stopped at least two hours before an emergency procedure. For uncomplicated, less invasive procedures, the drug can be restarted immediately. If a patient requires full-dose therapeutic heparin, or there is evidence of active bleeding, the drug should be stopped immediately. All patients on drotrecogin alfa should receive stress ulcer prophylaxis, such as histamine-2 antagonists.
DRUG INTERACTIONS
Human recombinant activated protein C should be used cautiously with other drugs that affect haemostasis, such as aspirin, warfarin and clopidogrel; low-dose prophylactic heparin therapy can be given concurrently with drotrecogin.
STUDIES ON ACTIVATED PROTEIN C
Several studies have reported that APC levels are low in septic patients and that these levels predict outcome.[ 14 15 ] Taylor and others[ 16 ] conducted a study of gram-negative septicaemia in baboons: administration of APC along with a 100% lethal dose of E. coli prevented lethality in all the animals tested, whereas when the animals were pre-treated with an antibody specific for activated protein C, injection of a sub-lethal dose of the organism became 100% lethal.
In July 1998, a multi-centre, randomised, double-blind, placebo-controlled trial of 1690 patients with severe sepsis was begun. The trial, completed in 2001, is popularly known as the PROWESS (recombinant human activated protein C worldwide evaluation in severe sepsis) trial.[ 17 ] Patients received either drotrecogin alfa at 24 mcg/kg/hour or placebo for 96 hours of total infusion time. The APACHE II (Acute Physiology and Chronic Health Evaluation) score was calculated during the 24-hour period immediately preceding the start of drug administration. Statistical analysis indicated a 28-day mortality of 30.8% in the placebo group and 24.7% in the drotrecogin alfa group, an absolute risk reduction in mortality of 6.1%. The difference in mortality between patients given APC and those given placebo was limited to patients at high risk of death, that is, APACHE II scores ≥25; in this group, mortality was reduced from 44% in the placebo group to 31% in the treatment group. The efficacy was doubtful in patients at low risk of death (APACHE II <25). Serious bleeding occurred more often in patients receiving drotrecogin alfa (3.5%) than in patients receiving placebo (2%). The results indicated that one in every five patients who would otherwise have died was saved by drotrecogin alfa treatment.
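To make the trial arithmetic explicit, the sketch below derives the absolute and relative risk reductions and the number needed to treat from the mortality figures quoted above; the variable and helper names are ours:

```python
placebo_mortality = 0.308   # PROWESS placebo arm, 28-day mortality
treated_mortality = 0.247   # drotrecogin alfa arm, 28-day mortality

arr = placebo_mortality - treated_mortality   # absolute risk reduction
rrr = arr / placebo_mortality                 # relative risk reduction
nnt = 1 / arr                                 # number needed to treat

print(f"ARR = {arr:.3f}")  # 0.061 -> the 6.1% quoted above
print(f"RRR = {rrr:.3f}")  # ~0.198, i.e. roughly one in five deaths averted
print(f"NNT = {nnt:.1f}")  # ~16 patients treated per additional life saved
```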
In 2002, the European Medicines Evaluation Agency (EMEA) approved the use of APC in patients with severe sepsis and multiple organ failure, with an annual required review.[ 18 ]
In 2005, the ADDRESS trial submitted its report. This trial, required by the FDA, evaluated the standard 24 mcg/kg/hour dose of APC for 96 hours in a double-blind, placebo-controlled, multi-centre trial in patients with severe sepsis and an APACHE II score <25 or single organ failure. No significant difference was found in the 28-day mortality (17% placebo versus 18.5% APC, P=0.34). The rate of serious bleeding was 2.4% with APC and 1.2% with placebo during the infusion period.[ 19 ]
The RESOLVE trial, with 240 children receiving APC and 237 receiving placebo, submitted its report in 2007.[ 20 ] In this trial too, the 28-day mortality rate was not significantly improved (placebo 17.5% versus APC 17.7%, P=0.39). Unlike the previous studies, the risk of bleeding was equal with placebo and APC (6.8% placebo and 6.7% APC, P=0.97).
The most recent Cochrane meta-analysis on APC was published in 2008. The review included 4434 adults and 477 paediatric patients and did not find any significant reduction in the 28-day mortality in adults, whereas the risk of bleeding was increased.[ 21 ]
On the recommendation of the FDA, another multi-centre, placebo-controlled trial, the PROWESS-SHOCK trial, has been started to determine the efficacy of APC. The trial is expected to be completed by the end of 2011.[ 22 ]
After the PROWESS trial, investigators were very enthusiastic about the use of APC in all septic patients. Its use is now restricted to selected patients with severe sepsis, for two reasons: (1) the cost of treatment with APC is very high, and (2) the reduction in 28-day mortality is not significant in every septic patient.
On the basis of these studies, the SSC guidelines suggested that adult patients with sepsis-induced organ dysfunction associated with a clinical assessment of high risk of death, most of whom will have APACHE II ≥25 or multiple organ failure, should receive rhAPC if there are no contraindications (grade 2B, except for patients within 30 days of surgery, for whom it is grade 2C). The SSC guidelines further recommended that adult patients with severe sepsis and low risk of death, most of whom will have APACHE II <20 or one organ failure, should not be given rhAPC (grade 1A). The effects in patients with more than one organ failure but APACHE II <25 are unclear, and in such circumstances one may use the clinical assessment of the risk of death and the number of organ failures to support the decision.
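The guideline logic quoted above can be summarised in a small rule function. This is an illustrative sketch of the quoted text only, with assumed function and label names; it is not a clinical decision tool:

```python
def ssc_rhapc_recommendation(apache_ii: int, organ_failures: int,
                             contraindicated: bool = False) -> str:
    """Sketch of the SSC rhAPC guidance quoted above (illustrative only)."""
    if contraindicated:
        return "no rhAPC (contraindication present)"
    if apache_ii >= 25:
        return "high risk of death: rhAPC recommended (grade 2B)"
    if apache_ii < 20 or organ_failures <= 1:
        return "low risk of death: rhAPC not recommended (grade 1A)"
    # APACHE II 20-24 with more than one organ failure: effects unclear.
    return "unclear: base the decision on clinical risk assessment"

print(ssc_rhapc_recommendation(apache_ii=27, organ_failures=2))
print(ssc_rhapc_recommendation(apache_ii=18, organ_failures=1))
print(ssc_rhapc_recommendation(apache_ii=22, organ_failures=3))
```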
LOW-DOSE CORTICOSTEROID THERAPY IN SEPSIS
In sepsis there is an imbalance between pro-inflammatory and anti-inflammatory cytokines. The concentrations of IL-1, IL-6 and TNF-α, which are released in abundance from lymphocytes, macrophages and endothelial cells, increase during the development of sepsis.[ 23 ] Studies have shown that NF-κB activity is highest in non-survivors of sepsis.[ 24 25 ]
Corticosteroids could reduce the exaggerated inflammatory response in sepsis and prove beneficial by:
Inhibiting the pro-inflammatory transcription factor NF-κB, both directly and indirectly.[ 26 ]
Promoting the production of anti-inflammatory cytokines such as IL-4 and IL-10. Several studies have suggested that non-survivors of sepsis have a lower concentration of anti-inflammatory cytokines[ 27 28 ] and a higher concentration of pro-inflammatory cytokines.[ 29 – 31 ]
Enhancing the activity of adrenergic receptors.
Increasing myocardial contractility. Corticosteroids inhibit inducible nitric oxide synthase, the source of the vasodilator nitric oxide,[ 32 ] and also inhibit serum phospholipase A2, resulting in decreased production of the vasodilators prostaglandin E and prostacyclin.[ 33 ] The overall effect is an increase in blood pressure.[ 34 ]
Adrenal insufficiency in sepsis
Relative adrenal insufficiency and resistance to glucocorticoids may arise during severe sepsis. TNF-α and IL-6 decrease cortisol production by the adrenal gland and ACTH production by the pituitary gland.[ 35 36 ] Average production of cortisol by the adrenal gland is approximately 5.7 mg/m² of body surface per day. In sepsis, the adrenal gland may produce 150 – 200 mg/m² of cortisol daily. If the adrenal glands are exposed to continuous activation by pro-inflammatory cytokines, they become exhausted, causing relative or absolute adrenal insufficiency. Relative adrenal insufficiency is more common, seen in 16.3 – 55% of patients with septic shock; absolute adrenal insufficiency is seen in only 3% of patients.[ 37 38 ]
Selection and dosages of steroid
The most preferred corticosteroid in sepsis is hydrocortisone. It is usually given as 200 – 300 mg/day in divided doses or as a continuous infusion. Hydrocortisone is preferred because it is the synthetic equivalent of cortisol, the final active physiological glucocorticoid; treatment with hydrocortisone therefore directly replaces cortisol, independent of metabolic transformation. Another advantage of hydrocortisone is its intrinsic mineralocorticoid activity. Annane[ 39 ] used fludrocortisone in addition to hydrocortisone, but the contribution of the mineralocorticoid to the benefit observed in that trial is unknown. Given that hydrocortisone has some mineralocorticoid activity, that absolute adrenal insufficiency is rare in sepsis, and that hydrocortisone by itself has been found to be beneficial, the current guidelines for the treatment of septic shock do not advocate the use of fludrocortisone.[ 40 ]
Role of steroid therapy in shock reversal
Multiple randomised trials in patients with septic shock confirm that low-dose steroid therapy improves blood pressure, thereby reducing vasopressor requirements.[ 41 42 ] The postulated mechanisms by which corticosteroids affect vascular tone are numerous and include signal transduction, prostaglandin metabolism, Na+ and Ca2+ transport, modulation of the adreno-angiotensin system, endothelin and mineralocorticoid receptors, and inhibition of nitric oxide formation.[ 43 44 ] In a French multi-centre, randomised, controlled trial[ 45 ] in 300 patients with severe volume- and catecholamine-refractory septic shock, low-dose corticosteroid therapy improved survival, demonstrating that low-dose hydrocortisone reduces the risk of death in septic shock patients with relative adrenal insufficiency.
Duration of corticosteroid therapy
Intravenous hydrocortisone (200 – 300 mg/day) for seven days, in three to four divided doses or as a continuous infusion, is recommended in patients with septic shock who, despite adequate fluid and vasopressor therapy, are not able to maintain their blood pressure. Minneci and colleagues performed a meta-analysis and found that trials done after 1997 used a median total hydrocortisone dose of 1,209 mg, versus 23,975 mg in the earlier trials; the later trials also used a steroid taper.[ 46 ] Gradual tapering of steroid therapy is required because abrupt discontinuation may lead to rebound hypotension and an increase in the inflammatory response.[ 47 ]
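The divided-dose arithmetic, and one purely illustrative linear taper consistent with the three-to-five-day tapering mentioned earlier, can be sketched as follows; this is not a validated dosing protocol:

```python
def divided_doses(daily_dose_mg: float, doses_per_day: int) -> float:
    """Per-dose amount for an evenly divided daily regimen."""
    return daily_dose_mg / doses_per_day

# 200 mg/day in four divided doses = 50 mg every six hours, matching
# the regimen used in the Annane and CORTICUS trials cited below.
print(divided_doses(200, 4))  # 50.0

# Illustrative linear taper of a 200 mg/day regimen over five days
# (an assumed schedule for illustration only):
taper = [200 * (1 - day / 5) for day in range(6)]
print(taper)  # [200.0, 160.0, 120.0, 80.0, 40.0, 0.0]
```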
Adverse effects of corticosteroid therapy
Worsening of the infection that initiated sepsis
Development of super-infections
Hypernatraemia
Hyperglycaemia
Gastrointestinal bleeding
However, most of these side effects are seen with high-dose corticosteroid therapy. Annane and colleagues performed a meta-analysis of steroid therapy: in 1705 patients from 12 trials, the risk of superinfection was not increased in the corticosteroid group; in 1321 patients from 10 trials, there was no increased incidence of gastrointestinal bleeding; and in 608 patients from six trials, the incidence of hyperglycaemia was not increased.[ 48 ]
These data indicate that low-dose steroid therapy is safe in septic shock patients.
Studies on low-dose corticosteroid therapy
The debate on low-dose corticosteroid therapy began in the year 2000, with the publication of the study by Annane and others. They studied 189 patients with septic shock and found that patients whose plasma cortisol rose by more than 9 µg/dl after corticotrophin stimulation had the best survival rates.[ 49 ]
Another randomised, placebo-controlled, double-blinded trial conducted by Annane and others in 2002 re-defined the use of low-dose corticosteroid therapy in septic shock patients. A total of 299 patients received either hydrocortisone (50 mg intravenously every six hours) and fludrocortisone (50 mcg tablet once a day) or matching placebos for seven days. Prior to drug administration, the patients underwent a corticotrophin stimulation test, with two-thirds qualifying as non-responders and the rest as responders. A significant 28-day survival benefit was reported with corticosteroid treatment, and the improvement in survival was seen primarily in the non-responders.[ 50 ]
The next major study to address this issue was the CORTICUS trial, a randomised, double-blinded, placebo-controlled trial in which 251 septic shock patients received 50 mg of intravenous hydrocortisone and 248 received placebo, every six hours for five days. There was no significant survival benefit in the patients treated with hydrocortisone in comparison with those treated with placebo (P=0.69).[ 51 ]
A recent updated meta-analysis published in 2009 indicated a beneficial effect of corticosteroids in septic shock patients, but the investigators felt the need for further research in this population.[ 52 ]
Annane and co-workers published an updated review of their past research and concluded that, regardless of dose and duration, the use of corticosteroids was not beneficial in septic shock patients.[ 53 ]
In the year 2008, the most recent Surviving Sepsis Campaign guidelines downgraded the role of hydrocortisone in the treatment of septic shock to a lower, 2C recommendation, and recommended the use of corticosteroids only in those adults whose blood pressure is not responding to adequate fluid management and vasopressor therapy. The SSC guidelines state that patients with septic shock should not receive dexamethasone if hydrocortisone is available (grade 2B), and fludrocortisone is considered optional if hydrocortisone is used (grade 2C). In addition, they discourage the corticotrophin stimulation test to identify septic shock patients requiring steroid therapy (grade 2B).[ 54 ] | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):496-503 | oa_package/43/d7/PMC3016568.tar.gz |
PMC3016569 | 21224966 | INTRODUCTION
Preanaesthesia evaluation is the process of clinical assessment by an anaesthetist, which precedes the delivery of anaesthesia care for surgery and non-surgical procedures.[ 1 ] Preanaesthetic clinic (PAC) is a specialty clinic where patients are evaluated before surgery to establish a database upon which risk assessment and perioperative management decisions can be made. It includes an interview and examination of the patient, a review of previous medical, surgical and anaesthesia problems, a detailed account of current medication use, and provisions for obtaining and reviewing preoperative tests.
The goals are to:[ 1 2 ]
create a rapport with the patients and their families and allay their anxiety; reduce the morbidity of surgery; increase the quality of perioperative care; reduce surgical delays and case cancellations; and decrease the cost of preoperative care.
Traditionally, elective surgical patients were admitted to hospital the day before surgery to undergo preanaesthetic assessment, risk optimisation and preoperative preparation. This practice is no longer a routine in many parts of the world because of its lack of cost-effectiveness. In addition, in-patient evaluation did not effectively eliminate day-of-surgery cancellations due to inadequate optimisation of co-morbidities[ 3 ] and administrative factors.
Moreover, as the focus of health care delivery has recently shifted towards ambulatory care, an efficiently functioning PAC is required.
A thorough preoperative evaluation can be as effective as anxiolytic premedication. All departments of anaesthesia should have guidelines regarding timing, preoperative testing and super-speciality consultations for the effective functioning of a PAC, so as to reduce the number of visits. These guidelines should be continuously updated and made available online to all providers within the institution. Efforts should also be made to utilise upcoming telemedicine technology in preanaesthesia consultations. | The goal of preoperative risk assessment is to identify and modify the procedure- and patient-related factors that significantly increase the risk of complications. Preanaesthesia clinics (PACs) have been developed to improve the preoperative experience of patients by coordinating surgical, anaesthesia, nursing and laboratory care. These clinics can also help in developing practice guidelines and decreasing the number of consultations, laboratory tests and surgical cancellations. Though these clinics are present in most of our hospitals, a major effort is needed to upgrade these setups so as to maximise the benefits. This review gives a brief account of the organisation and functioning of PACs. | LOCATION OF PREANAESTHESIA CLINICS
The PAC can be situated in the same hospital complex as other surgical specialty clinics to promote easy accessibility and convenience. Moreover, this may allow patients to be assessed at the PAC on the same day as their surgical appointments.
The main parameters for an ideal PAC location are the following.
It should be easily accessible from the main entrance of the hospital.
Preferably, it should be in the main out-patient department (OPD) block, near the surgical specialties.
It should be separate from the in-patient facility.
It should preferably be on the ground floor.
There should be easy access to the hospital's diagnostic and other support facilities (waiting area, safe drinking water and toilets for patients and their relatives).
The physical design of the clinic should provide adequate space demarcated into areas for
registration and reception, patient interview and examination, patient and family preoperative education and staff rest.
The PAC should provide a relaxed and private atmosphere for the following activities.[ 4 ]
Preanaesthesia evaluation through review of the medical records, history, examination and relevant ancillary testing, followed by risk optimisation through appropriate interventions and consultations.
Discussion of the risks and benefits of anaesthetic options and pain management strategies.
Alleviation of anxiety through counselling.
Patient and family education on topics such as fasting, medications to continue on the day of surgery, special nursing care requirements, anticipated duration of hospital stay, transportation issues and contingency plans for intercurrent illness.
Validation of consent and documentation of advance medical directives (if any).
Reduction of day-of-surgery delays or no-shows via telephone calls made on the day before surgery.
STAFFING REQUIREMENTS
A senior specialist/consultant should be responsible for policy administration and quality assurance.
Medical officers as well as trainees in anaesthesia should be posted to the clinic as part of their training, to ensure training in preanaesthesia evaluation and optimisation. Any queries regarding the optimisation and fitness of the patient should be discussed with the consultant posted there. Any case found to have an American Society of Anesthesiologists (ASA) physical status of more than II should also be discussed with the senior anaesthesiologist involved in giving anaesthesia, so as to avoid last-minute cancellations.
APPOINTMENTS
In general, patients are screened at the PAC from 2 to 30 days or more before their scheduled date of surgery.[ 1 ] There are not enough data in the literature on the optimal timing for preanaesthesia evaluation. Factors that should guide the timing are patient demographics and clinical conditions, the type and invasiveness of the procedure, and the resources available in the specific practice environment. The ASA Task Force has recommended that preanaesthesia evaluation be performed before the day of surgery for patients with high severity of disease and/or undergoing procedures of high surgical invasiveness.[ 1 ]
Scheduling should also ensure an even patient flow, the timely reporting of results by laboratory and diagnostic imaging services, a system of medical referral for timely optimisation and an efficient appointment system.
DATA COLLECTION AND RECORDING
The basic information about the patients, the surgery planned and the past medical and surgical history can be obtained with the help of a specially designed questionnaire. This can be distributed to the patients in the waiting area and filled in by them with the help of relatives or staff nurses. Such practices are widely prevalent in other countries and should be adopted in our country; this will reduce the load on the anaesthetists and go a long way in improving the overall functioning of the PAC.
A computer database of the details of the preanaesthetic check-up of the patients can be made so that it can be reviewed by anybody connected to the network. Such electronic medical records allow standardisation of patient information, avoid redundancy and provide a database for research.
BENEFITS OF AN EFFECTIVE FUNCTIONING PREANAESTHETIC CLINIC
Reduction in excessive preoperative testing
The ASA Task Force recommends that preoperative tests may be ordered, required or performed on a selective basis for purposes of guiding or optimising perioperative management. The indications for such testing should be documented and based on information obtained from medical records, patient interview, physical examination, and type and invasiveness of the planned procedure. A test should be ordered only if it can correctly identify abnormalities and will change the diagnosis, the management plan or the patient’s outcome.
It has been estimated that 60–75% of preoperative tests ordered are medically unnecessary.[ 5 – 7 ] Indiscriminate testing can increase the risk of iatrogenic injury arising from unnecessary testing or treatment when a borderline or false-positive result is obtained.
A false-positive result distracts the physician from detecting or pursuing a clinically more significant problem and may eventually harm the patient. Furthermore, unnecessary testing may cause a delay or cancellation of the planned surgery. It is better not to order an unnecessary test, because the medico-legal risk is greater for not following up an abnormal test result than for not ordering a test that was not indicated; even a clinically insignificant abnormal laboratory finding can result in legal action if it is not evaluated further.
PACs should help hospitals to standardise and optimise preoperative laboratory testing. A protocol based on scientific evidence and local practices should be formulated and circulated among the operating surgeons and the anaesthetists. The surgeons should refer the patients to the PAC with the preoperative testing based on the protocol so as to minimise the delay in their surgery.
REDUCTION IN SUBSPECIALTY CONSULTS
Preanaesthesia evaluation consists of the consideration of information from multiple sources, which may include the patient's medical records, interview, physical examination, and findings from medical tests and evaluations. A thoroughly conducted preanaesthesia evaluation can improve the safety and effectiveness of the anaesthetic processes involved in perioperative care by optimising preoperative co-morbid conditions. Every effort should be made to minimise unnecessary subspecialty consults, which may involve interventions that result in injury, discomfort, inconvenience, delays or costs not commensurate with the anticipated benefits.
A PAC can reduce the use of costly subspecialty consults without affecting patient outcome. The implementation of more stringent consultation algorithms through a high volume, tertiary care PAC led to a significantly reduced rate of preoperative cardiology consultations.[ 8 ] Alternatively, having a PAC staffed by physicians who are trained in both internal medicine and anaesthesia can further enhance quality patient care with hospital cost savings.
Enhanced operating room functioning
PACs have a positive impact on effective functioning of the operating room. Preoperative risk assessment can only be accomplished if adequate knowledge of co-morbid conditions is obtained with the help of old medical records, test results and notes from other hospitals at the time of the preoperative evaluation. Optimisation of preexisting/recently diagnosed medical conditions plays a major role in reducing the cancellations and delays on the day of surgery.[ 9 ]
Delay or cancellation within 24 hours of planned surgery is highly undesirable as it causes distress to the patient, disrupts bed management, reduces operating room efficiency and increases costs incurred by having to maintain a facility that is essentially not generating productivity. Several studies have found significant reduction in the cancellation rates after implementation of outpatient preanaesthesia evaluation services.[ 10 – 12 ]
Fischer reported a decrease in the rate of day-of-surgery cancellations from 1.96% in the year before the implementation of the anaesthesia preoperative evaluation clinic to 0.21% in the year following its implementation at the Stanford University Hospital.
Cancelled cases may delay subsequent cases and waste expensive case setups. By reducing case cancellations, a PAC can improve operating room efficiency on the day of surgery and have a significant financial benefit to a hospital with a busy operating room schedule. This becomes more relevant in our country because of limited resources available.
Preanaesthesia evaluation and day care surgery
Ambulatory surgery is practised widely throughout the world, and out-patient preanaesthesia assessment needs to keep pace with the increasing number and complexity of the ambulatory surgery population. Controversies exist with regard to the timing of preoperative evaluation and the need for out-patient PAC assessment.[ 13 ] There is no strong evidence in the literature on the optimal timing for PAC. Traditionally, patients undergo PAC screening 1–30 days before and are admitted a day prior to the scheduled day of surgery. In order to maintain efficiency and patient safety while minimising delays and cancellations, out-patients posted for day care surgeries are now routinely screened preoperatively using several methods, including questionnaires, telephone interviews,[ 14 ] automated interviews and evaluation at a preanaesthesia assessment clinic. This information identifies potential problems (medical, anaesthetic or social), helps to triage patients according to the risk involved, and guides the ordering of relevant laboratory tests and consultations.[ 15 16 ] It also decreases the workload of the PAC and reduces the number of hospital visits, so there is a reduction in day-of-surgery delays/cancellations and in the overall cost, which is of utmost importance in day care surgery.
Preanaesthesia clinics and telemedicine technology
Internet-based health solutions (telemedicine) have a substantial impact on health care and have been used in medical and surgical specialties for many years.[ 17 ] There has recently been increased interest in utilising telemedicine technology for preadmission anaesthesia consultations. The patient does not have to travel for routine consultations; moreover, if an expert consultation is required for a comorbid disease, it can be obtained immediately via teleconferencing in the presence of an anaesthetist.[ 18 ] This reduces the time spent on super-specialist consultations and the extra investigations which would otherwise be ordered. The patient's estimated cost and time involved in a telemedicine consultation are less than for a conventional preoperative anaesthesia consultation, and the format has been found satisfactory by patients and by both consulting and attending anaesthetists. Telemedicine is thus a boon to the health care system and its providers, helping them deliver better coordinated, quality care to patients.
LIMITING FACTORS
The factors most limiting the implementation of a functioning PAC are lack of finance and a shortage of anaesthetists to run the clinic. Lack of finance is a frequently reported problem, especially in private setups where anaesthetists often work on a fee-per-case basis. However, the overall benefits are of a greater magnitude than the perceived limitations; with the motivation and cooperation of the anaesthetists, a PAC can be established even with limited resources. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):504-507 | oa_package/32/43/PMC3016569.tar.gz |
PMC3016570 | 21224967 | INTRODUCTION
With technological advances, the world is moving fast and, at the same time, the population worldwide is becoming fatter and fatter. Obesity now prevails in all sections of society, in developed, developing and poor countries alike. As people become rich in fat, the economic reserves of their countries become thin. The World Health Organization (WHO) predicts that there will be 2.3 billion overweight adults in the world by 2015, and more than 700 million of them will be obese.[ 1 ]
Obesity is associated with more than 30 medical conditions, including diabetes, high blood pressure, high cholesterol and triglycerides, coronary artery disease (CAD), sleep apnoea, stroke, gallbladder disease and cancers of the breast and colon.[ 2 ] | The purpose of this article is to review the fundamental aspects of obesity, of pregnancy and of the combination of both. The scientific aim is to understand the physiological changes, pathological clinical presentations and the application of technical skills and pharmacological knowledge to this unique clinical condition. The goal of this presentation is to define the difficult airway, highlight the main reasons for difficult or failed intubation and propose a practical approach to management. Throughout the review, an important component is the necessity for teamwork between the anaesthesiologist and the obstetrician. Certain protocols are recommended to meet the anaesthetic challenges, and the review concludes with “what is new?” in obstetric anaesthesia. | DEFINITION OF OBESITY
Obesity is a disorder of energy balance. The term is derived from the Latin word obesus, which means fattened by eating. Obesity is the state of excess adipose tissue mass. The most widely used method to gauge obesity is the body mass index (BMI), equal to weight/height² (kg/m²), also known as Quetelet's index [ Table 1 ].
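As a worked illustration, a minimal sketch of the BMI calculation follows; the class cut-offs assume the standard WHO bands, since Table 1 is not reproduced here:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Quetelet's index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def who_class(bmi_value: float) -> str:
    """Standard WHO BMI bands (assumed here; Table 1 is not reproduced)."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal"
    if bmi_value < 30:
        return "overweight"
    if bmi_value < 40:
        return "obese"
    return "extremely (morbidly) obese"

# Example: 100 kg at 1.60 m gives a BMI of 39.1, classified as obese.
print(round(bmi(100, 1.60), 1), who_class(bmi(100, 1.60)))
```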
PREVALENCE
America is the fattest country in the world. The poor in developed countries and the affluent in developing countries are obese. Data from the National Health and Nutrition Examination Surveys show that the percentage of the American adult population with obesity (BMI more than 30) had increased from 14.5% to 30.5% by 2000, and as many as 64% of US adults over 20 years of age were overweight between 1999 and 2000. Extreme obesity (BMI >40) has also increased and affects 4.7% of the population. In 2003–4, 17.1% of US children and adolescents were overweight and 32.2% of adults were obese.[ 4 5 ] In the United Kingdom, there were 665 women with extreme obesity among an estimated 764,387 women delivering, representing an estimated prevalence of 8.7 cases per 10,000 deliveries (95% CI 8.1–9.4).[ 6 ]
PREVALENCE IN INDIA
In India, obesity has reached epidemic proportions, affecting 5% of the country's population.[ 7 ] The obesity trend has been found to be higher in women of Jalandhar District (Punjab) than in other Indian women populations studied so far, except the women of West Bengal (Das and Bose, 2006) and the Punjabi Bhatia women of Jaipur.[ 8 – 10 ] Indians are genetically susceptible to weight accumulation, especially around the waist.
Saha and others,[ 11 – 13 ] reporting on National Family Health Survey data, noted an increasing trend towards overweight or obesity in Indian women, from 10% in 1998–9 to 14.6% in 2005.
In India, the states that topped the list for rates of obesity were Punjab (30.3% males, 37.5% females), Kerala (24.3% males, 34% females) and Goa (20.8% males, 27% females).[ 14 ]
ADIPOCYTE
Although the adipocyte [ Figure 1 ] has generally been regarded as a storage depot for fat, it is also an endocrine cell that releases numerous molecules. These include the energy balance-regulating hormone leptin, cytokines such as tumour necrosis factor-alpha and interleukin (IL)-6, complement factors such as factor D, prothrombotic factors such as plasminogen activator inhibitor-1, and a component of the blood pressure-regulating system, angiotensinogen. These factors play a role in the physiology of lipid homeostasis, insulin sensitivity, blood pressure control, coagulation and vascular health, and are likely to contribute to obesity-related pathologies.
ROLE OF GENES VERSUS ENVIRONMENT
Obesity is commonly seen in families, and the heritability of body weight is similar to that of height. Inheritance is, however, not Mendelian. Whatever the role of the genes, it is clear that the environment plays a key role in obesity. Nevertheless, the identification of the ob gene mutation in genetically obese ob/ob mice represented a major breakthrough in this field. The OB gene is present in humans and is expressed in fat.
PHYSIOLOGICAL CHANGES IN PREGNANCY AND IN THE RESPIRATORY SYSTEM OF MORBIDLY OBESE PREGNANT PATIENTS
Changes in the respiratory system during pregnancy are manifest as alterations in the upper airway, minute ventilation, lung volumes and arterial oxygenation [ Table 2 ].
Minute ventilation
An increase in minute ventilation is one of the earliest and most dramatic changes in respiratory function during pregnancy. Chest wall compliance is decreased; as a result, the work of breathing is increased and ventilation becomes diaphragmatic and position-dependent. Pulmonary function studies in obesity suggest a restrictive pattern of lung disease, the most constant changes being reductions in expiratory reserve volume, vital capacity and functional residual capacity; the inspiratory capacity, however, is increased in obese parturients. An increased closing volume together with a decreased expiratory reserve volume results in underventilation of the dependent lung regions [ Figure 2 ].
Obstructive sleep apnoea is not uncommon in obese women who become pregnant. Pregnancy has some protective effect on sleep apnoea despite the hyperaemia of the nasal passages.
The combination of increased minute ventilation and decreased functional residual capacity (FRC) increases the rate at which changes in the alveolar concentration of an inhaled anaesthetic drug can be achieved: induction, emergence and changes in the depth of anaesthesia are notably faster.
Ventilatory changes are more important than circulatory alterations in determining the alveolar concentration of inhaled anaesthetics. Dose requirements for volatile anaesthetic drugs are reduced during pregnancy. Thus, inhaled concentrations that are usually considered safe may result in a loss of the protective upper airway reflexes.
CARDIOVASCULAR CHANGES
The increased oxygen demand of obese individuals results in an increased workload on the heart; cardiac output and blood volume are increased, and each kilogram of fat contains 3,000 m of blood vessels. Other important problems are pulmonary arterial hypertension (PAH), left ventricular (LV) hypertrophy, decreased LV contractility and the supine hypotension syndrome; ECG changes show LV strain, flat T waves and low-voltage QRS complexes [ Table 3 ]. Different positions of an obese patient have effects on the lung volumes [ Figure 3 ].
GASTROINTESTINAL CHANGES
Progesterone relaxes smooth muscle and consequently impairs oesophageal and intestinal motility during pregnancy. Although it was long accepted that gastric emptying is delayed during pregnancy, it has recently been suggested that gastric emptying is not always delayed in pregnant women.[ 15 – 17 ]
OBESITY-ASSOCIATED SYSTEM-WISE COMORBID CONDITIONS
The following conditions are associated with obesity [ Table 4 ].
MATERNAL COMPLICATIONS
Hypertension
In a study of 4,100 deliveries in California, the prevalence of pregnancy-induced hypertension was 4.2% in normal-weight women and 9.1% in obese women; the corresponding values were 1.2% and 5.3% for what the authors called hypertension.[ 18 ] The incidence of gestational hypertension increased from 4.8% in the normal-weight group to 10.2% in the obese group ( n = 1,473) and 12.3% in the morbidly obese group ( n = 877).[ 19 ]
Gestational diabetes
In a study of 16,102 women, the incidence of gestational diabetes mellitus (GDM) was 2.3% in the control group, rising to 6.3% in the obese group (OR 2.6) and 9.5% in the morbidly obese group (OR 4.0).[ 19 ] In a UK study, women with a BMI greater than 30 kg/m² were 3.6 times more likely to develop GDM than women with a normal BMI.[ 2 ] Diabetes is associated with increasing overweight and obesity. Sixty percent of women have an unplanned pregnancy and may have undiagnosed diabetes; such pregnancies are at increased risk of foetal malformation in addition to foetal macrosomia.
Maternal deaths
The 18th in the series of reports of the Confidential Enquiries into Maternal and Child Health (CEMACH) in the UK noted that, in the years 2003–5, six women died from problems directly related to anaesthesia, the same number as reported in the 2000–2 triennium. Obesity was a factor in four of these deaths, indicating the magnitude of the problem.[ 20 ]
FOETAL COMPLICATIONS
Congenital malformations
One case–control study found that women with a BMI greater than 31 kg/m² had a significantly increased risk of delivering infants with neural tube defects (NTD) and defects of the central nervous system. The increased risk of NTD in infants of obese women was thought to be related to the lower levels of folic acid reaching the embryo, due to poor absorption and higher metabolic demands.
Macrosomia
Several studies have shown that maternal obesity and excessive weight gain during pregnancy are associated with macrosomic babies.[ 19 21 ] Obesity and pre-gestational diabetes mellitus are independently associated with an increased risk of large-for-gestational-age infants; this impact of abnormal body habitus on birthweight increases with increasing BMI and is associated with significant obstetric morbidity.[ 22 23 ]
Thromboembolism
The risk of thromboembolism is increased in obese parturients. Edwards and others[ 24 ] compared 683 obese women (BMI > 29 kg/m²) with 660 matched women of normal weight (BMI 19.8 – 26.0 kg/m²); the incidence of thromboembolism was 2.5% in the obese women and only 0.6% in the control subjects.[ 24 ] The Royal College of Obstetricians and Gynaecologists (RCOG) in the United Kingdom recommends thromboprophylaxis for 3 – 5 days with low-molecular-weight heparin after vaginal delivery for women who are over the age of 35 years and have a pre-pregnancy or early-pregnancy BMI >30 kg/m² or weight >90 kg.[ 25 ] In addition, the RCOG recommends thromboprophylaxis before, and for 3 – 5 days following, caesarean section for women with a pre-pregnancy or early-pregnancy BMI >30 kg/m² or a current weight >80 kg, and also recommends considering thromboprophylaxis in “extremely obese” women who are hospitalised antenatally.[ 25 26 ]
PRE-OPERATIVE ISSUES
A thorough pre-operative evaluation is essential to minimise surgical complications. All severely obese patients should undergo a pre-operative chest radiograph, electrocardiogram (ECG) and laboratory screening with a complete blood count, liver function tests, electrolyte panel, coagulation profile and urine analysis. A pre-operative cardiology consult is highly recommended. Patients with severe obesity and no further cardiac risk factors should be placed on a peri-operative beta-blocker, whereas those with identifiable cardiac risk factors should undergo additional non-invasive cardiac testing pre-operatively.
The pre-operative anaesthesia consult should include an assessment of the airway and of potential vascular access sites. If there is any evidence of pulmonary dysfunction, arterial blood gas analysis should be performed to identify patients with carbon dioxide retention and to determine the peri-operative oxygen requirements. Patients with significant pulmonary dysfunction should be evaluated by a pulmonologist pre-operatively.
Wound infection, deep venous thrombosis and pulmonary embolism are all associated with obesity. Pre-operative antibiotic prophylaxis with cefazolin or vancomycin should be given at least 30 min before skin incision to allow for adequate tissue penetration.
To prevent a venous thromboembolic event, pneumatic compression devices should be placed on the calves pre-operatively. Pneumatic compression stockings should be placed on the lower extremities of all obese parturients prior to and during surgery as prophylaxis against deep vein thrombosis, and should remain in place until the patient is fully ambulatory. For short out-patient procedures, this is probably sufficient prophylaxis. For longer surgeries, or surgeries performed under general anaesthesia, heparin prophylaxis is recommended; most authors recommend unfractionated heparin 5,000 IU or low-molecular-weight heparin every 12 h, starting before surgery and continuing until the patient is ambulatory.
CUFF SIZE FOR BLOOD PRESSURE
Another problem the anaesthesiologist often encounters with morbidly obese patients is difficulty with non-invasive blood pressure monitoring. Unless the length of the cuff exceeds the circumference of the arm by 20%, systolic and diastolic measurements may overestimate the true maternal blood pressure. Direct arterial pressure measurement may be useful in morbidly obese women, in whom sphygmomanometry is often inaccurate, especially in those with comorbidities such as chronic hypertension and pre-eclampsia. An intra-arterial catheter also offers the opportunity for repeated blood gas sampling, if indicated.
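The 20% rule quoted above amounts to a simple arithmetic check; the function name and the example figures below are ours, for illustration only:

```python
def cuff_adequate(cuff_length_cm: float, arm_circumference_cm: float) -> bool:
    """True if the cuff length exceeds the arm circumference by at least
    20%, per the rule quoted above; shorter cuffs risk overestimating BP."""
    return cuff_length_cm >= 1.2 * arm_circumference_cm

# Example: a 40 cm cuff on a 38 cm arm fails the rule
# (a cuff of at least 38 * 1.2 = 45.6 cm would be needed).
print(cuff_adequate(40, 38))  # False
```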
OPERATING TABLE AND POSITION
Operating table
Selection of an appropriate operating table should occur before surgery. Standard operating tables can hold up to 450 lbs, but tables capable of holding up to 1,000 lbs are available and may be necessary for morbidly obese individuals; an appropriately sized operating table is imperative. The use of two operating tables placed side by side has been described,[ 27 ] but the problem with this technique is that it is impossible to raise, lower or reposition the tables in a completely synchronous manner. Another possibility is to use one set of arm boards placed parallel to the operating table to extend its width, while an extra set of arm boards is used to position the patient's arms.
Position
All morbidly obese patients undergoing caesarean section should be placed in a ramped position with left uterine displacement, regardless of the primary anaesthetic technique. The ramped position has been shown to improve the laryngoscopic view; the effect may be even more important for parturients with large breasts, which can obstruct insertion of the laryngoscope blade. In the ramped position, blankets are folded under the chest and head to bring the external auditory meatus into horizontal alignment with the sternal notch [ Figure 4 ].[ 28 ] This position aligns the oral, pharyngeal and tracheal axes and frees the mandible to accommodate the tongue and the laryngoscope blade. A 30° head-up position may minimise the impact on respiratory mechanics and oxygenation.
Cephalad retraction of a heavy panniculus can cause aortocaval compression, maternal hypotension, non-reassuring foetal heart tones and even foetal death.[ 29 ]
ANAESTHESIA
Caesarean section regional techniques
Continuous lumbar epidural analgesia
A higher percentage of morbidly obese parturients will require caesarean delivery compared with non-obese parturients. Epidural anaesthesia offers several advantages: the ability to titrate the dose to achieve the desired level of analgesia, the ability to extend the block for prolonged surgery, a decreased incidence and perhaps slower onset of hypotension, and utilisation for post-operative analgesia. In obese parturients, the local anaesthetic should be closely titrated using small incremental doses. Epidural anaesthesia alone is usually well tolerated in obese parturients, but the level of analgesia should be carefully tested before the surgeon is allowed to begin the procedure.
Extending labour analgesia for caesarean section requires additional local anaesthetic of a higher concentration than the dilute solutions used to provide labour analgesia. The level of anaesthesia required for caesarean section is at least T4–5. Following injection of a test dose, many anaesthesiologists administer incremental doses of 2% lidocaine with epinephrine until the desired effect is attained. Bupivacaine 0.5% can also be used.
Prophylactic placement of an epidural catheter, when not contraindicated, in labouring morbidly obese women would potentially decrease the anaesthetic and perinatal complications associated with attempts at emergency provision of regional or general anaesthesia.
POSITION
The sitting position is preferred because the line joining the occiput (or the prominence of C7) and the gluteal cleft can be used to approximate the midline. The sitting position also allows the fat of the back to settle laterally and symmetrically, improving identification of the midline. Morbidly obese women tend to be more comfortable sitting on the side of the bed with a stool placed under their feet.
The horizontal lateral recumbent head-down position reduces the incidence of intravascular placement by reducing the venous congestion in the epidural veins.[ 30 ]
NUMBER OF ATTEMPTS
Jordan and others noted that 74.4% of morbidly obese parturients needed more than one attempt for successful epidural needle placement.
DURAL PUNCTURE
There is a 4% incidence of dural puncture in morbidly obese parturients.[31]
Epidural space distance from skin
Hamza and others[32] found that the distance from the skin to the epidural space was significantly shorter when the epidural was performed with the patient in the sitting position than in the lateral decubitus position.[33] In a study in which computed tomography (CT) was used to measure the depth of the epidural space in non-pregnant patients, a standard epidural needle was sufficient for the first attempt, and BMI was a poor predictor of the distance to the epidural space.[34]
IDENTIFICATION OF MIDLINE
Patient assists
The parturient assists the anaesthesiologist verbally by indicating whether she feels the needle more on the left or on the right side of the spine,[ 35 ] which may prove to be a valuable tool when trying to identify the midline in these morbidly obese patients [ Figure 5 ].
A needle guide technique[36] uses an 8.5-cm, 26-gauge needle to probe for the posterior spinous process of the lumbar vertebra. Once the spinous process is located, it can be used as a landmark for epidural needle insertion.
In case difficult epidural placement is encountered, ultrasound imaging should be considered.[37 38] Grau and others suggested that the quality of images obtained with the paramedian longitudinal approach is superior to that of images obtained with the transverse and median longitudinal approaches, although the transverse approach is easier to perform. It is often difficult in obese patients to identify the shadow of the spinous process; instead, the symmetry of the paraspinal muscles can be used.
CORRECT PLACEMENT OF CATHETER AND FIXATION
The risk of epidural catheter dislodgement is increased in obese patients. Sliding of the skin over the subcutaneous tissue has been proposed as an important factor in epidural catheter migration.[39] Iwama and Katayama[40] noticed a 3 cm skin movement in some patients. To counter the tendency of the epidural catheter to migrate, the catheter is placed 7 cm into the epidural space. Hamilton and others[41] demonstrated that an epidural catheter not fixed at the skin could move 1-2.5 cm inward when the parturient's posture was changed from the sitting to the lateral recumbent position, with the greatest change seen in patients with BMI >30. Fixing the epidural catheter to the skin with an adhesive dressing has been recommended.[42] Nevertheless, the failure rate of epidural catheters in the general obstetric population varies between 8% and 13%,[43 44] with the major manifestation of failure being absent or inadequate analgesia.
COMBINED SPINAL EPIDURAL ANAESTHESIA
Combined spinal epidural anaesthesia (CSE) has become a well-established alternative to epidural analgesia. It provides a faster onset of effective pain relief and increases patient satisfaction. The potential drawback of CSE is that the location of the epidural catheter is initially uncertain; in an emergency, this unproven catheter may fail to provide adequate anaesthesia. On the other hand, studies[43 45-47] have shown that a catheter inserted as part of the CSE technique produces anaesthesia more reliably than one placed via a standard epidural technique. The appearance of cerebrospinal fluid (CSF) at the hub of the spinal needle indirectly confirms correct epidural needle placement, which increases the likelihood of a properly working catheter. Lower epidural analgesic requirements have been reported in obese parturients compared with normal patients, probably secondary to a reduced volume of the epidural and subarachnoid spaces due to increased abdominal pressure.[48 49]
SPINAL ANAESTHESIA
Single-shot spinal anaesthesia remains the most common type of anaesthesia employed for delivery of the foetus by caesarean section. The advantages of subarachnoid block include a dense, reliable block of rapid onset. However, its drawbacks comprise the potential for high spinal blockade, profound dense thoracic motor blockade leading to cardiorespiratory compromise, and inability to prolong the blockade. Local anaesthetic requirements are widely believed to be lower in pregnant patients, and the duration of surgery may extend beyond the duration of a single-shot spinal anaesthetic. In such cases, intra-operative induction of general anaesthesia is undesirable and potentially hazardous.
CONTINUOUS SPINAL ANALGESIA
Given the unreliability of epidural catheter placement, it is often preferred to conduct intentional continuous spinal analgesia. An accidental dural puncture during epidural space identification can also be converted to continuous spinal analgesia. This technique provides considerable predictability and reliability, allowing good control of the anaesthetic level and duration of block. The catheter is introduced 2-3 cm into the subarachnoid space. The low incidence of post-dural puncture headache may be attributed to the engorged extradural veins and the large amount of extradural fat, which reduce the CSF leak.[50] In a study, Michaloudis and others found that continuous spinal anaesthesia was useful for the peri-operative management of morbidly obese patients undergoing laparotomy for gastroplastic surgery.
GENERAL ANAESTHESIA CONSIDERATIONS
General anaesthesia demands great discipline and planning on the part of the anaesthesiologist, who must balance the altered physiology and anatomy and apply pharmacological knowledge to a large mass of adipose tissue. The combined anatomical and physiological changes of obesity and pregnancy are unfavourable for the anaesthetist, resulting in an increased incidence of difficult intubation and rapid desaturation during the apnoeic phase.
AIRWAY ISSUES
A “difficult airway” has been defined as the clinical situation in which a conventionally trained anaesthesiologist experiences problems with mask ventilation, with tracheal intubation or with both.[ 51 ] The tracheas of obese patients are believed to be more difficult to intubate than those of normal weight patients.[ 52 – 54 ]
Equipment for difficult intubation
Mayo Clinic
Flexible fiberoptic bronchoscope
Bullard laryngoscope (Circon, Stamford, CT, USA)
ProSeal laryngeal mask airway (LMA North America, San Diego, CA, USA)
Intubating laryngeal mask airway
Combitube (Kendall-Sheridan Catheter, Argyle, MA, USA)
Trachlight (Laerdal Medical, New York, NY, USA)
Jet ventilation apparatus
Cricothyrotomy Seldinger kit
Difficult intubation is defined as inadequate exposure of the glottis by direct laryngoscopy.
Voyagis and others reported that difficult intubation increases with increasing BMI.[53] Factors that have been associated with difficult laryngoscopy include short sternomental distance, short thyromental distance, large neck circumference, limited head, neck and jaw movement, receding mandible and prominent teeth.[55 56] Of these factors, only a large neck circumference was associated with problematic intubation,[57] and logistic regression identified neck circumference as the best single predictor of problematic intubation. Neck circumference was measured at the level of the superior border of the thyroid cartilage. Problematic intubation was associated with increasing neck circumference and a Mallampati score of 3.
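The logistic-regression analysis referred to above is straightforward to reproduce. The sketch below is a hypothetical illustration only: the data are invented placeholders rather than values from the cited studies, and the use of scikit-learn is our own choice, not the authors' method.

# Logistic regression predicting problematic intubation from neck
# circumference (cm) and Mallampati score, on invented example data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: neck circumference (cm), Mallampati score.
X = np.array([[34, 1], [36, 2], [38, 2], [40, 3], [42, 3],
              [44, 4], [45, 3], [47, 4], [39, 2], [43, 3]])
y = np.array([0, 0, 0, 0, 1, 1, 0, 1, 0, 1])  # 1 = problematic intubation

model = LogisticRegression().fit(X, y)
print("log-odds per predictor:", model.coef_)
print("P(problematic) for 46 cm, Mallampati 3:",
      model.predict_proba([[46, 3]])[0, 1])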
AIRWAY ASSESSMENT
Most airway catastrophes occur when airway difficulty is not recognized before induction of anaesthesia. Timely evaluation of the parturient’s airway and adequate preparation to deal with the airway in the non-emergent setting are helpful in avoiding airway catastrophes.
There are a few simple pre-operative bedside determinations that can be performed quickly to evaluate the airway in a pregnant patient. These include, but are not limited to, mouth opening, Mallampati class,[58 59] thyromental distance and atlanto-occipital extension. It is recommended that the airway be reassessed before induction of general anaesthesia.[60]
The ability to protrude the mandible should be assessed. A patient who can protrude the lower incisors anterior to the upper incisors rarely poses difficulty in intubation.[61]
CAESAREAN SECTION IN ANTICIPATED DIFFICULT AIRWAY SITUATION
When a caesarean section has to be performed in an anticipated difficult situation, we are left with three options: awake intubation, regional anaesthesia and local anaesthesia.
AWAKE FIBEROPTIC INTUBATION
Full-aspiration prophylaxis should be instituted before intubation. An anticholinergic drying agent such as glycopyrrolate allows better application and absorption of local anaesthetics to the airway mucosa and thus improves visualization of the oropharyngeal structures. The route of fiberscopic intubation is important in pregnant patients. The nasal mucosa is engorged in pregnancy and, despite vasoconstriction, this can precipitate epistaxis, leading to a compromised airway. The oral route is commonly used and preferred. Topical anaesthesia is the primary anaesthetic for an awake intubation. It can be achieved with a spray of lidocaine at the base of the tongue and lateral pharyngeal walls along with application of lidocaine jelly to the base of the tongue via a tongue blade. Sufficient time must be allowed to anaesthetize all portions of the airway. This helps to minimize the swallowing and gag reflexes. The larynx and trachea can be topically anaesthetized by injection of lidocaine through the cricothyroid membrane or via the suction port of the fiberscope.[ 62 ] The patient is at risk for aspiration if regurgitation or vomiting takes place after topical anaesthesia and before the airway is secured. A shorter interval between application of topical anaesthesia and tracheal intubation lessens the potential of aspiration.[ 63 ]
REGIONAL ANAESTHESIA
Regional anaesthesia is the best possible choice in most cases of anticipated difficult airway. Either spinal or epidural anaesthesia is acceptable, provided no contraindications exist and there is no foetal compromise. When the caesarean section is non-emergent, epidural anaesthesia can be used; when time is limited, spinal anaesthesia is the choice. The advantages of regional anaesthesia include the following: the mother is awake and can protect her airway; airway manipulation is not necessary; and the incidence of acid aspiration is decreased. If regional anaesthesia is administered to a patient with a difficult airway, close monitoring by an experienced anaesthesiologist is essential.
LOCAL ANAESTHESIA
In developing countries, this method is still used when the emergency condition of the parturient demands immediate intervention. In India, where certain communities have a high prevalence of pseudocholinesterase deficiency, this poses a special problem: succinylcholine is avoided in such patients, leaving the anaesthesiologist with fibreoptic intubation or local anaesthesia. The awake mother retains a protected airway.
CAESAREAN SECTION: UNANTICIPATED DIFFICULT AIRWAY
In a patient requiring an emergency caesarean section for foetal distress and failed intubation, management goals include maternal oxygenation, airway protection and prompt delivery of the baby. If possible, consider returning to spontaneous ventilation, awakening the mother and calling for help.
Failed initial attempts at intubation
The recommendation in the case of a grade III laryngoscopic view is that no more than three attempts at laryngoscopy and intubation should be made. In a grade IV laryngoscopic view, the Difficult Airway Algorithm should be followed without delay.[ 64 ] Call for help immediately if surgery needs to be performed.
Non-emergent pathway: Can ventilate, cannot intubate situation
In an elective caesarean where we can ventilate but cannot intubate, mask ventilation is continued with cricoid pressure until the patient is fully able to protect her airway. Adequate oxygenation without aspiration is the goal.
LARYNGEAL MASK AIRWAY
As per the 2003 practice guidelines for management of the difficult airway, the laryngeal mask airway (LMA) is the tool of choice in a can ventilate, cannot intubate (CVCI) situation. The LMA has revolutionised management of the difficult airway and should be used earlier rather than later following failed endotracheal intubation. Han and colleagues reported the successful use of the LMA as a ventilatory device in 1,060 of 1,067 patients for elective caesarean delivery.[65] In a German survey, LMAs were available in 91% of the obstetrics departments, similar to figures from the United Kingdom (91.4%); according to the same survey, 72% of the anaesthesiologists favoured the LMA as the first treatment option for the CVCI situation.[66] In a survey in the United Kingdom, 71.8% of the obstetrical anaesthesiologists advocated use of the LMA in a CVCI situation, and eight anaesthesiologists stated that the LMA had proved to be a "lifesaver".[67] Recently, 18 obstetrics units in Ireland were surveyed for difficult airway equipment. All of the units had the LMA as an alternative device for ventilation and intubation, and fifty percent also had an intubating laryngeal mask airway (ILMA) among their airway equipment.[68] Ezri and colleagues conducted a survey in Israel to evaluate the familiarity of Israeli anaesthetists with airway devices. Ninety-six percent of the anaesthetists were skilled with LMAs and 73% with fiberoptics. Of the obstetrical rooms surveyed in this study, only 36% were equipped with laryngeal masks, 24% with fiberscopes and 22% with equipment for tracheal puncture.[69]
PROSEAL-LMA
The design of the ProSeal LMA (PLMA) reliably allows positive pressure ventilation up to 30-40 cm H2O. The seal is thus 10 cm H2O higher than that of the classic LMA, giving it greater ventilatory capability. The PLMA has been successfully used in parturients after failed intubation during rapid-sequence induction.[63 70 71]
ILMA
ILMA has also been used in parturients after failed intubation.[ 72 73 ]
LARYNGEAL TUBE
The laryngeal tube (LT) is a newer supraglottic airway device, and the laryngeal tube suction (LTS) is a newer-generation LT fitted with a second lumen for suctioning and gastric drainage. The LT has recently been used in a parturient undergoing an urgent caesarean section in a CVCI situation.[72 74]
COMBITUBE
The Combitube has been successfully used for the management of failed intubation in caesarean delivery.[75] It provides an option for blind intubation of either the oesophagus or the trachea; in either position, the patient can be oxygenated and ventilated, and the airway is protected against aspiration of gastric contents.
Transtracheal jet ventillation
It is probably the fastest route to oxygenation in a desaturating patient.
Cricothyroidotomy and surgical tracheostomy
Percutaneous cricothyrotomy is as safe, quick and easy to perform as transtracheal jet ventilation (TTJV).[76]
FAILED INTUBATION DRILL
If the initial attempts to intubate the trachea fail, it is critical to follow a difficult airway algorithm [Figure 6] and to focus on maternal oxygenation. Mask ventilation is best achieved with an oral airway and three people: one to apply cricoid pressure, a second to maximise jaw thrust and a third to squeeze the bag and monitor the patient. If ventilation fails, the team should insert a supraglottic airway device and prepare to create a surgical airway. The LMA is the preferred choice of many anaesthesiologists. In elective cases, fiberoptic intubation is considered.
GENERAL ANAESTHESIA PROCEDURE
General anaesthesia considerations: Prevention of acid aspiration and its related precautions
It is standard practice to administer 30 ml of a non-particulate antacid (0.3 M sodium citrate) 30 min before the initiation of any anaesthetic. An H2 antagonist such as ranitidine, or a proton pump inhibitor such as omeprazole, given the evening before and again 60-90 min before induction of anaesthesia, further reduces gastric acidity and volume; a prokinetic agent such as metoclopramide may help further, especially in patients with diabetes.[77-80]
PRE-OPERATIVE OXYGENATION
Pre-oxygenation and denitrogenation are crucial in these patients before induction of general anaesthesia. The most common method is 3-5 min of 100% oxygen breathing. Baraka et al.[81] showed that pre-oxygenation achieved by eight deep breaths within 60 s at an oxygen flow of 10 L/min resulted not only in a higher PaO2 but also in slower haemoglobin desaturation compared with the four-deep-breaths technique.
INDUCTION AND MAINTENANCE
Induction may be achieved with pentothal sodium 4 mg/kg, up to a maximum of 500 mg, dosed per unit of body weight. A prolonged duration of action is expected due to the increased central volume of distribution and prolonged elimination half-life. Intubation can be achieved with succinylcholine 1-1.5 mg/kg, up to 200 mg; plasma cholinesterase activity is increased in the obese, requiring a larger initial dose. Capnography and bilateral lung auscultation should be used to confirm successful intubation before surgical incision. Patients with morbid obesity experience a further decrease in FRC under general anaesthesia. Techniques to maintain oxygenation include (1) increasing the tidal volume to 12-15 ml/kg, (2) increasing FIO2 to >50%, (3) head-up positioning and (4) panniculus suspension.
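The capped, weight-based dosing quoted above lends itself to a short worked example. The helper below is a minimal sketch for illustration only: the function and parameter names are ours, and it is not intended as a clinical dosing tool.

# Weight-based dose with the ceilings quoted in the text:
# thiopentone 4 mg/kg up to 500 mg; succinylcholine 1-1.5 mg/kg up to 200 mg.
def capped_dose(weight_kg: float, mg_per_kg: float, ceiling_mg: float) -> float:
    """Return the weight-based dose, never exceeding the stated ceiling."""
    return min(weight_kg * mg_per_kg, ceiling_mg)

print(capped_dose(140, 4.0, 500))  # thiopentone for 140 kg -> 500.0 (capped)
print(capped_dose(140, 1.5, 200))  # succinylcholine for 140 kg -> 200.0 (capped)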
Isoflurane, sevoflurane and desflurane are all used in standard concentrations in obese parturients; desflurane allows faster recovery when compared with sevoflurane. Dense intra-operative neuromuscular blockade is best achieved by titrating intermediate-acting agents using a twitch monitor. Emergence, extubation and recovery represent critical periods for obese women who deliver under general anaesthesia. To maximise safety during this period: (1) ensure adequate return of muscle function with a nerve stimulator and neostigmine reversal, (2) insert an orogastric tube to empty the stomach just before emergence, (3) delay extubation until the patient is completely awake and able to meet intensive care extubation criteria, (4) administer oxygen and (5) continue monitoring.
POST-OPERATIVE CARE
Obese parturients are at increased risk of post-operative complications such as hypoxaemia, atelectasis and pneumonia, deep vein thrombosis and pulmonary embolism, pulmonary oedema, post-partum cardiomyopathy, post-operative endometritis and wound complications such as infection and dehiscence.[ 82 83 ] Early mobilization, thromboprophylaxis, aggressive chest physiotherapy and adequate pain control are the key to the success of effective post-operative care. Nursing in the reclined position and oxygen supplementation can potentially reduce critical respiratory events.
Early mobilization has been shown to improve the respiratory volumes in the immediate post-operative phase.[ 84 ] Interestingly, Hood and Dewan found that, in morbidly obese women, all post-partum complications occurred in those undergoing caesarean section and not in those having vaginal delivery.[ 83 ] Pain control should be adequate in the post-operative period to facilitate mobilization and chest physiotherapy as it is one of the determinants of post-operative maternal morbidity. Epidural analgesia has been shown to improve the post-operative respiratory function in patients undergoing abdominal surgery.[ 85 ] Epidural infusion of local anaesthetic with opioids improves the quality of dynamic post-operative pain relief.[ 86 ] Patient-controlled intravenous opioids have also been successfully used for post-operative pain relief in the morbidly obese.[ 87 ] Thromboembolic episodes remain the leading cause of direct maternal deaths in the UK. Obesity is a known independent risk factor for deep vein thrombosis. Both pharmacological and mechanical strategies are used for thromboprophylaxis, and an adequate dose of an anticoagulant for an appropriate duration is recommended. Obesity cardiomyopathy is a well-recognized clinical entity and at least three cases of peripartum cardiomyopathy in obese patients have been reported.[ 83 88 89 ] Wound complications occur more frequently in obese than in non-obese patients and often lead to prolonged recovery. They have been found to be increased with midline abdominal incision compared with Pfannenstiel incision.[ 90 ] Hospital stay and costs have been found to be increased for morbidly obese patients after both vaginal delivery and caesarean section.[ 91 ]
GUIDELINES RECOMMENDED
All obstetric units should develop protocols for the management of morbidly obese women. These should include pre-assessment procedures and special community, ward and theatre equipment, such as large sphygmomanometer cuffs, hoists, beds, operating tables and long regional block needles. Morbidly obese women should be referred for anaesthetic assessment and advice as part of their antenatal care; management by consultant anaesthetists is essential, and difficulties with airway management and intubation should be anticipated. Positioning the woman requires skill, and sufficient manpower is essential in the event that induction of general anaesthesia is required.[92] Direct arterial pressure measurement may be useful in morbidly obese women, in whom sphygmomanometry is often inaccurate. All morbidly obese women in childbirth should be given prophylactic low-molecular-weight heparin, with the duration of therapy determined in view of likely immobility. Thromboembolic stockings of an appropriate size need to be available.
FAT IS NOT A THREE-LETTER WORD. IT KILLS THE WORLD
What is new in obstetric anaesthesia?
Ephedrine versus phenylephrine to treat hypotension after spinal anaesthesia: investigators compared varying combinations of the two drugs given by infusion to keep the blood pressure at baseline. Haemodynamic control was better for the mother, and foetal acid-base status was better, when phenylephrine was used instead of ephedrine. Low-dose bupivacaine with phenylephrine provided the best haemodynamic stability during subarachnoid block.
The Mallampati classification is not static and should be assessed just before instrumentation. A Mallampati score ≥3 and a large neck circumference were most useful, and it is suggested that neck circumference be included in our pre-operative assessment.
An oxytocin bolus produced hypotension, tachycardia, chest pain and signs of myocardial ischaemia on 12-lead ECG; oxytocin is not safe to give as an IV bolus.
Embolism remains the leading cause of maternal death in the US.
Spinal anaesthesia is preferred in severe pre-eclampsia.
PMC3016571 (PMID 21224968)
INTRODUCTION
The clinical environment contains a plethora of bells, beeps and buzzers. Intensive care unit (ICU) monitors have alarm options to alert the staff to critical incidents.[1] Since critically ill patients have different underlying diseases, the ability of ICU monitors to detect abnormality is limited, and alarm limits need to be appropriately adjusted. With this objective in mind, this study was conducted with the aim of assessing the existing attitude among resident doctors towards ICU alarm settings.
METHODS
This study was conducted among residents working in the ICU of a multispeciality centre, with the help of a printed questionnaire.
1. Which monitors are routinely employed in the ICU?
2. Are you aware of the presence of different types of alarm and of the settings of alarm limits on the monitor?
3. Do you check whether alarm limits have been set for various parameters?
4. Do you set the alarm limits daily for each and every parameter?
5. Within what range of the baseline do you set these limits?
6. What do you do when an alarm sounds in any of the monitors? (Choose to ignore it the first time it sounds; disable it temporarily; disable it temporarily and start looking for a cause yourself; disable it temporarily and summon expert senior help)
7. What do you do in case the above measures fail and the alarm keeps sounding persistently? (Increase alarm limits; choose to ignore the alarm; try to disable the alarm permanently; switch off the monitor)
8. Are you aware of alarm priority and the colour coding of alarms?
9. What are the various reasons for alarms in the ICU? (Truly related to the patient's clinical status; technical error; patient movement/disturbance of the sensor by doctor or nursing personnel)
10. Who should set the alarm levels?
The data were entered into an Excel sheet and analysed.
RESULTS
The study involved 80 residents: 34 were postgraduate (PG) residents and the rest were senior residents (SRs) in the field of anaesthesiology. All residents were in full agreement on the routine use of electrocardiogram (ECG), pulse oximeter, capnograph and non-invasive blood pressure (NIBP) monitoring, and 86% of residents recognised the necessity of monitoring oxygen concentration, apnoea and expired minute ventilation; this awareness was best among 1st year postgraduates and 3rd year SRs. All residents (100%) were aware of the presence of different types of alarm and of the settings of alarm limits on the monitor. 87% of PGs and 70% of SRs routinely checked alarm limits for various parameters, and 50% of PGs and 46.6% of SRs set these alarm limits daily for all parameters; awareness increased from 1st to 3rd year among PGs but decreased from 1st to 3rd year among SRs. Regarding the range around baseline used for alarm limits, 11% of the residents set 30%, 10% preferred 15%, 3% set 10%, 3% set 25% and the rest set 20%. The initial response to an alarm among all the residents was to disable the alarm temporarily and try to look for a cause. 55% of PGs and 66% of SRs increased the alarm limits so that the alarm would stop sounding; if their attempts to find a cause were unsuccessful, 14% of 3rd year SRs ignored the alarm altogether and 6% of 2nd year PGs disabled the alarm permanently. 92% of PGs and 98% of SRs were aware of alarm priority and colour coding. 55% of residents believed that alarms occurred due to patient disturbance, 15% believed they were due to technical problems with the monitor/sensor and 30% thought they were truly related to the patient's clinical status. 82% of residents set the alarms themselves, 10% believed that alarms should be adjusted by the nurse, 4% believed the technical staff should take responsibility for setting alarm limits and 4% believed that alarm levels should be pre-adjusted by the manufacturer.
DISCUSSION
The need for various monitoring parameters varies among patients because of their clinical status. In our study, we found that the residents were in full agreement regarding the need for basic monitoring parameters for patients in the ICU, but varied responses were observed regarding specific parameters and alarm settings.
Many alarms are spurious, due to patient movement, artefact, or problems with the sensor, the algorithms or the patient-equipment contact.[1] Proper setting of alarm limits is essential. Default alarm limits are not applicable to all patients, because baseline parameters differ, as does the action required when a monitored parameter changes by a given amount. Various studies have concluded that over 90% of alarm sounds may not be clinically important.[1 2] In one study, 72% of all alarms resulted in no medical action;[3] the study reported positive and negative predictive values for alarms of 27% and 99%, respectively. ICU alarms produce sound intensities above 80 decibels (dB). False alarms in the ICU can lead to disruption of care, impacting both the patient and the clinical staff through noise disturbance, desensitisation to warnings and slowing of response times, leading to decreased quality of care.[1] Sleep deprivation and depressed immune function have also been reported.[1] In a survey regarding ICU alarms, 52.2% of the nurses considered themselves responsible for controlling alarm limits.[4] Being able to set multiple levels, possibly with different levels of alarm urgency, would be a more ergonomic way of dealing with this problem. In all cases, it is necessary to set the threshold alarm limit, and there is no standard for default alarm settings. Siebig and others suggested combining several alarms into a new one and adding trend alarms to allow narrower threshold alarm limits.[4]
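Two of the quantities discussed above, patient-specific threshold alarm limits expressed as a percentage of baseline and the positive/negative predictive values of alarms, are simple to compute. The sketch below is illustrative only: the function names are ours, and the 2x2 count table is invented so that it reproduces the 27%/99% predictive values cited above.

# (1) Threshold alarm limits around a patient's own baseline.
def alarm_limits(baseline: float, percent: float = 20.0) -> tuple:
    """Lower and upper threshold alarm limits at baseline +/- percent."""
    delta = baseline * percent / 100.0
    return baseline - delta, baseline + delta

# (2) Predictive values of alarms from true/false positive and negative counts.
def predictive_values(tp: int, fp: int, tn: int, fn: int) -> tuple:
    """PPV = TP/(TP+FP); NPV = TN/(TN+FN)."""
    return tp / (tp + fp), tn / (tn + fn)

print(alarm_limits(80))                  # heart rate of 80 -> (64.0, 96.0)
print(predictive_values(27, 73, 99, 1))  # -> (0.27, 0.99), i.e. 27% and 99%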
In another clinical study, when asked for their opinion on how to adequately set monitoring alarms, cardiac anaesthesiologists named only heart rate and arterial blood pressure alarms; all other cardiovascular alarms were disabled.[5] Regarding alarm limit settings, they would set systolic arterial pressure alarms at ±30 mmHg, heart rate alarms at ±30 bpm and the lower oxygen saturation alarm limit at 90%.[6] In an interventional study, there was a statistically significant improvement in alarm performance when the limits were adjusted according to the patient's real baseline values.[7]
We conclude that although alarms are an important, indispensable and lifesaving feature, they can be a nuisance and can compromise the quality and safety of care through frequent false positive alarms. We should be familiar with the alarm modes, and should check and reset the alarm settings at regular intervals or after a change in the clinical status of the patient.
PMC3016572 (PMID 21224969)
INTRODUCTION
Postoperative ventilation of patients undergoing "on pump" cardiac surgery has been the standard practice for the last three decades because of a relatively high risk of respiratory insufficiency and a low cardiac output state associated with the universal use of high-dose opioid anaesthetic techniques.[1] However, with recent advances in anaesthesia, surgery, myocardial protection, haemodynamic monitoring and postoperative analgesia, several authors have promoted operating room (OR) extubation after on pump cardiac surgery[2] and observed reduced resource utilisation, mortality and postoperative complications.[2-5] High-quality postoperative analgesia and quick rehabilitation have been found with immediate extubation of patients in the OR, using thoracic epidural analgesia (TEA) along with general anaesthesia.[2-5] We therefore conducted a prospective randomised study in which we compared the haemodynamic, respiratory and pain control parameters and the complications of patients extubated in the OR with those of patients electively ventilated in the postoperative period after on pump cardiac surgery, with TEA in both groups.
METHODS
After obtaining institutional ethical committee’s approval, we conducted this prospective, randomised study. Written informed consent was obtained from all patients for epidural catheter insertion and immediate extubation in the OR as well as elective ventilation in the postoperative period.
From June 2006 to July 2008, a total of 72 patients, aged between 28 and 45 years, undergoing on pump open heart surgery by one surgical team, were included in our study. Patients having a contraindication to the insertion of an epidural catheter[ 6 ] and who refused to participate were excluded from our study.
Preoperative anticoagulation with clopidogrel/warfarin was replaced with low molecular weight heparin (LMWH) 10 days before surgery, and the last dose of LMWH was administered 12 hours before insertion of the epidural catheter. An international normalised ratio (INR) >1.2 or a prolonged activated partial thromboplastin time (APTT) was corrected to normal values before insertion of the epidural catheter.
Anaesthesia was performed in the same fashion in all patients. Monitoring included electrocardiogram (ECG), pulse oximetry (SpO2), end tidal carbon dioxide (EtCO2), invasive blood pressure using a radial artery catheter, central venous pressure using internal jugular venous access, arterial blood gas (ABG), core temperature (oesophageal and rectal), urine output, BIS (Aspect 2000 Monitoring System) and train-of-four (TOF Watch, Organon, Dublin, Ireland). In all the patients, an epidural catheter was inserted at the T2-T3 or T3-T4 interspace under local anaesthesia, at least 12 hours preoperatively. The epidural catheter was removed 48-72 hours after surgery.
All the patients were premedicated with tablet diazepam (0.2 mg/kg), 90 minutes before arrival in the OR. Before induction of anaesthesia, all monitors were attached and the epidural catheter was checked for proper placement in the epidural space. Anaesthesia was induced with fentanyl 5 μg/kg i.v., followed by administration of propofol 1-2 mg/kg. Endotracheal intubation was facilitated with i.v. vecuronium 0.1 mg/kg. Intermittent positive pressure ventilation was started and adjusted to achieve an EtCO2 in the range of 30-35 mm Hg. Anaesthesia was maintained with sevoflurane to achieve a BIS between 40 and 50. TOF was measured every 6 minutes; when the TOF count was 1 or more, vecuronium (0.02 mg/kg) was administered. Intraoperative analgesia was provided with a bolus of 4-8 ml of 0.25% bupivacaine with 3 μg/ml of fentanyl citrate through the epidural catheter, 15 minutes before skin incision, followed by an epidural infusion of bupivacaine (0.25%) at 10 ml/hour and fentanyl at 40 μg/hour using an infusion pump. In the intraoperative period, if the BIS was maintained between 40 and 50 but analgesia was inadequate [increase in heart rate (HR) and mean arterial pressure (MAP) >20% of baseline], the epidural infusion rate was increased; conversely, when hypotension (MAP <60 mm Hg) was detected, the infusion rate was decreased in conjunction with fluid replacement. A circulating water mattress, an air warmer, routine use of warmed fluids and an OR temperature ≥22°C were used to maintain the core temperature within 35-37°C. Open cardiopulmonary bypass (CPB) with an arterial filter was used. Heparinisation (300 IU/kg) was adjusted to maintain an activated clotting time >400 seconds and was antagonised after completion of CPB with protamine 1.3 mg per 100 IU of heparin. Blood flow was maintained between 2.4 and 2.8 L/min per m2 of body surface area (BSA). Perfusion pressure was kept within the range of 60-70 mm Hg. The CPB temperature was maintained between 33 and 34°C, and full rewarming to 37°C was achieved before weaning from CPB. Intermittent whole blood cardioplegia from the arterial line of the CPB unit, supplemented with K+, Mg2+ and adenosine, was given in an antegrade or retrograde manner. During the ischaemic period, bradycardia (HR <40 beats/minute) was treated with increments of atropine 0.3 mg i.v., and hypotension (MAP <60 mm Hg) with increments of phenylephrine 50 μg i.v.
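The fixed ratios in this protocol (heparin 300 IU/kg, protamine 1.3 mg per 100 IU of heparin, and pump flow of 2.4-2.8 L/min per m2 of BSA) make a compact worked example. The sketch below is illustrative only; the DuBois BSA formula is our assumption, as the protocol does not state how BSA was derived, and it is not a clinical tool.

# Perfusion arithmetic from the protocol above, on example inputs.
def bsa_dubois(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m2) by the DuBois formula (our illustrative choice)."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def cpb_numbers(height_cm: float, weight_kg: float) -> dict:
    heparin_iu = 300 * weight_kg  # heparinisation at 300 IU/kg
    return {
        "heparin_IU": heparin_iu,
        "protamine_mg": 1.3 * heparin_iu / 100,  # 1.3 mg per 100 IU heparin
        "flow_L_min": tuple(round(i * bsa_dubois(height_cm, weight_kg), 2)
                            for i in (2.4, 2.8)),  # 2.4-2.8 L/min/m2
    }

print(cpb_numbers(165, 60))  # 18000 IU heparin, 234 mg protamine, ~4.0-4.6 L/min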
At the end of the surgery, the patients were randomly allocated into two groups with an equal number of patients: Group V ( n =36) and Group E ( n =36). Patients of Group V were electively ventilated in the postoperative period for 8-12 hours using propofol for sedation and endotracheal tube tolerance. In Group E, at the end of surgery, neuromuscular block was reversed with neostigmine 0.05 mg/kg and glycopyrrolate 0.01 mg/kg and patients were extubated in the OR. The standard extubation criteria were as follows: (i) cooperative and alert patient; (ii) smooth spontaneous ventilation; (iii) sustained head lift and TOF>0.8 at the adductor pollicis; (iv) SpO 2 >96% on FIO 2 of 1, EtCO 2 <45 mm Hg; (v) stable haemodynamics; (vi) core temperature ≥35°C and (vii) no evidence of early surgical complications.
Postoperatively, the epidural infusion rate was increased by 1 ml/hour every 2 hours, and a bolus of 3-5 ml of solution (bupivacaine 0.125% and fentanyl 3 μg/ml) was administered, in both groups, in case of inadequate analgesia (determined by an increase in MAP and HR of 20% above baseline, or on patient demand).
Postoperatively, HR, blood pressure and respiratory rate were recorded every 2 hours. Complications such as reintubation, bleeding, haemodynamic problems (bradycardia, hypotension, arrhythmia, need for cardiac pacing) and respiratory dysfunction (PaO2, PaCO2) were noted, as were nausea, pruritus and episodes of paraesthesia.
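The group-size calculation reported in the next paragraph can be cross-checked with a standard power analysis for a two-sample t-test. The sketch below is a hypothetical illustration using statsmodels; the effect size is an assumed standardised difference (Cohen's d), not a value taken from the study.

# Sample size per group for an unpaired t-test at alpha = 0.05, power = 0.8.
from statsmodels.stats.power import TTestIndPower

d = 0.9  # assumed standardised effect for a 20% difference in MAP/HR
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8)
print(round(n_per_group))  # roughly 20 per group under this assumption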
Group size was calculated to achieve adequate power: a group of 20 patients was estimated to show at least a 20% difference in MAP and HR, 4 hours after surgery, with a power of 0.8. Intergroup comparison was done by applying the F test and the unpaired two-sample t-test, as appropriate. A P value of <0.05 was considered significant.
RESULTS AND ANALYSIS
There were no significant differences in age, sex, weight, ejection fraction, preoperative medical conditions, surgical procedures or CPB time between the two groups [Tables 1 and 2]. Room and body temperatures at the beginning and at the end of surgery were comparable between the two groups. Insertion of the thoracic epidural catheter was successful in all the patients, and the catheter was removed at 56 (SD 5) hours, with no between-group difference. Of the 36 patients in Group E, 32 (89%) satisfied the extubation protocol and were successfully extubated in the OR; no patient required reintubation and ventilation within 24 hours. Of the remaining four patients, two with a low core temperature, one with impaired consciousness (due to transient cerebral ischaemia) and one with an HR >120/minute were not extubated in the OR; these patients were ventilated postoperatively in the intensive care unit and extubated within 4 hours, after satisfying the extubation protocol. The mean (SD) PaO2 (on FIO2 of 1.0) immediately after extubation in Group E was 247 (63) mm Hg, and in Group V it was 242 (61) mm Hg. The mean (SD) PaCO2 in Group E immediately after extubation and in Group V at the end of surgery were 39.75 (5.25) mm Hg and 38.25 (3.75) mm Hg, respectively. These values were not statistically different. Moreover, PaO2 and PaCO2 values estimated at different intervals postoperatively were also comparable in the two groups [Table 3].
There was no intraoperative bradycardia in either group. Increments of phenylephrine were used during ischaemia in 15 patients in Group E and 12 patients in Group V, a difference that was not statistically significant. Mean blood pressures and HRs were stable and did not differ statistically between the two groups in the postoperative period [Table 4]. The rates of epidural infusion were also comparable.
The frequency of postoperative complications was comparable in the two groups [Table 5]. No patient suffered from respiratory insufficiency after extubation or from postoperative myocardial infarction. Transient atrial fibrillation occurred in 14 patients in Group E and 12 patients in Group V, with no statistical difference, and was managed in all patients with synchronised cardioversion. Three patients in each group required reintubation 24 hours after extubation: three patients in Group E and two in Group V were reintubated due to unstable haemodynamics and required reoperation for mediastinal bleeding, and one patient in Group V required reintubation three days after operation due to chest infection and respiratory insufficiency. Seven patients in Group V and six in Group E required postoperative inotropic support. There was no perioperative death in either group.
Side effects related to epidural analgesia were similar in both groups. Twenty percent of patients in Group E and 23% of patients in Group V complained of paraesthesia in dermatomes T1 and C8; in these patients, the paraesthesia subsided on lowering the epidural infusion rate. Two patients in Group E and one in Group V complained of pruritus, and the study solution was changed to plain bupivacaine. All the patients were awake during the study period. No patient in either group showed neurological signs or symptoms of an epidural haematoma. There was no difference between the two groups in the incidence of nausea and vomiting, and no patient required supplementary analgesia.
DISCUSSION
To date, there has been no study comparing patients undergoing fast-track extubation with patients undergoing elective postoperative ventilation after on pump cardiac surgery. In this study, we applied TEA in both groups of patients and found no difference in respiratory and haemodynamic parameters or in complications. However, patients extubated in the OR were awake immediately after surgery and had fewer chest infections than the ventilated group. TEA provided high-quality perioperative analgesia in all patients. Our study corroborates the findings of other studies.[2-5]
Edgerton and others[7] observed that patients immediately extubated after off pump coronary artery bypass grafting had a reduced incidence of atrial fibrillation, a shorter length of stay and reduced mortality. However, a few authors have argued against early extubation immediately after cardiac surgery, on the view that immediate extubation activates the sympathetic nervous system, thereby causing haemodynamic instability and myocardial ischaemia.[8] Borracci and others[9] suggested that immediate extubation after on pump and off pump cardiac surgery should be avoided in patients with heart failure, left ventricular dysfunction, prolonged cross-clamping time, pacemaker usage, haemodynamic compromise or difficult cardiopulmonary bypass weaning. The chances of requiring reintubation are increased if the patient is haemodynamically unstable, cold or hypovolaemic, or has received considerable opioid medication.[2] In the present study, patients with a core body temperature <35°C, haemodynamic instability, a requirement for pacing in the intraoperative period or a prolonged CPB weaning time were not considered for immediate extubation after surgery and were excluded.
Prospective studies by Royse and others[2] and Hemmerling and others[5] using TEA or opioid-based analgesia in patients undergoing on pump or off pump cardiac surgery showed that immediate extubation in the OR was possible with both techniques of pain control when normothermia was maintained. However, Hemmerling and others[5] held the opinion that, as there are more adverse effects such as confusion, sedation, intestinal ileus and respiratory depression with morphine-based patient-controlled analgesia, TEA should be the preferred modality of pain control, owing to better postoperative analgesia with a reduced incidence of pulmonary complications, thereby improving postoperative lung function[10] and reducing perioperative arrhythmias.[11] Moreover, TEA can produce cardiac sympatholysis, which increases coronary blood flow, decreases myocardial oxygen consumption[12] and reduces the incidence of postoperative myocardial infarction linked to tight stenosis and an imbalance in the ratio of oxygen delivery to utilisation (DO2/VO2).[13] TEA also provides good protection from the stress response and ensures haemodynamic stability.[14]
With the use of TEA as perioperative analgesia and by maintaining body temperature above 36°C in the OR, our results demonstrated no statistically significant increase in the rate of reintubation, haemodynamic instability, arrhythmias, myocardial infarction or respiratory complications compared to patients ventilated for 8-12 hours postoperatively after cardiac surgery.
There is controversy regarding the use of TEA with full anticoagulation during CPB, due to the risk of epidural haematoma, which is more likely if the epidural catheter is inserted or removed while the patient is anticoagulated.[15] We followed the guidelines for neuraxial anaesthesia in patients receiving antithrombotic therapy adopted from Horlocker and others.[6] In our study, we inserted the epidural catheter at least 12 hours before operation and removed the catheter after the procedure once coagulation had returned to normal (INR <1.5). This practice has also been followed by other investigators.[15] We did not observe any neurological complications due to TEA; several other studies using TEA for cardiac surgery support this observation.[2 3]
Epidural analgesia is therefore an important adjunct to immediate extubation after open heart surgery, because analgesia is optimised, patients remain awake and are not depressed by large doses of narcotics, and early mobilisation with restoration of normal physiological function is possible.
Our limitation was that we could not use transoesophageal echocardiography during CPB for direct visualisation of left ventricular filling, detection of abnormal ventricular function, haemodynamic instability and new regional wall motion abnormalities. Moreover, as we studied two groups of patients, one on postoperative ventilation and thereby sedated, and another group conscious and awake after surgery, we could not use visual analogue scale (VAS) for assessment and comparison of pain.
Compared to conventional techniques, the potential benefits of early extubation in the OR after cardiac surgery were reduced airway and lung trauma, improved cardiac output and renal perfusion with spontaneous respiration, and reduced stress and discomfort from endotracheal tube suctioning and weaning from ventilation. There was no need for sedative drugs, ventilator disposables could be avoided and patients were transferred early from the post anaesthesia care unit (PACU) to a lower dependency ward. Moreover, fewer nursing staff were required to manage each patient. Cost savings were therefore also possible.
CONCLUSION
Thus, immediate extubation after on pump cardiac surgery may be safely achieved with optimisation of perioperative analgesia with TEA and by maintaining normothermia, with several advantages.
PMC3016573 (PMID 21224970)
INTRODUCTION
The aims of this study were to examine the accuracy and precision of pulse oximetry in cyanotic heart disease patients in the early post-operative period: (1) when the sensor is placed at five different locations (viz. finger, palm, toe, sole and ear); (2) at low saturation states (SaO2 <90%); and (3) to identify the best sensor location.
METHODS
After ethics committee approval, 50 children aged 1 month to 7 years, in the early post-operative period after various corrective surgeries for cyanotic congenital heart disease, were selected for this observational study.
Exclusion criteria
Core body temperature <35°C; children with diseases or conditions known to affect pulse oximeter accuracy (e.g. sickle cell disease, congenital methaemoglobinaemia); lactate level >2 mmol/L; high inotropic support (dopamine/dobutamine ≥5 μg/kg/min).
All the patients were on ventilatory support with an FiO2 of 0.6 and an invasive arterial line in place. Measurements were taken within 3 h of arrival in the intensive care unit (ICU) after corrective surgery.
The pulse oximeter used was a Philips M1020A pulse oximetry module (Philips Medical Systems, Eindhoven, Netherlands). An appropriately sized pulse oximeter sensor (Philips M1020A/M1192A/M1194A/M1195A) was applied to the finger and palm of the same upper limb, to the toe and sole of the same lower limb and to the ear lobe. At the same time, an arterial blood sample was drawn. A single SpO2 reading was taken at each sensor location after the pulse oximeter had achieved an optimal plethysmographic signal and a heart rate matching that of the electrocardiogram monitor (CMS; Philips Medical Systems). The sensors were covered with black carbon paper, the overhead light was dimmed and ambient light was reduced by screens to prevent interference. Simultaneously, an arterial blood gas (ABG) analysis was done for co-oximetric measurement of SaO2 using an ABL 800 FLEX ABG machine (Radiometer America Inc., Ohio, USA). Core temperature, arterial pressure, heart rate and the ABG value of haemoglobin were recorded simultaneously.
The study population was further subdivided into Group A (SaO 2 <90%) and Group B (SaO 2 ≥90%) to analyse changes in accuracy and precision at low oxygen saturation.
Comparison of the pulse oximetry reading (SpO2) with the arterial oxygen saturation (SaO2) is reported in terms of bias and precision, as described by Bland and Altman.[1] Bias is the difference between the SpO2 at a specific body location (finger, palm, toe, sole or ear) and the SaO2 (i.e., SpO2 - SaO2), and precision is ±1 standard deviation of the difference.[1] A low bias at a sensor site implies that the pulse oximeter sensor gives a more accurate reading at that site, and vice versa; precision implies the reproducibility of the measurement [Figure 1]. SPSS version 12.0 software was used for statistical analysis. The one-way analysis of variance (ANOVA) test was used to compare the bias of two sensor sites statistically. The mean bias values of the two groups (normoxaemic and hypoxaemic) were compared using the unpaired t-test. P<0.05 was considered significant.
RESULTS
A total of 50 SaO 2 measurements were obtained in 50 patients. The mean SaO 2 was 96.1%±4.6%, with a range of 83–99.8%. Eight patients had SaO 2 measurements <90% (Group A). The preoperative diagnosis of the study population is shown in Table 1 .
Ear probes consistently showed the highest SpO2 readings, alone or together with other sensor sites, in 45 patients.
When the bias (i.e., SpO2 - SaO2) was calculated for all the patients, it was observed that the bias was lowest with the sole sensor (-0.088) and highest with the ear sensor (1.572) (P=0.0049, one-way ANOVA). Thus, in terms of accuracy (i.e., the inverse of bias), the sole sensor was the most accurate among the five sensor locations.
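The bias and precision computations described in the Methods are easy to express in code. The sketch below is a hypothetical illustration: the arrays are invented placeholders rather than the study's measurements, and NumPy/SciPy are our choices for the one-way ANOVA and unpaired t-test used in the analysis.

# Per-site bias (SpO2 - SaO2) and precision (SD of the difference),
# with the two significance tests named in the Methods, on invented data.
import numpy as np
from scipy import stats

sao2 = np.array([96.0, 88.0, 97.5, 85.0, 99.0])   # co-oximetry SaO2 (%)
spo2 = {                                          # simultaneous SpO2 (%)
    "sole": np.array([95.8, 88.1, 97.4, 85.2, 98.9]),
    "ear":  np.array([97.6, 90.0, 99.0, 88.0, 99.8]),
}

diffs = {site: s - sao2 for site, s in spo2.items()}
for site, d in diffs.items():
    print(site, "bias:", round(d.mean(), 3), "precision:", round(d.std(ddof=1), 3))

# One-way ANOVA comparing bias across sensor sites.
print(stats.f_oneway(*diffs.values()))

# Unpaired t-test of per-patient mean bias, hypoxaemic vs normoxaemic group.
group_a = np.array([2.1, 3.0, 2.9])  # invented Group A (SaO2 < 90%) mean biases
group_b = np.array([0.4, 0.7, 0.8])  # invented Group B (SaO2 >= 90%) mean biases
print(stats.ttest_ind(group_a, group_b))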
In Group A also, the sole sensor was found to have the least bias [Figure 2] and hence the greatest accuracy; the statistical comparison between the sole and the ear sensors was highly significant (P<0.001, one-way ANOVA).
The same trend was maintained by the sole sensor in Group B [Figure 3], where the statistical comparison between the sole and the ear sensors was again significant (P<0.0001, one-way ANOVA).
It is noteworthy that in Group A, the mean SpO2 (the mean of all five SpO2 readings) was always higher than the SaO2 [Table 2]. Thus, at low saturation states, SpO2 overestimates SaO2.
When the SaO2 value is deducted from a patient's mean SpO2 value, we obtain the bias for that particular patient, and averaging the bias values within a group gives the "mean bias" for the group. The mean group bias in our study was found to be 0.631 in Group B and 2.74 in Group A [Figure 4] (P=0.0003, unpaired t-test).
DISCUSSION
Pulse oximetry, in the words of Severinghaus and Astrup, is "arguably the most significant technologic advance ever made in monitoring the well being of patients during anaesthesia, recovery and critical care".[2] Pulse oximetry estimates arterial oxygen saturation by measuring the absorption of light at two wavelengths (approximately 660 nm and 940 nm) in human tissue beds. The amount of light absorption varies with the amount of blood in the tissue bed and the relative amounts of oxygenated and deoxygenated haemoglobin.[3] The accuracy of commercially available oximeters differs widely, probably due to algorithm differences in signal processing.[4]
The aim of our study is to evaluate the accuracy and precision of pulse oximeter at five different sensor locations and to report any significant decline with hypoxaemia (SaO 2 <90%). Various studies reported a decline in accuracy and precision as the SaO 2 decreases below various cut-off values (<90% or <80%)[ 5 6 ] The Philips M1020A module was selected for clinical use in our patients because of the accuracy and precision reported by Carter and colleagues.[ 7 ] They showed that the Philips M1020A was not affected by foetal haemoglobin (HbF). Different sensors also affect the accuracy of SpO 2 measurements. Clayton and others found that the overall rankings were much better for the finger sensors in patients with poor peripheral perfusion.[ 8 ] Bell and others,[ 9 ] while comparing the traditional band-wrap disposable pulse oximeter sensor with the reusable clip-type sensor, found that the type of sensor selected has little effect on the accuracy of pulse oximetry in children.
Ambient light, skin pigmentation, dyshaemoglobinaemia, low peripheral perfusion states and motion artefact can affect the performance of pulse oximeters.[ 10 11 ] The interference of ambient light can be overcome by simply wrapping the oximeter sensor in opaque material. Villanueva et al .[ 12 ] found that age, weight, core and skin temperature, haemoglobin concentration, pulse pressure and percent flow have little effect on the accuracy of pulse oximetry in children. At low levels of saturation (SaO 2 below 80%), pulse oximetry is not as accurate as at higher saturations and overestimates the true value.[ 6 ] Although the exact mechanism is not known, various investigators have found that SpO 2 overestimates SaO 2 in polycythemia and underestimates SaO 2 with anaemia.[ 13 14 ] This might explain the overestimation of SaO 2 in the hypoxaemic group observed in the aforementioned studies as well as in our study. Sedaghat-Yazdi and others,[ 15 ] while studying the effect of sensor location on pulse oximeter accuracy and precision in cyanotic children, found that there are no significant differences in bias and precision between finger and toe sensors regardless of SaO 2 values. They also found that the sensor locations with the worst accuracy and precision were the sole and palm when SaO 2 was <90%. This is contrary to the results of our study. Although the reason for our finding cannot be explained clearly, factors like increased viscosity of blood and vascular and tissue changes due to clubbing in fingers and toes might play some as yet unproven role. Better tissue perfusion in the sole as compared with more peripheral sites like the toe, finger and ear lobes might cause the sole sensor to perform better in terms of accuracy. In healthy volunteers, oximeters commonly have a mean difference (bias) of <2% and a standard deviation (precision) of <3% when SaO 2 is 90% or above.[ 16 17 ] Comparable results have also been obtained in critically ill patients with good arterial perfusion.[ 18 ] However, the accuracy deteriorates when SaO 2 falls to 80% or less (bias varies from –15.0 to 13.1 while the precision ranges from 1.0 to 16.0).[ 16 ] | CONCLUSION
Cyanotic heart disease patients pose a unique dilemma in terms of the reliability and precision of pulse oximetry readings and the determination of the best location of the sensor, especially in infants. An understanding of the bias and precision of pulse oximetry at various sensor sites would go a long way in the effective management of patients with cyanotic heart disease in the perioperative period. We strongly recommend that clinicians verify the measurements with a co-oximeter and evaluate the reliability indices of the pulse oximetry sensor at the particular body site used, to avoid any unacceptable over- or underestimation of the SaO 2 . This becomes even more relevant in hypoxaemic patients with low SaO 2 readings, as the margin of safety is very small. We found the sole to be the most accurate site of sensor location in paediatric patients with cyanotic heart disease. We could also re-establish the finding that at low saturation states, pulse oximetry accuracy deteriorates and tends to overestimate the SaO 2 . In terms of reproducibility, the best sensor site could not be determined definitely and consistently in our study.
||
PMC3016574 | 21224971 | INTRODUCTION
Ventilator-associated pneumonia (VAP) refers to bacterial pneumonia that develops in patients who have been mechanically ventilated for more than 48 h.[ 1 ] Its incidence ranges from 6 to 52% and can reach 76% in some specific settings.[ 2 ] Hospital-acquired pneumonia (HAP) is pneumonia occurring 48 h or more after admission that did not appear to be incubating at the time of admission. The presence of HAP increases hospital stay by an average of 7–9 days per patient[ 3 4 ] and also imposes an extra financial burden on the hospital. The risk of VAP is highest early in the course of hospital stay, and is estimated to be 3%/day during the first 5 days of ventilation, 2%/day during days 5–10 of ventilation and 1%/day after this.[ 5 ]
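Those per-day figures can be turned into a cumulative risk estimate; the sketch below does so under the simplifying assumption that each ventilated day carries an independent hazard, which is an illustration rather than a method used in this study.

```python
# Cumulative VAP risk from the quoted daily hazards (3%/day for days 1-5,
# 2%/day for days 6-10, 1%/day thereafter), assuming independent days.
def daily_vap_risk(day):
    if day <= 5:
        return 0.03
    if day <= 10:
        return 0.02
    return 0.01

def cumulative_vap_risk(days_ventilated):
    p_free = 1.0                        # probability of staying VAP-free
    for day in range(1, days_ventilated + 1):
        p_free *= 1.0 - daily_vap_risk(day)
    return 1.0 - p_free

print(f"Risk after 14 ventilated days: {cumulative_vap_risk(14):.1%}")
```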
Lack of a gold standard for diagnosis is a major contributor to the poor outcome of VAP. Purulent sputum, the basis of clinical diagnosis, may simply follow intubation or leakage of oropharyngeal secretions around the airway, and chest X-ray changes suggestive of VAP may also be features of pulmonary oedema, pulmonary infarction, atelectasis or acute respiratory distress syndrome. Fever and leukocytosis are non-specific and can be caused by any condition that releases cytokines. Although microbiology helps in diagnosis, it is not devoid of pitfalls. In fact, it has been shown that colonization of the airway is common, and the presence of pathogens in tracheal secretions in the absence of clinical findings does not suggest VAP.[ 6 7 ] The Clinical Pulmonary Infection Scoring (CPIS) system, originally proposed by Pugin and others, helps in diagnosing VAP with better sensitivity (72%) and specificity (80%). This study aims to critically review the incidence and outcome, identify various risk factors and conclude specific measures that should be undertaken to prevent VAP. | METHODS
The study was conducted over a period of 1.5 years, extending from July 2008 to December 2009, in an intensive care unit (ICU) of a tertiary care centre. A total of 100 patients who were kept on mechanical ventilation were randomly selected. Cases included were patients of both sexes, aged >15 years, who were kept on mechanical ventilation for more than 48 h. Patients who died or developed pneumonia within 48 h, those who had pneumonia at the time of admission and patients with ARDS (Acute Respiratory Distress Syndrome) were excluded from the study. Most of the patients put on ventilator support had been treated with antibiotics elsewhere, either in the indoor ward or in other health care centres, and this could not be traced. A questionnaire was prepared, and each patient selected for the study was screened and monitored according to it. Age, sex, date of admission to the ICU, date of initiating mechanical ventilation and mode of access to the patient's airway, i.e. orotracheal or tracheostomy, were recorded. The indication for mechanical ventilation was noted. In each patient, the ventilator mode and settings were recorded, and any change in settings was recorded daily. Patients' vitals, general and physical examination, oxygen saturation and position were monitored regularly. During the initial stage of ventilation, patients were adequately sedated. All necessary measures were taken for the prevention of hospital-acquired infections. A battery of routine investigations was performed, and special investigations, like culture of the tracheal tube, blood and urine, and others like serum cholinesterase levels, were performed when needed. Sputum was collected from the tip of the suction catheter and transported to the laboratory in a sterile tube. Patients were monitored from the date of inclusion in the study to the final outcome in the ICU. VAP was diagnosed on clinical grounds based on the modified CPIS system [ Table 1 ] originally developed by Pugin and others,[ 1 ] giving 0–2 points each for fever, leukocyte count, oxygenation status, quantity and purulence of tracheal secretions, type of radiographic abnormality and result of sputum culture and Gram stain. The VAP group was classified into two types, early-onset (within 48–96 h) and late-onset (>96 h). Once clinical suspicion was established, empirical antibiotic therapy was initiated based on the guidelines prescribed by the American Thoracic Society. Patients were routinely screened by arterial blood gas (ABG) analysis every 12 h, and appropriate steps were taken to correct any change.
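As a concrete illustration of the scoring arithmetic, here is a minimal sketch of the modified CPIS tally: six variables, each graded 0–2 for a maximum of 12, with a total of ≥6 raising the clinical suspicion of VAP. The rules that map clinical findings onto 0/1/2 are those of Table 1, so the component grades below are assumed inputs, not real patient data.

```python
# Minimal sketch of the modified CPIS tally: six components, each 0-2
# points (maximum 12); a total of >= 6 raises clinical suspicion of VAP.
# The mapping of findings to 0/1/2 follows Table 1 and is assumed done.
CPIS_COMPONENTS = (
    "temperature", "leukocyte_count", "oxygenation",
    "tracheal_secretions", "chest_radiograph", "sputum_culture_gram_stain",
)

def cpis_total(scores):
    assert set(scores) == set(CPIS_COMPONENTS), "all six components required"
    assert all(0 <= s <= 2 for s in scores.values()), "each component scores 0-2"
    return sum(scores.values())

grades = {c: 1 for c in CPIS_COMPONENTS}      # hypothetical patient grading
grades["oxygenation"] = 2
total = cpis_total(grades)
print(total, "-> suspect VAP" if total >= 6 else "-> below threshold")
```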
Statistical analysis
The study cohort was classified into two groups, “early-onset VAP (onset after 48 h but within 96 h)” and “late-onset VAP (onset after 96 h)”. With a sample size of 70, calculated based on the results of a previous study,[ 8 ] the power of the study was set at 80%. After evaluation, the data were subjected to univariate analysis using the chi-square test. The level of significance was set at P <0.05.
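For illustration, the univariate chi-square analysis of a single risk factor amounts to a 2×2 contingency test; the counts below are hypothetical, not those of this study.

```python
# Sketch of a univariate chi-square test on a 2x2 contingency table
# (risk factor present/absent vs. VAP yes/no); the counts are hypothetical.
from scipy.stats import chi2_contingency

table = [[20, 10],   # risk factor present: [VAP, no VAP]
         [17, 53]]   # risk factor absent:  [VAP, no VAP]
chi2, p, dof, expected = chi2_contingency(table)
verdict = "significant" if p < 0.05 else "not significant"
print(f"chi2={chi2:.2f}, df={dof}, p={p:.4f} ({verdict} at P<0.05)")
```

| RESULTS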
The study cohort comprised 100 patients with poisoning, neurological disorders, sepsis and other conditions. The mean age of the patients was 34 years, with a predominance of males. Of the 100 patients, 37 developed VAP during the ICU stay. The mean duration of mechanical ventilation was 11 days for the non-VAP group and 19 days for the VAP group. Patients requiring prolonged ventilator support (>15 days) had a significantly higher incidence of VAP ( P =0.001). Supine positioning and a stuporous or comatose state were found to be statistically significant risk factors for VAP ( P =0.003 and 0.0023, respectively). The PaO 2 /FiO 2 ratio was analysed in VAP patients and was found to be <240 mmHg in 86% of the cases; in the remaining 14%, the ratio was higher (>240 mmHg). Of the 37 patients who developed VAP, 10 developed the early-onset type (27.02%) and 27 developed the late-onset type (72.97%). The overall mortality was 46%, while mortality in the VAP patients was 54%. The mortality of the early-onset type was 20%; in the late-onset type, it was 66.67%. Late-onset VAP had a significantly higher association with mortality than early-onset pneumonia ( P =0.0234). The order of prevalence of organisms in our study was Pseudomonas (43.2%) and Klebsiella (18.91%), followed by MRSA, E. coli , Acinetobacter, MSSA and S. pneumoniae . | DISCUSSION
In our set up, males predominated (62%). Although the incidence of VAP was also higher in males, this was statistically not significant ( P =0.2086) [ Table 2 ]. The mean age in our study was 34 years. The young population in our set up reflects the number of poisoning cases that predominated in our study.
The incidence of VAP in our setting was 37%. In this era of advanced diagnosis and early management of possible complications, the incidence tends to be lower: in recent studies,[ 9 10 ] the reported incidence is very low, ranging from 15 to 30%. The high incidence in our study may be due to the lower number of cases (i.e., 100) and the lack of adequate nursing staff (which should ideally be 1:1, compared with 4:1 in our institute), which may have adversely affected the quality of care given to the patients. Another factor in our study was the higher number of cases of organophosphorus poisoning requiring prolonged ventilation, which proved to be a risk factor with a statistically significant relation ( P =0.001; Table 3 ) to the incidence of VAP and may have influenced it. The mean duration of ventilation in our study was 11 days for non-VAP patients, whereas it was almost 19 days for VAP patients, which closely matches other studies.[ 11 ] Our study showed that the duration of mechanical ventilation is an important risk factor for VAP, similar to other studies[ 12 ] in which the mean duration of ventilation was around 10 days and the incidence of VAP was 9.3%.
The mean duration of ventilation can effectively be reduced by administering a proper weaning protocol. It has been estimated that 42% of the time a medical patient spends on the mechanical ventilator is during the weaning process.[ 13 ] Among the various methods of weaning, the spontaneous breathing trial has proved very effective compared with intermittent mandatory ventilation (IMV), because IMV promotes respiratory fatigue.[ 14 15 ] A once-daily trial of spontaneous breathing and a prolonged period of rest may be the most effective method of weaning to recondition respiratory muscles that may have been weakened during mechanical ventilation.[ 16 17 ]
Reintubation resulted in a very high incidence of VAP[ 18 ] and proved to be an independent risk factor in various studies.[ 19 ] This may be due to impaired reflexes after prolonged intubation or to an altered level of consciousness, increasing the risk of aspiration. In our study, only five patients were reintubated, but four of them developed VAP; this number is too small to compare with other studies. A recent case–control study of 135 patients following heart surgery also found reintubation to be a major risk factor, as VAP occurred in 92% of the reintubated patients versus 12% of the control subjects.[ 20 ]
It was noted that patients with lungs unaffected at admission, as in snake bite and meningitis, had a considerably lower incidence of VAP. The incidence of VAP was significantly higher with supine positioning than with the semi-recumbent position ( P =0.003) [ Table 4 ], probably because the supine position facilitates aspiration, which may be decreased by semi-recumbent positioning; this matches the outcome of other studies in which position was considered as a risk factor.[ 21 – 23 ] In fact, it was shown using radioactively labelled enteral feeding that cumulative endotracheal counts were higher when patients were placed in the completely supine position (0°) as compared with a semi-recumbent position (45°).[ 21 22 ] Infection in patients in the supine position was strongly associated with the simultaneous administration of enteral nutrition. Thus, intubated patients should be managed in a semi-recumbent position, particularly during feeding.
Level of consciousness has a significant impact on the incidence of VAP. In our study, the incidence of VAP in stuporous (62.5%) and comatose (50%) patients was significantly higher ( P =0.0023) than that in conscious (35.75%) and drowsy (18.42%) patients [ Table 5 ]. This may be due to the higher chance of aspiration in comatose patients. An early and planned tracheostomy was found to decrease VAP significantly, but could not be studied as it will take time to be accepted by one and all. When used for stress ulcer prophylaxis, Sucralfate appears to have a small protective effect against VAP because it does not raise the gastric pH like H 2 receptor antagonists.[ 11 ] Therefore, whenever feasible, Sucralfate should be used instead of H 2 receptor antagonists.
The PaO 2 /FiO 2 ratio was assessed during the course of ventilatory support, and it was observed that the ratio dropped at least 12–24 h before the onset of the clinicoradiologic picture suggestive of VAP. Thus, a decline in the PaO 2 /FiO 2 ratio was found to be an early indicator of the onset of VAP. Although fever was found in almost 100% of the patients, it was of no diagnostic significance because most of the patients were poisoning cases that required atropine in due course, and fever is in any case a non-specific sign.
The most common organism associated with VAP was Pseudomonas (43.24%), followed by Klebsiella (18.91%). The overall mortality rate was also high in the Pseudomonas group (62.5%). In other studies,[ 24 25 ] isolation of Pseudomonas ranges from 15 to 25%. Susceptibility testing could not be performed in all patients due to a lack of clinical microbiologic support, as it is not done routinely and sending samples outside is not allowed by the hospital authority except in special cases.
Early-onset VAP in our study was found in 27.02%, while in various studies it was around 40%.[ 26 ] The low incidence in our study may be due to antibiotic use before admission to the ICU. Studies[ 27 ] have shown that previous antibiotic use decreases early-onset VAP but markedly increases multidrug-resistant (MDR) pathogens, which is also reflected in our study. Our study also demonstrated that early-onset VAP had a good prognosis as compared with the late-onset type in terms of mortality, which was statistically significant ( P =0.0234) [ Table 6 ]. The de-escalation strategy,[ 28 ] fully endorsed by the American Thoracic Society, which means initiating a broad-spectrum antibiotic and changing to a narrow spectrum after the sensitivity results are available, will probably reduce inappropriate antibiotic use and, subsequently, drug-resistant pathogens. Although it is not performed in our ICU, invasive bronchoscopic sample collection and quantitative sample culture reduce inappropriate antibiotic use.[ 29 ]
The mortality rate in our study was 54.05% in the VAP group as compared with 41.2% in the non-VAP group. Although mortality was slightly higher with VAP, the difference was not statistically significant. Table 6 implies that VAP as such does not increase mortality in ICU patients.
Hand washing is widely recognized as an important but underused measure to prevent nosocomial infections.[ 30 ] According to the 2004 CDC (Center for Disease Control) guidelines, hands should be washed before and after patient contact and also in between patient contact. Chlorhexidine has been shown to be effective in the control of ventilator-circuit colonization and pneumonia caused by antibiotic-resistant bacteria.[ 31 ] Oropharyngeal decontamination with Chlorhexidine solution has also been shown to reduce the occurrence of VAP in patients undergoing cardiac surgery.[ 32 ] | CONCLUSION
We arrive at the following conclusions:
Incidence is directly proportional to the duration of mechanical ventilation, and re-intubation is a strong risk factor for the development of VAP. Therefore, the duration of ventilation has to be reduced to avoid the morbidity and mortality associated with mechanical ventilation, which can be achieved by administering a proper weaning protocol and titrating sedation regimens as per the needs of the patients. Nasogastric feeding, although necessary for critically ill patients, should be given with the patient in a semi-recumbent position with the head end elevated to 45°, because the supine position promotes aspiration. A decrease in the PaO 2 /FiO 2 ratio is an early predictor of VAP. Pseudomonas is the most common organism in our institution. Late-onset VAP is associated with a poorer prognosis than the early-onset variety. Inappropriate antibiotic use prior to ventilatory support decreases the early-onset variety but predisposes to a high incidence of MDR pathogens.
||
PMC3016575 | 21224972 | INTRODUCTION
Children have been among the earliest patrons of anaesthesiology, from its earliest clinical applications of surgical anaesthesia.[ 1 ] An endotracheal tube (ET) is always considered the gold standard[ 2 3 ] device to maintain an airway because of its inherent ability to provide positive pressure ventilation (PPV) and to prevent gastric inflation and pulmonary aspiration.[ 3 ] Haemodynamic responses, situations of failed intubation and damage to the oropharyngeal structures[ 3 ] during intubation are also serious concerns. The first supraglottic airway device, the Laryngeal Mask Airway (LMA), was designed in 1981 by Dr. Archie Brain.[ 4 ] The paediatric Classic LMA forms a less effective glottic seal,[ 5 ] with the subsequent risk of gastric distension from leakage of gas into the stomach and of regurgitation, which can lead to pulmonary aspiration.
The ProSeal LMA (PLMA) was introduced by Dr. Archie Brain in 2000.[ 4 ] The ProSeal LMA has a gastric drainage tube placed lateral to the main airway tube. The gastric drainage tube forms a channel for regurgitated gastric contents[ 5 ] and prevents gastric insufflation and pulmonary aspiration. A gastric tube can be passed through the drain tube, and the drain tube can detect malposition[ 5 ] of the PLMA. The paediatric PLMA lacks the dorsal cuff.[ 5 ] The paediatric ProSeal LMA is available in sizes 1, 1.5, 2 and 2.5.
In the present study we compared the PLMA (size 2) and ET tube with respect to number of attempts for the placement of devices, haemodynamic responses during placement and perioperative respiratory complications. | METHODS
After approval from the institutional ethical committee, written informed consent was taken from all the parents. Sixty children aged 2-8 years of either sex, weighing 10-20 kg, of ASA physical status Grades I and II, scheduled for elective ophthalmological and lower abdominal surgical procedures of 30-60 min duration, were undertaken for the study. All the patients were randomly divided into two groups by a draw method (simple randomization): PLMA (group A) or ET (group B). Patients lacking written informed consent or with an anticipated difficult airway, hiatus hernia, gastro-esophageal reflux disease, cardiorespiratory disease, upper respiratory tract infection (URI), history of convulsions or a full stomach were excluded from the study. A thorough preoperative assessment was done a day before surgery.
All patients were premedicated with i.v. glycopyrrolate 0.004 mg/kg, i.v. fentanyl 2 μ g/kg and i.v. ondansetron 0.1 mg/kg 5 min prior to induction of anaesthesia. Standard monitoring was applied, which included a precordial stethoscope, pulse oximeter, capnography, electrocardiography and automated noninvasive blood pressure (NIBP). Baseline vital parameters were recorded. After preoxygenation, anaesthesia was induced with i.v. propofol 2 mg/kg mixed with i.v. lignocaine 0.5 mg/kg. Atracurium 0.5 mg/kg was used as the neuromuscular blocking agent (NMBA), and intermittent boluses of i.v. atracurium 0.1 mg/kg were given as required.
PLMA size 2 was selected for group A patients; the cuff was fully deflated and the posterior surface of the PLMA was well lubricated with 2% lignocaine jelly. The child's head was maintained in the sniffing position. The PLMA was inserted through the oral cavity using the index finger technique. The cuff was inflated with 7-10 ml of air as recommended by the manufacturer. The number of insertion attempts was recorded. Three attempts were allowed for placement before the device was considered a failure and replaced with an ET tube. Removal of the device from the mouth was termed a failed attempt. After obtaining an effective airway (defined as normal thoracoabdominal movements, bilaterally equal audible breath sounds on auscultation and a regular waveform on capnography), the PLMA was fixed by taping over the chin. The PLMA position was confirmed by the gel displacement test, bilateral chest movements and square wave capnography. A number 10 gastric tube was inserted through the drain tube. Two attempts were allowed before gastric tube insertion was considered a failure and the PLMA was repositioned. In group B patients, endotracheal intubation was done using appropriate-sized cuffed or uncuffed tubes. Uncuffed ETs were preferred for children younger than 6 years.
All patients were maintained on nitrous oxide 66% in oxygen and halothane 0.8-1% and manually ventilated using Jackson Rees' modification of Ayre's T-piece. EtCO 2 was maintained between 35 and 45 mmHg. Immediately after placement of the PLMA or ET intubation, vital parameters were recorded. Haemodynamic parameters were recorded at 5 min and 10 min intervals after placement of the PLMA or intubation. At the end of surgery, anaesthetic agents were discontinued, patients were kept on 100% oxygen, and i.v. glycopyrrolate 0.004 mg/kg followed by i.v. neostigmine 0.05 mg/kg was given for adequate reversal of residual neuromuscular blockade. After full deflation of the cuff, the PLMA was removed in the spontaneously breathing patient. Similarly, extubation was done after thorough oral suction.
During emergence, the occurrence of any complications like coughing, bronchospasm and laryngospasm was noted. After removal of the airway devices, blood staining of the ET tube or of the posterior aspect of the PLMA cuff, tongue-lip-dental trauma and hoarseness were recorded.
The patients were monitored throughout the perioperative period, including the stay in the post-anaesthesia care unit, and were followed for the next 24 h for any sore throat and hoarseness.
For statistical analysis, the data were analysed using SPSS software (version 6.0). Student's t -test was applied. A P -value <0.05 was considered significant. | RESULTS
The patients' demographic profiles were comparable between the two groups [ Figure 1 ], with a male predominance in both.
The success rate for placing the PLMA at the first attempt was 83.33%, and only 16.67% of patients required a second attempt. The success rate for intubation was 96.67% at the first attempt and 3.33% at the second attempt [ Table 1 ].
Haemodynamic responses were lower for placement of the PLMA than for ET intubation. The mean pulse rate (bpm) increased from a baseline value of 103.70±11.56 to 109.50±12.41 after placement of the PLMA and from 102.46±11.46 to 122.83±8.30 after endotracheal intubation. The increase in pulse rate was statistically significant ( P <0.05) in both groups. The mean pulse rate returned to the baseline value 5 min after placement of the PLMA (Group A). The increase in pulse rate remained statistically significant ( P <0.05) even 10 min after endotracheal intubation (Group B) [ Table 2 ]. The percentage increase in pulse rate was higher after ET intubation than after placement of the PLMA ( P <0.05).
The increase in mean SBP from baseline after insertion of the PLMA or ET was statistically insignificant ( P >0.05) in both group A and group B. There was a statistically significant ( P <0.05) decrease in mean SBP to 97.86±8.46 mmHg from the baseline value of 105.86±9.78 mmHg 5 min after placement of the PLMA (Group A). The mean SBP also decreased to 98.26±11.68 mmHg from the baseline value of 103.60±12.46 mmHg 5 min after ET intubation (Group B), but this was not significant ( P >0.05).
The increase in mean DBP and MBP was statistically insignificant ( P >0.05) in both group A and group B. There was a statistically significant ( P <0.05) decrease in mean DBP and MBP 5 min after insertion of the respective devices in both groups [ Table 3 ].
There was no significant difference in mean SpO 2 (%) and EtCO 2 level recorded at different time intervals between the two groups ( P >0.05).
The incidence of cough after extubation in group B (30%) was significantly higher than after removal of the PLMA in group A (6.6%) ( P <0.05). Bronchospasm was seen in two (6.6%) patients after extubation in group B but in none of the patients in group A after removal of the PLMA. After removal of the PLMA (Group A), blood on the posterior surface of the PLMA was noted in six (20%) cases, whereas blood on the ET tube was observed in only two (6.6%) cases after extubation (Group B). There was no incidence of aspiration in either group. There was no incidence of hoarseness or sore throat after removal of the PLMA or ETT, or postoperatively even after 24 h [ Table 4 ]. | DISCUSSION
The LMA TM and other supraglottic airways have radically changed paediatric anaesthesia practice and have become a key component of airway management in children.[ 1 ] The Classic LMA has limitations (air leak, gastric distension and aspiration). The ProSeal LMA differs from the CLMA in having a drain tube, an integral bite block[ 1 ] and a different cuff design, with increased depth of the bowl to improve the seal with the larynx.[ 4 ]
In our study, we found that endotracheal intubation was accomplished at the first attempt in 96.67% of patients, whereas the PLMA was inserted at the first attempt in 83.33%. Sinha et al . and Misra et al . reported that all patients were intubated at the first attempt while the PLMA was placed at the first attempt in 88% of patients, in paediatric and adult laparoscopic surgeries, respectively. Dave et al . reported a first-attempt PLMA placement success rate of 93.33%. Lim et al ., in gynaecological laparoscopy, noted that the numbers of attempts for successful insertion were similar for the PLMA and the ET tube (86% and 85%, respectively). Misra et al . suggested that laryngoscopy and tracheal intubation are the main forte of a successful anaesthesiologist; hence, the 100% first-attempt success in their study, where difficult airways were excluded, was along expected lines. The morphology of the PLMA, which differs from the CLMA, and the semirigid distal end of its drain tube after deflation[ 1 ] may contribute to difficult insertion.
After placement of the PLMA, the patients were haemodynamically more stable than after ET intubation. The PLMA position was confirmed by the gel displacement test, bilateral chest movements and square wave capnography.[ 6 ] Haemodynamic responses were observed for only a short period of time after PLMA insertion compared with ET intubation.
The findings of our study are comparable with those of Dave et al . and Misra et al .
After extubation, there was a significantly higher incidence of cough than after removal of the PLMA. Maltby et al . and Sinha et al . also reported that the incidence of cough was higher after extubation.
Bronchospasm was noted in two (6.6%) cases in group B and in none in group A. This finding also correlates with the studies of other authors.[ 7 ]
Supraglottic airways could be less irritating[ 8 ] to the upper or lower airway and are associated with less laryngeal stimulation,[ 7 ] leading to fewer significant postoperative complications.
Blood on the posterior surface of the PLMA was observed in six (20%) patients in group A, whereas in group B blood on the ET tube was observed after extubation in only two (6.6%) cases. In these patients some trauma occurred during laryngoscopy, and since uncuffed tubes were used, this is probably the reason for this unusual complication. Our findings are comparable with those of Lim et al ., who reported a 7% incidence of blood staining on the PLMA and 6% on the ET tube.[ 9 ] In one study, blood on the tracheal tube was reported more frequently than on the PLMA.[ 10 ]
There was no incidence of aspiration in either group of patients during induction of anaesthesia, the intraoperative period or after removal of the respective airway device. Other authors[ 2 11 12 ] have reported similar findings.
There was no incidence of hoarseness or sore throat after removal of PLMA or ETT and postoperatively even after 24 h. | CONCLUSION
Based upon the above observations, results and discussion, it is concluded that during routine paediatric surgical procedures of short duration, the ProSeal LMA is a useful alternative to endotracheal intubation. Though the PLMA is slightly more difficult to place, an effective airway can be achieved easily by an experienced anaesthesiologist, and there are lower incidences of complications.
||
PMC3016576 | 21224973 | INTRODUCTION
The sympathetic supply to the head, neck and upper limb is derived from the T1-9 segments and passes through the stellate ganglion (C5-T1). The stellate ganglion is 2.5 cm×1 cm×0.5 cm and lies over the neck of the 1 st rib, between C7 and T1. The most common approach for stellate ganglion block (STGB) is paratracheal, at the level of the Chassaignac's tubercle of C6.[ 1 2 ] In 1930, the efficacy of STGB was well established by White in the USA and Leriche in Europe. In 1933, Labat and Greene reported that injection of 33.3% alcohol can produce satisfactory analgesia. In 1936, Putnam and Hompton first used phenol for neurolysis.[ 3 ]
Besides complex regional pain syndrome (CRPS), sympathetic blockade has been found useful in circulatory problems of the upper limbs, such as arterial embolism, accidental intra-arterial injection of drugs and Meniere's syndrome. It has been indicated as immediate therapy for pulmonary embolism.[ 4 5 ] Repeated STGB injections have become popular for the long-term remissions they produce in CRPS. Serial blocks disorganize the reflex activity triggered in the internuncial neuronal pool of the spinal cord and in the sympathetics themselves.[ 6 ] The sympathetic blockade produces relaxation of the upper extremity arteries, which increases blood flow and peripheral temperature.[ 7 8 ] Peripheral vascular disease (PVD) of the upper limbs may be due to generalized atherosclerosis, thromboembolism, Buerger's disease, diabetic angiopathy or Raynaud's disease. Gradual ischaemia of nerves and tissues activates the sympathetic system, leading to the vicious cycle of pain-vasospasm-ischaemia-gangrene. Treatment is multimodal, with initial trials of alpha blockers, calcium antagonists, pentoxifylline or platelet inhibitors, especially when the obstruction results from spasm. Following failure of the primary line of treatment, patients are usually referred for sympathetic blocks with radiofrequency (RF) electrical thermocoagulation or chemical neurolysis of the ganglion.
Due to the non-availability/unaffordability of RF ablation, and to avoid the potential complications of chemical neurolysis, we decided to study the efficacy of ketamine as an adjuvant in enhancing the effects of STGB. Ketamine is known to manipulate the NMDA receptors that trigger the aberrant brain activity in neuropathic pain and to control the autonomic dysregulation.[ 9 ] Besides good analgesia, it has a local anaesthetic effect by blocking the Na channel.[ 10 ] At low doses of ketamine (0.1–0.5 mg/kg), psychotropic effects are less and can be managed with benzodiazepines.[ 11 ] Considering that opioids have a limited role in established neuropathic pain and have potential complications,[ 12 ] we felt ketamine to be a rational adjuvant for STGB. This report presents the results of 20 cases of PVD of the upper limbs with gangrene of the fingers, treated by serial STGB for 5 days with local anaesthetic (LA) and ketamine. | METHODS
A prospective analytical study was performed in 20 patients with PVD of the upper limbs over the last 5 years. Approval by the institutional ethical committee and informed consent were obtained. The chief complaints were severe throbbing pain with cold fingers and dry gangrene of 8 days to 1 month duration. Patients were thoroughly assessed and investigated. Laser Doppler flowmetry of the affected extremity was carried out. Patients with a history of recent myocardial infarction or heart block and those with an International Normalized Ratio (INR) >1.5 were excluded. Pre-block vital parameters, temperature of the normal and affected hands and visual analogue scale (VAS) score for pain were recorded. A diagnostic STGB was carried out with local anaesthetic for 2 days. Patients having pain relief of 50% of the initial magnitude and an increase in temperature of the affected hand of 1.5°C were subjected to therapeutic sympatholysis with ketamine. Pre-medication with intravenous midazolam 0.5 mg/kg was given for anxiolysis.
Technique
For the classical anterior paratracheal approach at the C6 level, the patient lay supine with the head extended at the atlanto-occipital joint and the mouth partially open so as to relax the muscles of the neck [ Figure 1 ]. After skin preparation, Chassaignac's tubercle was palpated at the level of the cricoid cartilage (1.5 cm from the midline), and the sternocleidomastoid muscle and the carotid vessels were retracted with one hand. Under C-arm guidance, a 21 G 5-cm needle was inserted perpendicular to the table to hit the C6 tubercle at a depth of approximately 2–2.5 cm. The needle was withdrawn 2 mm to prevent periosteal injection or injection into the longus colli muscle. After a negative aspiration test, 2 ml of 2% lignocaine + 8 ml of 0.25% bupivacaine was injected in increments. Monitoring of vital parameters during the block, for 30 min after the block and, later, hourly for 4 h was carried out in the recovery room. The block was repeated in a similar way on the 2 nd day to confirm the benefits. Later, for three successive days, STGB was repeated with 1.5 ml of 2% lignocaine + 8 ml of 0.25% bupivacaine + 0.5 mg/kg of ketamine. Intramuscular diclofenac 2 mg/kg or tramadol 2 mg/kg was given as soon as the VAS reached >6.
Clinical observations
For pain relief – (0–10 point) VAS 2-hourly for 8 h, later 4-hourly up to 16 h and at 24 h. Baseline axillary temperature and temperature of the affected hand before and after the block, then 2-hourly for 8 h, later 4-hourly up to 16 h and at 24 h. Signs of improved circulation – vasodilatation, skin texture and colour. Horner's syndrome. Immediate complications like hoarseness, weakness in the limb, respiratory insufficiency, etc. Weekly follow-up for 1 month, then monthly for 6 months and later 6-monthly for pain relief, sense of warmth, appearance of a line of demarcation, healing of the gangrenous lesion and return of peripheral pulsations. Any delayed complications.
Statistical analysis of the data was performed using the Data Analysis ToolPak in Microsoft Excel. As the sample size of the study group was 20, analysis of the quantitative data was carried out using Student's unpaired “ t ”-test for unequal variance, for comparison of mean VAS score, temperature, duration of analgesia, pulse rate and mean blood pressure. Differences were considered statistically significant if the probability ( P -value) was <0.05 and highly significant if P <0.001, whereas a probability of >0.05 was considered insignificant. Levels of significance were 0.001, 0.01 and 0.05, as appropriate. The power of the test was almost 1 wherever it was significant.
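The unpaired t-test for unequal variances is Welch's test; a minimal sketch using SciPy rather than Excel follows, with hypothetical VAS values standing in for the study data.

```python
# Sketch of Student's unpaired t-test for unequal variance (Welch's test),
# as used for comparing mean VAS between the LA-only and LA + ketamine
# blocks; the VAS arrays below are hypothetical, not the study data.
from scipy import stats

vas_la_only  = [4.0, 4.5, 3.8, 4.2, 5.0]   # post-block VAS after LA alone
vas_ketamine = [1.5, 1.0, 2.0, 1.8, 1.2]   # post-block VAS after LA + ketamine
t_stat, p = stats.ttest_ind(vas_la_only, vas_ketamine, equal_var=False)  # Welch
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```

| RESULTS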
STGB was performed on 20 patients (M/F=19/1), aged 25–65 years and weighing 40–70 kg. The underlying pathologies are mentioned in Table 1 .
Figure 2 depicts the mean pre-block values of days 1 and 2, where the mean VAS was 7 and the mean surface temperature of the hand was 29.89°C. Following STGB with LA, the mean post-block VAS was 4.25, with a significant rise in the mean temperature of the affected hand of 0.77°C ( P <0.001). The mean duration of analgesia observed was 7 h. Later, STGB was performed for three successive days with LA + ketamine 0.5 mg/kg. The pre-block mean VAS of days 3, 4 and 5 was 4.7, whereas the post-block mean VAS was 1.6, which was significantly less. The mean temperature rise obtained was 1.73°C. The duration of analgesia was significantly longer after the addition of ketamine, with a mean of 14 h ( P <0.001). There was no significant variation in the mean pulse rate (PR) or mean blood pressure (MBP) before and after the block, either with STGB alone or with the addition of ketamine.
Figure 3 shows the comparison of the mean VAS of days 1 and 2 (STGB with LA) vs. the mean VAS of days 3, 4 and 5 (STGB with adjuvant ketamine), with VAS recorded pre-block and post-block at 2, 6, 12, 16 and 24 h. The pre-block mean (SD) VAS of days 1 and 2 was 7 (0.96); following STGB, it was 2.67 (0.43) and 3.2 (0.6) at 2 and 6 h, which was highly significant ( P <0.001). At 12 and 16 h, the VAS rose again to 6.7 (0.9) and 6.8 (1.2), respectively, following the LA block ( P >0.05). However, the pre-block VAS of days 3, 4 and 5 was mean (SD) 4.67 (0.7), and there was a drop in VAS to 1.62 (0.73), 1.23 (0.68), 1.39 (0.5) and 2.22 (0.78) at 2, 6, 12 and 16 h, respectively, which was highly significant ( P <0.001).
Figure 4 shows the mean post-block temperature rise following STGB with LA on days 1 and 2 and after the addition of ketamine on days 3, 4 and 5. The pre-block mean (SD) temperature of the hand on days 1 and 2 was 29.89 (1.15)°C. The rise was highly significant at 2, 6 and 12 h, with a significant rise maintained at 16 h, following the LA block. On days 3, 4 and 5, the pre-block mean temperature was 32.1 (1.23)°C. The mean rise in temperature was highly significant at 2, 6, 12 and 16 h following the addition of ketamine to the STGB.
Table 2 shows the follow-up record at different times for pain relief, warmth and healing of the gangrenous fingers. Hundred percent pain relief was present at the 12 th week in (18/20) 94.7% of the patients. Later, a few patients were lost to follow-up (LF) between 6 and 24 months. Warmth in the hand was maintained in (19/20) 95% of the cases. Greater than 90% healing of the fingers was observed in (18/19) 94.7% of the patients at the 12 th week. The STGB was repeated in two cases, but a diabetic case required amputation of two phalanges at the 8 th week due to poor response. A repeat Doppler flow study was not possible in all the cases. At 6 months, 17 patients underwent laser Doppler flowmetry, showing improvement in flow from 50% to 90%. At 24 months, 10 patients had 50–100% improvement in the circulation of the hand.
Table 3 shows that there was transient Horner’s in 12 patients and hoarseness in six patients, which did not require any intervention. Haematoma formation occurred in one case, which subsided after 12 h. Bradycardia occurred in two patients, which responded to 0.2 mg of injection glycopyrrolate. Sixteen patients complained of light-headedness for 10–30 min, which was tolerable following midazolam pre-medication. | DISCUSSION
STGB is a well-accepted therapeutic technique. Wolf et al . observed an incidence of minor and short-lived complications of 1.7/1,000 patients, with no major complications, in a series of 45,000 cases.[ 13 ] Local anaesthetic blocks provide immediate relief, but they are not long lasting. Permanent relief may be obtained with repeated blocks using depo-steroids (5–7 times) or with neurolytic agents. The local anaesthetic interrupts the pain–spasm cycle, whereas corticosteroids cause membrane stabilization and inhibition of the synthesis/release of pro-inflammatory substances.[ 14 ]
Racz et al . advocated the injection of 3% phenol (2.5 ml of 6% phenol, 2.5 ml of 0.5% bupivacaine and 80 mg of methylprednisolone) at the stellate ganglion level via the C7 approach under fluoroscopy; they did not observe any long-term Horner's syndrome.[ 15 ] Harris et al . in 2006 reported profound pain relief following local application of buprenorphine at the stellate ganglion for head and face pain, supporting the presence of endogenous opioid receptors on the sympathetic ganglion.[ 16 ] Unlike with neurolysis of the lumbar sympathetic chain, many clinicians have avoided neurolysis of the stellate ganglion for long-term effect because of the risk of producing permanent Horner's syndrome or complications due to spread of the neurolytic solution.[ 1 ] For prolonged effects, the addition of 40 mg of triamcinolone or fentanyl has been advocated.[ 5 17 ] Alternatively, RF denervation has been performed at the stellate ganglion to diminish the chance of a permanent Horner's syndrome.[ 1 5 ] However, post-lesioning neuritis/neuralgia is observed in 10% of cases, and permanent Horner's syndrome or motor paralysis has also been reported.[ 3 17 ]
Many opioid and non-opioid drugs have been tried by different routes to treat chronic pain syndromes.[ 18 ] Among them, an NMDA receptor antagonist – ketamine – has gained popularity for its variety of actions by multiple routes. In CRPS, peripheral sensitization of pain, with persistent inputs to the dorsal horn of the spinal cord from the C fibres, causes the “wind-up” phenomenon.[ 19 20 ] The change is either a decrease in inhibitory receptors such as GABA or an increase in excitatory receptors such as NMDA.[ 21 ]
For neuropathic pain, ketamine is reported to be helpful in a dose range of 0.1–7 mg/kg and by infusion for 30 min to 8 h in both CRPS Type I and Type II patients.[ 22 ] Long-term pain relief has been observed in CRPS following low-dose infusion for a few days, with minimal haemodynamic or psychomimetic side-effects.[ 23 ]
PVD of the upper limbs presents with either acute intractable or chronic ischaemic pain, with discolouration and gangrene of the fingers. To achieve prolonged sympathetic blockade for pain relief and to maintain circulation for healing, ketamine appears to be a good adjuvant for STGB. Its action may be supraspinal through systemic absorption and peripheral through NMDA receptors located on the somatic nerves and dorsal root ganglia. Blockade of peripherally located NMDA receptors is a potential target in the management of neuropathic pain due to vascular insufficiency as well.[ 24 ] As there are changes in Na + and Ca 2+ channels in neuropathic pain, a combination of a Na + channel blocker (local anaesthetic) and an NMDA receptor antagonist appears rational for the treatment of neuropathic pain. The effect of opioids is decreased in patients with neuropathic pain.[ 25 ] Ketamine, on the other hand, is effective after the onset of hyperalgesic symptoms.[ 21 ]
The aim of our study was to evaluate the effects of ketamine with STGB. For the first 2 days, STGB was given with 10 ml of local anaesthetic. The mean pre-block VAS score was 7 and the post-block score attained at 6 h was 3.2, which was significantly less. Following the block with the LA agents, the duration of analgesia (time from STGB to the first request for analgesics at VAS >6) was 7 h (mean of days 1 and 2). After the first record of the duration of analgesia, patients received intramuscular diclofenac 2 mg/kg or tramadol 2 mg/kg as soon as the VAS reached >6. For the first 2 days, the intensity of pain was higher in spite of parenteral analgesics. Later, for three successive days, STGB with the addition of 0.5 mg/kg of ketamine resulted in a pre-block mean VAS (days 3, 4 and 5) of 4.7, showing a better response to the parenteral analgesics; the post-block VAS attained was 1.2, with a significantly prolonged duration of analgesia of mean 14 h. The results were similar to the observations of Rani Sunder et al .,[ 24 ] who noted a fall in VAS from 10 to 2.5 with STGB with bupivacaine and a VAS <1 following the addition of 0.5 mg/kg of ketamine to the STGB, with a duration of analgesia of 36 h, in two cases of CRPS. One of our cases of CRPS had a dramatic reduction in the oedema of the hand within 24 h after a single ketamine block. The temperature rise observed following STGB with LA was 1.6°C, and 2.7°C with the addition of ketamine. A temperature rise of >1.5°C was observed following successful sympatholysis.[ 26 – 28 ] There are clinical reports of pre-emptive STGB to increase the patency of radial artery grafts in coronary artery bypass surgery[ 29 ] and of its use in the treatment of refractory angina.[ 30 ] There are few studies demonstrating the effect of ketamine on the microcirculation when added to sympathetic blockade with LA agents. Probably because of prolonged analgesia due to NMDA antagonism and enhancement of the LA effect by action on Na channels, ketamine helps to maintain the vasodilatation/flow induced by serial STGB. This resulted in 90% pain relief by the 3 rd week in 17 patients and 100% pain relief in 15 patients over 4 weeks, and also accelerated healing of the gangrenous fingers, as observed in 18/19 patients. | CONCLUSION
STGB using a local anaesthetic with ketamine as an adjuvant is a safe and effective technique. Ketamine enhances the effect of the sympathetic blockade, relieving pain and ischaemia and maintaining circulation. Thus, ketamine is a useful adjuvant to STGB, and it obviates the need for permanent destruction of the ganglion by chemical/RF neurolysis in patients with PVD.
||
PMC3016577 | 21224974 | INTRODUCTION
The supraclavicular brachial plexus block provides anaesthesia of the entire upper extremity in the most consistent and time-efficient manner.
Since the ‘80s, clonidine has been used as an adjunct to local anaesthetic agents in various regional techniques to extend the duration of block. The results of previous studies on the usefulness of clonidine in brachial plexus block have been mixed. Some studies have shown that clonidine prolongs the effects of local anaesthetics,[ 1 – 3 ] but other studies have failed to show any effect of clonidine, independently of the type of local anaesthetic used (ropivacaine, bupivacaine and mepivacaine).[ 4 – 7 ] Moreover, others have indicated an increased incidence of adverse effects like sedation, hypotension and bradycardia.[ 4 6 – 9 ] Clonidine has been shown to be of benefit in central neuraxial blocks and other regional blocks by increasing the duration and intensity of pain relief[ 10 – 12 ] as also by decreasing the systemic and local inflammatory stress response.[ 13 14 ] Also, there is no reason for it to be ineffective specifically in brachial plexus blocks. This randomized, double-blind and placebo-controlled study tested the hypothesis that inclusion of clonidine with the local anaesthetic prolongs the duration of analgesia in supraclavicular brachial plexus block. | METHODS
The study protocol of this prospective, randomized, double-blinded, placebo-controlled trial was approved by the Hospital Ethics Committee. All participants gave written informed consent. Fifty patients, ASA physical status I–III, 18 years of age or older, undergoing surgery of the upper limb, were recruited. Excluded from the study were patients for whom supraclavicular brachial plexus block or the study medications were contraindicated or those who had a history of significant neurological, psychiatric, neuromuscular, cardiovascular, pulmonary, renal or hepatic disease or alcohol or drug abuse, as well as pregnant or lactating women. Also barred from the study were patients taking medications with psychotropic or adrenergic activities and patients receiving chronic analgesic therapy. Pre-medication was given with tablet Alprazolam 0.25 mg orally at 22:00 h on the night before surgery and at 06:00 h on the morning of the surgery. No additional sedative medication was administered in the first 60 min after injection of the study dose.
In our study, two groups ( n =25) were investigated: Group I (bupivacaine–clonidine) received 40 ml of bupivacaine 0.25% plus 0.150 mg of clonidine and Group II (bupivacaine) received 40 ml of bupivacaine 0.25% plus 1 ml of NaCl 0.9%. The anaesthetic solution was prepared according to a random-number table by means of a computer-generated randomization list by an anaesthetist not otherwise involved in the study. The anaesthetist performing the block was blinded to the treatment group. All observations were carried out by a single investigator who was also blinded to the treatment group.
Patients’ pulse rate, electrocardiogram and non-invasive blood pressure were recorded and a wide bore intravenous line was established. The patients were administered a brachial plexus block by supraclavicular approach. The site of injection was shaved and disinfected. The injection site was infiltrated with 1 ml of lidocaine 2% subcutaneously. A nerve stimulator (Stimuplex Dig RC; Braun Melsungen AG, Germany) was used to locate the brachial plexus. The location end point was a distal motor response with an output lower than 0.6 mA. During injection, negative aspiration was performed every 6.5–7.0 ml to avoid intravascular injection. Plexus block was considered successful when at least two out of four nerve territories (ulnar, radial, median and musculocutaneous) were effectively blocked.
Sensory and motor block of the musculocutaneous, radial, ulnar and median nerves were determined immediately and at 5, 10, 30, 60, 120, 180, 240, 360 and 480 min after completion of the injection. Patients were asked to note complete recovery of sensation, which was then verified by an anaesthetist or a nurse.
Sensory block was determined by the response to pin prick, using a visual analogue scale (VAS) from 100 [no sign of sensory block (maximal pain)] to 0 [complete sensory block (no pain)]. The sensory onset of each nerve was assessed by the same pin-prick method.
Motor block was determined according to a modified Lovett rating scale, ranging from 6 (usual muscular force) to 0 (complete paralysis) as follows: thumb abduction for the radial nerve, thumb adduction for the ulnar nerve, thumb opposition for the median nerve and flexion of elbow for the musculocutaneous nerve.
The duration of sensory block was defined as the time interval between injection and complete recovery of sensation.
Also measured at the above-mentioned time points were heart rate, non-invasive blood pressure, oxygen saturation and sedation. The sedation score ranged from 1 (alert) to 4 (asleep, not arousable by verbal contact).
Patients were observed for any discomfort, nausea, vomiting, shivering, bradycardia, pain and any other side-effects. Any need for additional medication was noted. Blood loss during surgery was calculated by the gravimetric method, with a view to replacing the loss if it exceeded the maximum allowable blood loss.
Results were expressed as mean±SD (SEM). Demographic and haemodynamic data were analysed using two-sample t -tests. For the modified Lovett rating scale, VAS and sedation scores, which were not normally distributed, the non-parametric Wilcoxon–Mann–Whitney test was applied. The time to recovery of sensation and adverse effects were analysed by the Chi-square test/Fisher’s exact test. A P -value <0.05 was considered statistically significant. Taking α=0.05, a 56% between-group difference in recovery of sensation at 8 h, and a sample size of 25 in each group, the power of the study was approximately 75%. | RESULTS
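The paper does not state the formula or software behind this power figure, so the following is only a minimal sketch of how a post-hoc power calculation for a two-proportion comparison might look; the proportions below are illustrative placeholders (any pair separated by 56 percentage points), not the study's actual recovery rates.

```python
# Hypothetical post-hoc power calculation for a two-proportion comparison.
# The proportions are illustrative placeholders, not the study data.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control, p_clonidine = 0.72, 0.16      # a 56-percentage-point difference
n_per_group = 25
alpha = 0.05

# Cohen's h effect size (arcsine transformation of the two proportions)
h = proportion_effectsize(p_control, p_clonidine)

power = NormalIndPower().power(effect_size=h, nobs1=n_per_group,
                               alpha=alpha, ratio=1.0,
                               alternative='two-sided')
print(f"effect size h = {h:.2f}, power = {power:.2f}")
```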
Demographic data
There were no differences between the clonidine and the control groups regarding age, sex, weight and height [ Table 1 ] or the site of surgery.
Comparison of modified Lovett rating scale
The modified Lovett rating scale at baseline and intra-operatively was comparable in the clonidine and control groups. However, post-operatively, after 240 min, the modified Lovett score was lower in the clonidine group than in the control group (0.67±1.61 vs. 2.04±1.67), and the difference was statistically significant ( P <0.05) [ Figure 1 ].
Comparison of VAS
The VAS scores [ Table 2 ] were consistently lower in the clonidine group from onset until 30 min. From 30 to 240 min, when there was an intense block in both groups, the VAS score was 0; after this it started rising in the control group while remaining low in the clonidine group. Because the VAS score was significantly lower from 5 to 30 min ( P -value at 5 min 0.043, at 10 min 0.008 and at 30 min 0.007), we concluded that onset with clonidine was faster. Again, because the VAS was significantly lower after 240 min, we concluded that the action was prolonged.
Time to recovery of sensation
There was no recovery of sensation in either group up to 2 h. From 2 to 4 h, 28% of the patients in the control group had recovery of sensation while none of the patients in the clonidine group did. The difference was statistically significant ( P <0.05).
Between 4 and 8 h, 72% of the patients in the control group had recovery of sensation as compared with 44% of the patients in the clonidine group, the comparison being statistically significant ( P <0.05) [ Figure 2 ]. In the majority of patients (56%) in the clonidine group, recovery of sensation occurred after 8 h, whereas in the control group all patients had recovered sensation by 8 h; the difference was statistically significant ( P <0.05) [ Table 3 ], showing a prolongation of block in the clonidine group.
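For the Chi-square/Fisher's exact comparison named in the statistics section, the 8-h recovery data can be tabulated directly from the reported percentages (56% of 25 = 14 patients not yet recovered in the clonidine group vs. none in the control group); the sketch below back-calculates those counts rather than using a raw data table.

```python
# Sketch: Fisher's exact test on the 8-h recovery comparison, with 2x2
# counts back-calculated from the reported percentages.
from scipy.stats import fisher_exact

#        [not recovered at 8 h, recovered by 8 h]
table = [[14, 11],   # clonidine group (56% of 25 not yet recovered)
         [ 0, 25]]   # control group (all recovered by 8 h)

_, p_value = fisher_exact(table)
print(f"P = {p_value:.2e}")   # well below 0.05, as reported
```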
Comparison of sedation score
Sedation scores in the clonidine and control groups were comparable throughout the study period. All patients were alert (sedation score=1) in both groups at all times of observation.
Comparison of saturation of oxygen
Oxygen saturation in the clonidine and control groups was comparable throughout the study period. All patients had oxygen saturation >99% in both groups at all times of observation.
Comparison of heart rate
The baseline heart rate was lower in the clonidine group than in the control group. The perioperative and post-operative heart rates varied at each time interval and were also lower in the clonidine group than in the control group; however, the difference was not significant ( P >0.05).
Comparison of blood pressure
The baseline blood pressure was comparable in the clonidine and control groups. The maximum fall in systolic and diastolic blood pressures in the clonidine group was noted at 60 min, whereas in the control group it was observed at 10 min for systolic and 30 min for diastolic blood pressure. The peri- and post-operative blood pressures varied at each time interval in both groups, and the differences were statistically insignificant ( P >0.05).
Side-effects
No side-effects were observed in either the clonidine or the control group throughout the study period. | DISCUSSION
Supraclavicular blocks are performed at the level of the brachial plexus trunks, where almost the entire sensory, motor and sympathetic innervation of the upper extremity is carried in just three nerve structures (trunks) confined to a very small surface area. Consequently, typical features of this block include rapid onset and predictable, dense anaesthesia, along with a high success rate.
Clonidine and local anaesthetic agents have a synergistic action. Clonidine enhances both sensory and motor blockade of neuraxial and peripheral nerves after injection of local anaesthetic solution, without affecting the onset.[ 10 – 12 ] This is thought to be due to blockade of conduction in A delta and C fibres, an increase in potassium conductance in isolated neurons in vitro and intensification of the conduction block achieved by local anaesthetics.
We found a significant difference in the onset of sensory block (as assessed by VAS) between the two groups. The VAS of the two groups was comparable at baseline. Thereafter, the VAS score was lower in the clonidine group than in the control group (43.60±22.15 vs. 55.20±15.31) up to 180 min. At 360 and 480 min, the VAS score was again lower in the clonidine group (0.00±0.00 vs. 6.80±8.52), and this was statistically significant ( P <0.05). These findings indicate a faster onset of sensory block and prolongation of analgesia with the use of clonidine. Most authors have reported no effect on the onset of block, which is at variance with our results[ 12 ]; this needs further evaluation. However, the prolongation of analgesia observed is consistent with other trials performed at the brachial plexus,[ 1 – 3 ] in popliteal block[ 15 ] and in a study in children undergoing a variety of blocks, which demonstrated that the addition of clonidine to bupivacaine and ropivacaine can extend sensory block by a few hours and increase the incidence of motor block.[ 16 ]
Among the studies showing no positive effect[ 4 – 6 ] of clonidine as an additive to brachial plexus blocks, various discrepancies have been discussed.[ 16 ] In one, patients were followed for only 12 h, which may not have been long enough for an effect of clonidine to be detected.[ 6 ]
In another study, the authors found (surprisingly) that the time to first administration of opioids after the nerve block was shorter in patients who received local anaesthetic and clonidine compared with those who received local anaesthetic only.[ 4 ]
The modified Lovett rating scale at baseline and intraoperatively was comparable in both groups. However, post-operatively, after 240 min, the modified Lovett score was significantly lower ( P <0.05) in the clonidine group (0.67±1.61 vs. 2.04±1.67). All patients in the control group recovered sensation within 8 h, whereas in 56% of the patients in the clonidine group recovery of sensation occurred only after 8 h, this too being statistically highly significant ( P <0.001).
Thus, it is evident that the recovery of sensation was prolonged in the clonidine group. Our result concurs with other similar studies.[ 9 17 18 ] We therefore favour the hypothesis that clonidine exerts an effect directly on the nerve fibre as a result of a complex interaction between clonidine and axonal ionotropic, metabolic or structural proteins (receptors), as shown in different laboratory studies.[ 19 20 ]
We also found an enhancement of perioperative analgesia and prolongation of recovery of sensation in the clonidine group, well beyond the pharmacological effect of either clonidine or bupivacaine. Direct modulation of the activity of sensory nerve fibres could conceivably explain the difference between the two groups in our study. Alternatively, this could have been a result of an overall better quality of anaesthesia at all times of surgery. Regardless of the mechanism, clonidine was found to have a valuable advantage in the field of peripheral nerve blocks when added to bupivacaine.
The difference in perioperative heart rate, blood pressure, sedation scores and oxygen saturation in both the groups was statistically insignificant ( P >0.05).
The results of our study showed stable perioperative haemodynamics with the use of clonidine. Moreover, sedation, which is often associated with the use of clonidine,[ 17 18 ] was not apparent in our study.
Most of the studies conducted using clonidine in regional anaesthesia did not report any adverse effects.[ 9 ] However, Buttner et al . and Bernard et al . reported hypotension and bradycardia with the use of clonidine.[ 18 21 ] In our study, no side-effects were observed in either the clonidine or the control group throughout the study period.
To summarize, our study suggests that clonidine 0.150 mg added to 40 ml of 0.25% bupivacaine significantly enhances the quality of supraclavicular brachial plexus block in upper limb surgeries, with a faster onset and prolonged duration of sensory and motor block and enhanced post-operative analgesia. These benefits are not associated with any haemodynamic changes, sedation or other adverse effects.
In conclusion, clonidine added to bupivacaine is an attractive option for improving the quality and duration of supraclavicular brachial plexus block in upper limb surgeries. | We compared the effects of clonidine added to bupivacaine with bupivacaine alone on supraclavicular brachial plexus block and observed the side-effects of both the groups. In this prospective, randomized, double-blinded, controlled trial, two groups of 25 patients each were investigated using (i) 40 ml of bupivacaine 0.25% plus 0.150 mg of clonidine and (ii) 40 ml of bupivacaine 0.25% plus 1 ml of NaCl 0.9, respectively. The onset of motor and sensory block and duration of sensory block were recorded along with monitoring of heart rate, non-invasive blood pressure, oxygen saturation and sedation. It was observed that addition of clonidine to bupivacaine resulted in faster onset of sensory block, longer duration of analgesia (as assessed by visual analogue score), prolongation of the motor block (as assessed by modified Lovett Rating Scale), prolongation of the duration of recovery of sensation and no association with any haemodynamic changes (heart rate and blood pressure), sedation or any other adverse effects. These findings suggest that clonidine added to bupivacaine is an attractive option for improving the quality and duration of supraclavicular brachial plexus block in upper limb surgeries. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):552-557 | oa_package/b5/f8/PMC3016577.tar.gz |
|||
PMC3016578 | 21224975 | INTRODUCTION
“Auto-co-induction”[ 1 2 ] is a technique of giving a pre-calculated dose of an induction agent prior to giving the full dose of the same induction agent; it is also known as “the priming technique”.[ 3 ] The application of the priming principle is well documented in relation to the use of muscle relaxants, where it involves giving a small sub-paralysing dose of the non-depolariser[ 4 ] (20% of the ED 95 or about 10% of the intubating dose) 2–4 minutes prior to administering the second, larger dose for tracheal intubation.
“Co-induction”[ 5 – 7 ] is defined as the concurrent administration of two or more drugs to facilitate the induction of anaesthesia, exploiting documented synergism.[ 8 9 ]
However, there is a paucity of studies[ 3 ] on the application of the priming principle to induction agents. In relation to induction agents, the technique aims to utilise the sedative, anxiolytic and amnesic properties of a sub-hypnotic dose of the induction agent given a few minutes prior to induction. This study was therefore done to evaluate whether the priming technique reduces the effective dose of the induction agent and favourably influences peri-intubation haemodynamics. Propofol and midazolam are a commonly used combination for induction and show synergistic interaction for hypnosis and reflex sympathetic suppression.[ 10 – 12 ] | METHODS
The present study was conducted in our department after obtaining the approval of the Institutional Ethical Committee. Ninety patients aged between 18 and 50 years, of American Society of Anesthesiologists (ASA) Grade I and II, of both sexes and with no history of adverse anaesthetic reaction, were randomly allocated into three equal groups of 30 patients each: group I (propofol), group II (midazolam) and group III (normal saline).
In the operating room, routine monitoring, i.e., non-invasive blood pressure (NIBP), pulse oximetry and continuous surface ECG, was used. Along with these, Bispectral Index (BIS) monitor BIS xp model no.A-2000 (Aspect Medical Systems Inc., Norwood, USA) was used. Fronto-temporal (BIS Quatro) surface electrodes were placed on the patient’s forehead, after skin preparation. The impedance of electrodes was checked and smoothing rate was set at 15 seconds. Pre-operative baseline values of heart rate (HR) and blood pressure (BP) (an average of two consecutive readings) were taken 5 minutes apart before the induction of anaesthesia. Baseline BIS value were also recorded. An intravenous line appropriate for the surgical procedure was secured in the left upper limb.
Patients in the three groups (I, II, III) received the priming agent (0.5 mg/kg IV propofol, 0.05 mg/kg IV midazolam or 3 ml of normal saline, respectively), followed 2 minutes later in all three groups by IV induction with propofol until a BIS value of 45 was achieved. The induction dose of propofol was injected at a rate of 30 mg/10 seconds in all cases. Any complication during this period (apnoea, vomiting, laryngospasm, involuntary movements, coughing) was noted.
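As a minimal sketch of the titration end point described above (propofol in 30-mg aliquots every 10 s until BIS reaches 45, with the cumulative dose recorded), the following illustrates the bookkeeping only; `read_bis` and `give_bolus` are hypothetical stand-ins for the monitor and syringe pump, and the demo numbers are invented.

```python
import time

def titrate_propofol_to_bis(read_bis, give_bolus, target_bis=45,
                            bolus_mg=30, interval_s=10, max_dose_mg=300):
    """Inject fixed aliquots until BIS falls to the target, and return
    the cumulative dose (the 'induction dose' recorded in the study)."""
    total_mg = 0
    while read_bis() > target_bis and total_mg < max_dose_mg:
        give_bolus(bolus_mg)        # 30 mg delivered over the next 10 s
        total_mg += bolus_mg
        time.sleep(interval_s)      # reassess BIS after each aliquot
    return total_mg

# Mock demo: BIS drifts down by a fixed amount per 30-mg aliquot.
bis = [97]
dose = titrate_propofol_to_bis(
    read_bis=lambda: bis[0],
    give_bolus=lambda mg: bis.__setitem__(0, bis[0] - 14),
    interval_s=0)                   # skip real waiting in the demo
print(f"recorded induction dose: {dose} mg")
```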
Subsequent relaxation and intubation were accomplished with rocuronium 1 mg/kg IV, and anaesthesia was maintained with O 2 /N 2 O (35%/65%), isoflurane and vecuronium (0.02 mg/kg). No stimuli were applied during the 5-minute post-intubation period.
The following parameters were recorded.
Total dose of propofol required to achieve the targeted BIS value; SpO 2 , BIS value, HR and NIBP [systolic blood pressure (SBP) and diastolic blood pressure (DBP)], measured just before induction, immediately after induction, immediately after intubation and 5 minutes after intubation. The post-operative recall phenomenon was also inquired about. | RESULTS
The sample size of this study was calculated assuming a power of 80% and an α (alpha) value of 0.05 as significant, using Epi Info software (version 6.0). All data were reported as mean value ±2 SD.
The data were analysed statistically using the SPSS statistical package (version 10.0). Comparison between the groups for the induction dose and haemodynamic parameters was done using analysis of variance (ANOVA) with Tukey’s post-hoc test. A P value of <0.05 was considered to be significant and P <0.001 was considered to be highly significant.
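As a sketch of the stated analysis pipeline (one-way ANOVA followed by Tukey's post-hoc comparison across the three groups), the snippet below uses synthetic placeholder doses, not the study data; only the structure of the analysis is the point.

```python
# Sketch of the stated analysis: one-way ANOVA with Tukey's post-hoc test.
# The dose samples are synthetic placeholders, not the study data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
propofol  = rng.normal(75.7, 15, 30)    # group I, illustrative mean/SD
midazolam = rng.normal(60.7, 15, 30)    # group II
control   = rng.normal(111.0, 15, 30)   # group III

f_stat, p = f_oneway(propofol, midazolam, control)
print(f"ANOVA: F = {f_stat:.1f}, P = {p:.3g}")

doses = np.concatenate([propofol, midazolam, control])
groups = ["propofol"] * 30 + ["midazolam"] * 30 + ["control"] * 30
print(pairwise_tukeyhsd(doses, groups, alpha=0.05))
```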
The demographic data were comparable for age, weight, gender and ASA grading among the three groups, as shown in Table 1 .
A statistically significant difference ( P <0.001) was observed in the propofol induction dose requirement in groups I and II compared to the control group. The mean induction dose requirement was 45.37% lower in the midazolam co-induction group and 31.88% lower in the propofol auto-co-induction group than in the control group [ Table 2 ].
A statistically significant ( P <0.001) difference was observed in post-priming BIS values among the three groups, with the maximum fall at the post-induction interval in the propofol group. BIS values showed no appreciable variation across the post-induction, post-intubation and 5-minute post-intubation intervals in either study group, i.e., the propofol auto-co-induction and midazolam co-induction groups [ Table 3 ].
Mean SpO 2 values showed no appreciable variation at any interval during the study in any of the three groups.
A statistically significant ( P <0.001) fall in HR was observed in the propofol auto-co-induction group at the post-priming interval. A post-intubation rise in HR was observed in all three groups, but the least rise was found in the propofol group (group I).
Mean SBP was maintained at induction in the control group, while a slight fall was observed in the other two groups. The maximum rise in SBP after intubation (20.19% above the pre-induction value) was observed in the midazolam co-induction group.
Mean DBP was likewise maintained at induction in the control group, with a slight fall observed in the other two groups. The maximum fall in DBP (17.99% from the pre-induction value) at the post-induction interval was observed in the propofol auto-co-induction group. The maximum rise in DBP (16.80% from the baseline value) at the post-intubation interval was observed in the midazolam co-induction group [ Table 4 ]. | DISCUSSION
The present study was conducted to evaluate the clinical efficacy of propofol auto-co-induction as compared with midazolam–propofol co-induction, in terms of reduction in the induction dose of propofol and better haemodynamic stability in the peri-intubation period.
In group I, after priming with propofol, the mean induction dose requirement of propofol [ Table 2 ] was 75.70 mg, compared with a mean induction dose of 111 mg in the control group: a 31.88% reduction in the induction dose of propofol with auto-co-induction. Previous studies[ 1 3 ] support the observation that propofol predosing significantly reduces its induction dose; Anil Kumar and colleagues[ 2 ] found a 27.48% reduction in the induction dose requirement of propofol after propofol auto-co-induction. The amnesic and sedative action of propofol at sub-hypnotic doses may facilitate the induction of anaesthesia at a lower induction dose of propofol.[ 13 ] In group II, after priming with midazolam, the mean induction dose of propofol [ Table 2 ] was 60.70 mg, compared with 111 mg in the control group: a 45.37% reduction with midazolam co-induction. Earlier studies[ 14 15 ] also support the reduction in the induction dose of propofol after midazolam pre-treatment.
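For concreteness, the quoted percentage reductions follow directly from the reported group means (the small discrepancy against 31.88% reflects rounding of the means):

```latex
\[
\frac{111 - 75.70}{111}\times 100\% \approx 31.8\% \ \text{(propofol auto-co-induction)},
\qquad
\frac{111 - 60.70}{111}\times 100\% \approx 45.3\% \ \text{(midazolam co-induction)}.
\]
```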
In the present study, a predetermined BIS value (45) was taken as the end point of induction.[ 16 17 ] The maximum reduction in BIS [ Table 3 ] at the post-priming interval was found in the propofol auto-co-induction group; contrary to this, the reduction in the induction dose requirement of propofol was maximal in the midazolam group.
There was a significantly lesser fall in both SBP and DBP in the propofol group at the post-induction interval. Propofol reduces BP by reducing vascular smooth muscle tone and total peripheral resistance and by decreasing sympathetic activity. The lesser fall in the propofol group was probably because of the reduction in the total induction dose of propofol after auto-co-induction; the finding of less post-induction hypotension with significant dose reduction in the propofol auto-co-induction group in the study carried out by Djaiani et al .[ 1 ] also supports this observation. A rise in HR secondary to intubation [ Table 4 ] was observed in all the study groups but was significantly lesser in the propofol auto-co-induction group. The rise in SBP and DBP [ Table 4 ] immediately after intubation and 5 minutes post-intubation was significantly higher in the midazolam co-induction group, whereas the rise in the propofol auto-co-induction group was comparable to that in the control group, where a much higher induction dose of propofol was used. Although propofol pre-treatment does not completely attenuate the reflex sympathetic stimulation secondary to intubation, it is definitely more advantageous than the other two regimens.
These observations indicate that although midazolam co-induction significantly reduces the induction dose of propofol, it does not provide haemodynamic stability in the peri-intubation period. Similar results were obtained by Cressy et al .,[ 14 ] who found a significant dose reduction of propofol in the midazolam pre-treatment group but no demonstrable benefit in terms of cardiovascular stability. | CONCLUSIONS
The present study compared the efficacy of propofol auto-co-induction versus midazolam propofol co-induction. The following conclusions and inferences can be drawn from this study:
A significant fall in the induction dose requirement of propofol was found in both study groups. Priming with propofol provided haemodynamic stability both at the post-induction interval and in response to intubation. Priming with propofol also appears to be cost effective, as it significantly reduces the total dose of propofol required. However, more studies with larger samples are required before these observations can be generalised.
||
PMC3016579 | 21224976 | INTRODUCTION
Brugada syndrome was first described as a separate clinical entity in 1992 by Brugada and Brugada.[ 1 ] Patients with Brugada syndrome have a characteristic electrocardiogram pattern of right bundle branch block (RBBB) with ST elevation in leads V 1 –V 3 and are at high risk of malignant dysrhythmia and cardiac arrest. Many factors during general anaesthesia (medications, bradycardia and temperature changes) can precipitate malignant dysrhythmia. We present the case of a 14-year-old male with Brugada syndrome and autism who was posted for automated implantable cardioverter defibrillator placement under general anaesthesia. | DISCUSSION
Brugada syndrome is genetic, characterized by abnormal electrocardiogram findings, and is also known as Sudden Unexpected Death Syndrome or Sudden Unexpected Nocturnal Death Syndrome.
The average age at presentation is 40 years, but it can vary from 2 to 77 years.[ 2 ] It is more common in men, with a higher prevalence in Asian populations. An estimated 4% of all sudden deaths and at least 20% of sudden deaths in patients with structurally normal hearts are due to the syndrome.[ 3 ] The condition is inherited in an autosomal-dominant pattern, with incomplete penetrance. It should be suspected in any cardiac arrest or syncope of unknown origin, with or without ventricular fibrillation. However, some patients remain asymptomatic and the diagnosis is suggested by a routine electrocardiogram showing ST-segment elevation in leads V 1 –V 3 . In about 20% of the patients, atrial fibrillation is an associated arrhythmia.[ 4 ]
It is an example of a channelopathy, a disease caused by an alteration in the transmembrane ion currents that together constitute the cardiac action potential. Specifically, in 10–30% of the cases, mutations occur in the SCN5A gene that encodes the cardiac voltage-gated sodium channel.[ 5 6 ] Loss of function mutations in this gene lead to a loss of the action potential dome of some epicardial areas of the right ventricle. This results in transmural and epicardial dispersion of repolarization. The transmural dispersion underlies ST-segment elevation and the development of a vulnerable window across the ventricular wall, whereas the epicardial dispersion of repolarization facilitates the development of phase 2 re-entry, which generates a phase 2 re-entrant extrasystole that precipitates ventricular tachycardia and/or fibrillation, which often results in sudden cardiac death.
Typical electrocardiogram findings include a typical RBBB with ST elevation in the precordial leads, characteristically “coved” or “saddleback,” with or without terminal S waves in the lateral leads, and no prolongation of the QT interval. Prolongation of the PR interval is also frequently seen [ Figure 1 ].
Brugada syndrome has three different electrocardiogram patterns. Type 1 has a coved-type ST elevation with at least 2-mm J-point elevation, a gradually descending ST segment and a negative T-wave. The electrocardiogram picture in this study shows the typical Type 1 pattern.
Type 2 has a saddle back pattern with a least 2-mm J-point elevation and at least 1-mm ST elevation, with a positive or biphasic T-wave. The Type 2 pattern can occasionally be seen in healthy subjects.
Type 3 has a saddle back pattern with <2-mm J-point elevation and <1-mm ST elevation, with a positive T-wave. Type 3 pattern is not uncommon in healthy subjects.
Increased serum potassium and calcium levels may generate a similar electrocardiogram pattern. Laboratory markers such as creatine kinase-MB (CK-MB) and troponin rule out an acute coronary syndrome. Echocardiography and/or magnetic resonance imaging should be performed to exclude structural abnormalities.
Drug challenge with sodium channel blockers is a standard provocative test used to unmask Brugada syndrome.[ 7 ] We used flecainide infused at a dose of 2 mg/kg over 10 min, with continuous cardiac monitoring. It produced an accentuation of ST segment elevation in the precordial leads.[ 7 8 ] The sensitivity and specificity of these tests have not yet been confirmed.
The electrocardiogram readings fluctuate depending on the autonomic balance.[ 9 ] Adrenergic stimulation decreases the ST elevation and drugs such as isoproterenol ameliorate the electrocardiogram manifestations. Fever, vagal stimulation, administration of class Ia, Ic and III drugs and neostigmine also accentuate the ST segment elevation. Acetylcholine, beta antagonists and nicorandil may interfere with the ionic conditions and exacerbate the manifestations.
Local anaesthetics, especially bupivacaine, given by any route (e.g., epidural), which cause a sudden rise in the serum concentration can unmask Brugada syndrome.[ 10 ] Lignocaine, a class IIb anti-arrhythmic drug has no such effect and can be safely used.[ 9 ]
Our patient had documented congenital non-progressive myopathy and, therefore, we avoided inhalation agents as a precaution against malignant hyperthermia. Isoflurane should also be avoided in patients with prolonged QT interval.[ 11 ]
Our patient had received atracurium earlier without any complications, and it was therefore our drug of choice for neuromuscular blockade. Administration of neostigmine is also known to elevate the ST segment; however, in our patient, it did not cause any detectable cardiac arrhythmia.[ 12 ]
The only treatment is the insertion of an automated implantable cardioverter defibrillator, which continuously monitors the heart rhythm and defibrillates the patient if ventricular fibrillation is noted. No pharmacological therapy is beneficial.
Some recent studies have evaluated the role of quinidine, a class Ia anti-arrhythmic drug, in decreasing the VF episodes occurring in this syndrome. Quinidine was found to decrease the number of VF episodes and to correct spontaneous electrocardiogram changes, possibly via inhibition of the Ito channels.
Brugada syndrome is increasingly recognized. Detecting it is important because of its high prevalence in the young Asian population; it may be a significant cause of death, aside from accidents, in men under 40 years. The true incidence is not known because of reporting biases. Although there is a strong population dependence, an estimated 4% of all sudden deaths and at least 20% of sudden deaths in patients with structurally normal hearts are due to the syndrome, with a mean age of sudden death of 41±15 years.[ 3 ]
As anaesthesiologists, we have to take care when using alpha agonists and neostigmine in such patients, and we should avoid class I anti-arrhythmic drugs altogether. These patients should be monitored in a high-dependency unit post-operatively so that any cardiac arrhythmia can be detected and treated in a timely manner.
A 14-year-old child, weighing 42 kg, was admitted with acute gastroenteritis and hypotension. The electrocardiogram showed ventricular fibrillation and he went into cardiorespiratory arrest; he was revived with immediate cardiopulmonary cerebral resuscitation and shifted to the intensive care unit with inotropic and ventilator support.
The 12-lead electrocardiogram showed a RBBB with a “coved” pattern of ST segment elevation in leads V 1 –V 3 . Cardiac enzymes were not elevated and 2D echocardiography showed normal ventricular function, with no underlying structural cardiac problem. A provisional diagnosis of Brugada syndrome was made.
Once the patient was stable, a flecainide challenge test was performed under sedation with an intravenous bolus of 2 mg/kg of flecainide. This resulted in a 50% accentuation of ST-segment elevation in leads V3 and V4, which is consistent with Brugada syndrome. He was posted for automated implantable cardioverter defibrillator insertion.
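As a worked example of the weight-based dosing, for this 42-kg patient the challenge dose works out as follows (the per-minute rate assumes the 10-min infusion described in the Discussion above):

```latex
\[
2\ \text{mg/kg} \times 42\ \text{kg} = 84\ \text{mg},
\qquad
\frac{84\ \text{mg}}{10\ \text{min}} = 8.4\ \text{mg/min}.
\]
```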
A detailed history revealed that the patient was autistic and had a documented non-progressive congenital myopathy. After clinically assessing the child pre-operatively, we obtained the necessary investigations. Because he was autistic and uncooperative, we decided to perform the procedure under general anaesthesia.
A written, informed consent was taken from the parents. During the procedure, the saturation (SpO 2 ), end tidal CO 2 , non-invasive blood pressure, temperature and electrocardiogram readings were monitored and an external defibrillator connected to disposable defibrillation pads was placed on the patient. After establishing an intravenous access, he was given midazolam 1 mg. Anaesthesia was induced with 1.5 mg/kg propofol and 1 mcg/kg fentanyl. Atracurium 0.5 mg/kg was used to intubate the patient. Anaesthesia was maintained with a Datex Ohmeda anaesthesia machine from which we had already removed the vaporizers and flushed the machine with oxygen. We used an oxygen:air mixture with propofol infusion and atracurium. The procedure was uneventful and there was no bradycardia or ventricular fibrillation unrelated to electrophysiological stimulation. At the end of the procedure, neostigmine 0.05 mg/kg and glycopyrrolate 0.008 mg/kg were used to reverse the neuromuscular blockade and the patient was extubated. He was monitored in the intensive care unit for 48 h and discharged following an uneventful hospital stay. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):562-564 | oa_package/93/10/PMC3016579.tar.gz |
||||
PMC3016580 | 21224977 | INTRODUCTION
Because of the high incidence of respiratory and cardiovascular complications, the anaesthetic management of a mediastinal mass is very difficult. Here we describe the successful management of an anterior mediastinal mass with the help of cardiopulmonary bypass. | DISCUSSION
In this patient, MRI showed tracheal constriction extending from the T1 to the T4 vertebra. The minimum tracheal diameter was 7 mm; above the constriction the tracheal diameter was 18 mm, and below the constriction the trachea bifurcated into the bronchi. The tracheal course was tortuous throughout the constriction.
A tumour compressing the airways or great vessels may create a critical respiratory and/or haemodynamic situation. Complete airway obstruction and cardiovascular collapse may occur during induction of general anaesthesia,[ 2 ] tracheal intubation[ 3 ] and positive pressure ventilation.[ 4 ] Standard anaesthetic management options include induction of anaesthesia on an adjustable surgical table, use of short-acting anaesthetics, avoidance of muscle relaxants, maintenance of spontaneous respiration during intubation and maintenance, and awake intubation with a fibreoptic bronchoscope. The idea of primary endotracheal intubation over a fibreoptic bronchoscope was abandoned here because of the extreme tortuosity of the trachea, to prevent airway injury and further hypoxia.
During anaesthesia for the endoscopic palliative management of a large anterior mediastinal mass, awake intubation has been reported, with spontaneous ventilation maintained on Heliox.[ 5 ] Heliox reduces resistance to airflow through a compressed airway and maintains oxygenation;[ 6 ] it is not available in our centre. Emergency CPB was established in a patient with a mediastinal mass when attempts at awake fibreoptic intubation failed.[ 7 ] In the case of severe clinical symptoms and large mediastinal tumours, preoperative cannulation of the femoral vessels under local anaesthesia and the availability of CPB are absolutely essential.[ 8 ] A temporary extracorporeal jugulo-saphenous bypass has been reported for the peri-operative management of a patient with superior vena cava (SVC) obstruction.[ 9 ]
Loss of control of the airway has been reported with both inhalational and intravenous induction in patients with mediastinal masses. Induction of anaesthesia and muscle relaxation may reduce chest wall tone and may exacerbate airway compression.[ 10 ]
So, it was decided here to initiate CPB via the femoral route. Induction of anaesthesia was accomplished with intravenous propofol just before the onset of CPB to prevent awareness, and thiopentone and vecuronium were added to the reservoir after initiating CPB to prevent deoxygenation and awareness at any point of time. An LMA was not used before CPB, to avoid losing control of an airway that would have been extremely difficult to ventilate under general anaesthesia; it was inserted after the onset of CPB to provide airway access and to oxygenate blood flowing through the pulmonary circulation, which was a possibility as the heart was never arrested and was allowed to beat during CPB. A flexometallic ETT was not used under spontaneous ventilation, to prevent airway injury; the ETT could be inserted only after the trachea had been released from the mediastinal mass. The patient was ventilated electively till the next morning and extubated on the following day. He did not develop tracheomalacia, but extubation was delayed to allow the oedema surrounding the trachea to settle following surgery. He was comfortable after extubation, maintained 100% SpO 2 on room air and had no memory of the surgery.
A CT scan is performed in these cases to identify the location, relation to adjacent structures, extent of tracheal or vascular compression and calcification. CT and MRI in the supine position, during pre-anaesthetic evaluation, detect position-related compression syndromes. General anaesthesia is safe when the CT-measured minimum tracheobronchial diameter is >50% of normal in asymptomatic adults; unsafe when it is <50% of normal, regardless of symptoms, in children; and of uncertain safety in mildly/moderately symptomatic children with a diameter >50% of normal and in mildly/moderately symptomatic adults with a diameter <50% of normal.[ 11 ] MRI is superior to CT as it distinguishes soft tissues from vascular structures and identifies vascular compression and tissue invasion.[ 1 ] Angiography and echocardiography are done if obstruction of the pulmonary artery and/or SVC is suspected; echocardiography also diagnoses pericardial effusion. The peak expiratory flow rate (PEFR) reflects the central airway diameter, and a PEFR less than 50% of predicted in the supine position signals a risk of anaesthetic complications. Maximum inspiratory and expiratory flow–volume loops are performed with the patient supine and standing to distinguish fixed from variable airway obstruction. In patients with variable intrathoracic lesions, the inspiratory flow is well preserved and the expiratory flow is diminished, with characteristic flattening of the expiratory loop.
CT guided needle biopsy is done with local anaesthesia in adults. It is difficult in children. In the case of tumours related to vascular structures, biopsy is done under GA by mediastinoscopy, thoracoscopy and mediastinotomy.
If the trachea cannot be intubated beyond the lesion, a microlaryngeal endotracheal tube, distal jet ventilation, rigid bronchoscopy with a Venturi injector, intubation of the proximal trachea and temporary stenting, or intubation in the prone, semi-erect or lateral position may help. In vascular compression of the right heart or pulmonary artery, inotropes and vasoconstrictors with intravascular volume loading are useful. Bleomycin is sometimes used to reduce the size of the tumour preoperatively but may cause pulmonary toxicity, so a pulmonary function test must be done before giving bleomycin; in patients treated with bleomycin, the oxygen concentration must be kept low during surgery to prevent pulmonary toxicity. | CONCLUSION
Preoperative control of the airway seemed impossible here, so CPB was used. Femoro-femoral CPB was initiated electively because during an emergency it is not only difficult to institute but also increases mortality. After the institution of bypass, an LMA was inserted to gain control of the airway, and after removal of the mass the patient was intubated. He was kept intubated electively for 2 days to allow the oedema of the surrounding soft tissue to settle.
So, when preoperative control of the airway is apparently impossible, fem-fem bypass under local anaesthesia may be instituted before induction to overcome airway obstruction and to gain control of ventilation as well as oxygenation, without causing any airway injury and without any incidence of hypoxia.
A 65-year-old male patient was admitted with severe dyspnoea, stridor and cyanosis. In spite of treatment with intravenous antibiotics, corticosteroids, a bronchodilator, chest physiotherapy and 100% oxygen inhalation, his SpO 2 , PaO 2 and PaCO 2 were 88–91%, 78 torr and 48 torr, respectively. His chest X-ray revealed a large mass on the right side of the chest, shifting and compressing the trachea, with widening of the upper mediastinal shadow [ Figure 1 ]. The trachea was deviated to the left with compression of its right side. His chest MRI [Figures 2 and 3 ] revealed a retro-sternal, anterior mediastinal solid mass compressing the trachea with critical narrowing of the tracheal lumen. Immediate surgical excision of the anterior mediastinal mass was necessary.
It was decided to establish a femoro-femoral cardiopulmonary bypass (CPB) under local anaesthesia, to induce general anaesthesia via CPB for excision of the mass, as endotracheal intubation on induction seemed extremely hazardous, if not impossible even over a fibre-optic bronchoscope.[ 1 ]
In the OT, intravenous and arterial cannulae were inserted in the left hand and a central venous cannula was inserted in the left subclavian vein under local anaesthesia. Due to oedema in both lower limbs, an IV line could not be inserted in the feet; the groins were left for femoral cannulation. The patient was allowed to inhale 100% oxygen continuously. The femoral artery and vein were cannulated in the right groin under local anaesthesia. Rocuronium 1 mg/kg, fentanyl 2 μ g/kg and thiopentone 2 mg/kg were added to the priming fluid in the CPB reservoir. Normothermic CPB was established with an initial flow of 2.4 L/min via the femoral route after full heparinization. Just after the onset of CPB, propofol 100 mg was given intravenously. Anaesthesia was maintained with oxygen, air, midazolam, fentanyl and propofol infusion via CPB. Assisted ventilation was possible via an LMA, which was introduced after the onset of CPB.
Median sternotomy was done. A firm mass was found in the upper anterior mediastinum and the lower part of the neck engulfing the lower trachea, brachiocephalic trunk and left innominate vein. The mass was subtotally removed to relieve these structures and the trachea was freed from the surrounding mass. After relieving tracheal compression, the LMA was replaced with an endotracheal tube of 8 mm internal diameter. Ventilation was continued via the endotracheal tube thereafter with oxygen and sevoflurane. The patient was weaned off CPB after the completion of surgery.
Monitoring included electrocardiography, pulse oximetry, capnography, thermometry, arterial blood gas, serum electrolytes, activated clotting time, spirometry, respiratory gas analysis, continuous invasive arterial blood pressure and central venous pressure recording.
He was shifted to the intensive care unit and electively ventilated till next morning, was on T-piece for another 24 h and was extubated thereafter. He recovered uneventfully and was discharged from the intensive care unit after 4 days. | Dr. Rejaul Karim, Associate Prof., Department of Radiology, I.P.G.M.E & R, Kolkata for interpretation of images. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):565-568 | oa_package/77/c9/PMC3016580.tar.gz |
||
PMC3016581 | 21224978 | INTRODUCTION
Ventriculo-peritoneal shunt (VPS) placement is a routine procedure in daily neurosurgical practice.[ 1 ] Although potentially fatal, postoperative intracerebral haematoma has not received its due mention in the literature.[ 1 ] Savitz and others reported a 4% incidence of delayed intracerebral haematoma or intraventricular haemorrhage after VPS placement.[ 2 ] Such an intracerebral event can adversely affect the outcome following general anaesthesia in the form of altered sensorium, delayed emergence or non-emergence. In such a situation, a postoperative computerized tomography (CT) scan is the best tool to detect an intracerebral pathology.[ 2 ] Further management depends on the location of the bleed and other factors. | DISCUSSION
Early emergence from anaesthesia is essential following neurosurgery to allow neurological evaluation. Zelcer and Wells[ 3 ] found a 9% incidence of unresponsiveness at the end of 15 min after general anaesthesia among 443 mixed surgical patients. Arousal taking longer than 15 min[ 3 ] or 30 min[ 4 ] has been labelled delayed emergence. Residual anaesthesia may either give the false impression of a neurological deficit[ 5 ] or prevent the early diagnosis of a developing intracranial lesion such as haematoma, herniation or cerebral infarction. A patient with altered sensorium is also at greater risk of airway obstruction, hypoxaemia, hypercarbia and aspiration.[ 6 ]
The most common cause for delayed awakening following anaesthesia is medications and anaesthetic agents used in the perioperative period.[ 7 8 ] There may be an overdose (absolute or relative in susceptible patients) of medications. Emergence from anaesthetic agents depend on the tissue uptake of the drug, average concentration used and the duration of exposure.[ 9 ] Certain underlying metabolic disorders such as hypoglycaemia, severe hyperglycaemia, electrolyte imbalance (especially hyponatraemia), hypoxia, hypercapnia, central anticholinergic syndrome, chronic hypertension, liver disease, hypoalbuminemia, uraemia and severe hypothyroidism may also be responsible for delayed recovery after anaesthesia.[ 9 ] Preoperative medications such as opioids and sedatives and hypothermia can further interfere with postoperative recovery.
Intraoperative cerebral hypoxia, haemorrhage, embolism or thrombosis also can manifest as delayed awakening from anaesthesia.[ 10 ] Although pupil size is not always a reliable indicator of central nervous system integrity, a fixed and dilated pupil in the absence of anticholinergic medication or ganglionic blockade may be an ominous sign. Therefore, patients with delayed emergence from anaesthesia after intracranial surgery undergo emergency CT scan or cerebral angiography.[ 9 ]
Small haemorrhages into the ventricles, the subependymal area and around the ventricular catheter are frequently seen following VPS surgery.[ 11 ] Udvarhelyi and others[ 12 ] first reported two cases of intracerebral haemorrhage after VPS insertion. The possible mechanisms of intracerebral haemorrhage after shunt insertion include a bleeding disorder, anticoagulant therapy, surgery-induced disseminated intravascular coagulation, disruption of an intracerebral vessel by the catheter, haemorrhage into an intracerebral tumour, multiple attempts at localizing the ventricles, haemorrhage from a vascular malformation or spontaneous vascular rupture secondary to progressive degenerative vascular changes.[ 2 11 12 ] Bleeding secondary to ventricular cannulation may be readily detected on intraoperative ultrasonography and postoperative CT or magnetic resonance imaging studies. Performing an urgent CT scan following non-emergence from general anaesthesia in a patient who required repeated attempts at ventricular catheter insertion is highly desirable, to exclude any intracranial event and address a treatable cause.[ 2 ] | CONCLUSION
Early emergence from anaesthesia is highly desirable following neurosurgical procedures. Delayed emergence, often blamed on the anaesthetic agents may not always be the culprit as seen in the current report. When other causes are excluded the possibility of an acute intracranial event as an aetiology for delayed awakening should be strongly considered. | Emergence from general anaesthesia has been a process characterized by large individual variability. Delayed emergence from anaesthesia remains a major cause of concern both for anaesthesiologist and surgeon. The principal factor for delayed awakening from anaesthesia is assumed to be the medications and anaesthetic agents used in the perioperative period. However, sometimes certain non-anaesthetic events may lead to delayed awakening or even non-awakening from general anaesthesia. We report the non-anaesthetic cause (acute intracerebral haemorrhage) for non-awakening following ventriculo-peritoneal shunt surgery. | CASE REPORT
A 40-year-old, 79-kg male presented with the diagnosis of a pineal region tumour and hydrocephalus. He had a history of frontal and occipital headache of 1 year duration, gradually decreasing vision and one episode of seizure 2 months earlier. He was on dexamethasone 4 mg 6 th hourly and phenytoin sodium 100 mg 8 th hourly. On first examination, his blood pressure was 132/80 mmHg, heart rate 67 beats/min and respiratory rate 17/min. Preoperative investigations were within normal limits, including the coagulation profile (prothrombin time 13.9 s with a control of 12.5 s, international normalized ratio 1.14, activated partial thromboplastin time 28 s) and platelet count. He was scheduled for VPS surgery under general anaesthesia. In the operation theatre, his blood pressure was 128/80 mmHg, heart rate 68/min and Glasgow coma scale (GCS) score 15/15. He was premedicated with glycopyrrolate 0.2 mg, ondansetron 4 mg and midazolam 1 mg. Anaesthesia was induced with thiopentone sodium 250 mg and fentanyl 150 μ g, and vecuronium 8 mg was used to facilitate tracheal intubation. Anaesthesia was maintained with nitrous oxide and isoflurane 1% to 1.2% at an inspired fraction of oxygen (FiO 2 ) of 0.4. The surgeon inserted the ventricular end of the shunt in the third attempt; non-haemorrhagic cerebrospinal fluid was drained under high pressure. The rest of the surgical procedure was uneventful, and the patient remained haemodynamically stable throughout the operative period. Residual neuromuscular blockade was reversed with neostigmine 3.0 mg and glycopyrrolate 0.4 mg. The respiratory parameters and the core temperature were within normal limits, and the pupils were reactive and normal in size. However, the patient remained unconscious even after 10 min, with only a localizing response to painful stimuli, and was therefore shifted to the intensive care unit (ICU). In the ICU, he was stable, with a blood pressure of 126/78 mmHg and a heart rate of 77/min; arterial gases and glucose were within normal limits. One hour after the surgery, the pupils were bilaterally dilated and sluggishly reacting, so 200 ml of mannitol 20% and furosemide 40 mg were administered. CT scan of the brain showed a massive intracerebral bleed into the left basal ganglia, brain stem and intraventricular area [ Figure 1 ]. The patient remained unconscious and died on the second postoperative day due to this intracranial catastrophe.
|||
PMC3016582 | 21224979 | Sir,
A 14-year-old patient underwent sub-aortic membrane resection with cardiopulmonary bypass support. Post induction, right internal jugular vein (IJV) cannulation was done in a single attempt with a triple-lumen catheter (16-G, 18-G and 18-G lumens). Intravascular placement of the catheter was confirmed by aspiration of blood from the proximal port, which was then connected to the transducer for central venous pressure (CVP) monitoring; the middle port was connected to the intravenous fluid and the distal one to inotropes. The CVP measured low (3 mm Hg) and, as the waveform was not clear, it was assumed that the catheter tip might be stuck against the vessel wall. Intraoperatively, fluid was given through the peripheral line. Towards the end of the procedure, the right pleural cavity was suctioned and a chest drain was placed. Surgery was completed uneventfully, the patient was shifted to the intensive care unit (ICU) and elective ventilation was started. Postoperatively, the CVP was still on the lower side (3 mm Hg) and 500 ml of colloid (Hestar 6%) was infused through the middle port of the CVP line. Suddenly, the right chest drain started collecting serous-type fluid, and about 500 ml was collected over half an hour. Misdirection of the catheter tip into the pleural cavity was suspected, and the infused colloid was drained out through the right-sided chest drain. An urgent chest X-ray confirmed the position of the catheter tip in the right pleural cavity [ Figure 1 ]. It was then confirmed that half of the catheter was inside the vessel and half outside, in the right pleural cavity, resulting in the erroneous CVP measurement.
Removal of the catheter from the right side was then planned. As soon as the catheter was removed, the BP fell to 50 mm Hg and the chest drain started collecting blood (400 ml). As an emergency measure, a 16-G cannula was inserted in the left external jugular vein for rapid volume infusion. Bleeding from the torn end of the vessel after removal of the catheter was suspected. Urgent re-exploration was done; intraoperatively, a rent was found in the lower end of the IJV, bleeding profusely through the apex of the right lung. The rent was repaired and about 3 l of blood was evacuated from the right pleural cavity. Once haemostasis was confirmed, blood transfusion was started and the BP reached 110 mm Hg.
Central venous catheters (CVCs) are an essential component of modern critical care. Despite their utility, placement of CVCs is often associated with complications[ 1 2 ] such as malposition of the catheter and perforation and/or injury of nearby blood vessels and structures.
Cannulating the left internal jugular vein has a higher chance of malposition because the left brachiocephalic vein has a more transverse lie, thus making the catheter more prone to angulation. There has also been a case reported of inadvertent placement of a jugular venous catheter into the left superior intercostal vein.[ 3 ]
Accidental puncture and perforation of blood vessels or injury to nearby structures usually results in a more catastrophic outcome and is not uncommon. In one study, carotid artery puncture occurred in 8.3% of patients undergoing internal jugular vein cannulation.[ 4 ] Schummer et al .[ 5 ] reported a case similar to our patient, with unrecognised stenosis of the superior vena cava (SVC). Perforation occurred in the SVC after catheterisation of the left internal jugular vein with a haemodialysis catheter. Extravascular positioning of the catheter was unrecognised and the patient subsequently died of complications.
Partial placement of a catheter (half inside the vessel and half outside) with extravasation from the distal part is seen very rarely and, to date, such a case has not been reported in the literature. In our case, the catheter was inside the vessel up to the proximal port, from which we could aspirate, but the rest of the catheter was outside the vessel, having pierced the vessel wall and migrated into the right apical pleural cavity. Our diagnosis was delayed because the middle and distal ports were not aspirated intraoperatively. Postoperatively, when the collection in the right chest drain became profuse after fluid was administered through the middle port of the catheter, this situation was suspected and later confirmed by radiography. A post-procedural chest radiograph is generally considered essential for identifying malposition of the catheter.
Once such vascular complications are diagnosed, their management becomes important. Our patient suddenly developed hypotension as he started bleeding profusely from the rent in the vessel after the catheter was removed. Although re-exploration and surgical haemostasis were done, we realised that such a chaotic situation could have been avoided by removing the catheter inside the operation theatre.
From this experience, we recommend that free venous outflow must be carefully checked in all the ports of a CVP catheter and that, following placement of such a catheter, a chest radiograph should be obtained to confirm its position. If such partial extravasation is found, the catheter should be removed in the operation theatre with blood products ready at hand for resuscitation. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):572-573 | oa_package/6e/5f/PMC3016582.tar.gz
|||||||
PMC3016583 | 21224980 | As most of us are aware, ventilator support came to stay after the polio epidemic in Denmark in the ‘50s. Many of us are also aware that Peter Safar, the Anaesthesiologist credited with pioneering cardiopulmonary resuscitation (CPR), wrote a book titled “The ABC of Resuscitation” in 1957 for training the public in CPR. It was later adopted by the American Heart Association. He also started the first intensive care unit (ICU) in the USA, in 1958. Ten years later, from 1968, the specialty grew from strength to strength in our country and, in 1992, the Society of Critical Care Medicine was formed.
By a stroke of luck, he discovered that the Lt. Governor of Delhi was to inaugurate a Road Safety Week Exhibition conducted by the Chief of the Traffic Police. He took two of his staff (suitably kitted out) and went to the venue of the exhibition. Cornering the Chief of the Traffic Police, he told him that he had come to see what all the activity was about and was full of praise for the concept of public awareness of road safety. He described how Irwin Hospital dealt with hundreds of accidents and what it cost the Government. Educating the public to obey traffic signals and avoid drinking and driving was important, he said, but it could not totally prevent an accident (although, of course, it would reduce the number). In a matter of a few minutes, he convinced the Traffic Chief that public awareness should be created about “immediate, on the spot action” in the event of an accident and that this exhibition needed that one element for its complete success. The Chief was enthusiastic about the concept, but reluctantly admitted that his staff knew nothing about resuscitation. The Professor declared that he had come to cover this lacuna and make the Police Exhibition a grand success.
He was allotted a 10’×10’ stall for display of posters and equipment. The police provided furniture and stationery and gratefully gave the anaesthetic team a royal salute! On the day of the inauguration, the stall looked impressive. Doctors were costumed in their coats and stethoscopes, with important-looking badges; there were drip stands and IV bottles and other equipment to grip the imagination and attention of the spectators. A complete full-length skeleton graced the stall, with two bags occupying the rib cage. The bags were ventilated by a Radcliffe ventilator of World War 2 vintage. Needless to say, for the wide-eyed public, this was far more interesting than the red, green and amber traffic lights and zebra crossings put up by the Traffic Police!
When the Lt. Governor came to inaugurate the exhibition and cut the ribbon, Prof. N P Singh was right in front to greet him and deftly led him to the resuscitation stall! A nonstop lecture-demonstration followed on all aspects of first aid – basic life support (BLS), advanced life support (ALS) and ambulance service. He stressed that, because accidents could not be avoided, there was an urgent and immediate need for a resuscitation ward/ICU in what was one of the premier institutions in the capital. The Lt. Governor, who was the ultimate authority to approve major changes, sanction major equipment and create new posts for medics and paramedics in Irwin Hospital, was suitably impressed. Very importantly, the press reporters who had come to cover the event were cornered and lectured on first aid, resuscitation and much more! The next day, the press reports on the inauguration of the Road Safety Week Exhibition looked and sounded like a plea for both educating the public in first aid and the acute need for a resuscitation ward/ICU in the largest hospital in Delhi, the Irwin Hospital.
After impressing the Lt. Governor of Delhi, Prof. N P Singh shifted his activity to Irwin Hospital. He literally “grabbed” a location near the Emergency Area (a strategic place adjacent to Casualty, Emergency OR and the Emergency Ward); equipment was procured by diverting funds allocated to other departments with the blessing of the Delhi Administration (and much to the anger of the affected departments). The day of inauguration of the resuscitation ward/ICU finally arrived, with junior doctors combing the entire hospital for patients to fill the beds. Finally, one patient was found who was waiting to be discharged after an overdose of barbiturate tablets. He, along with his case sheet, was transferred to the resuscitation ward/ICU. He was then instructed to lie still with his eyes closed. He played his part well! After the VIPs had departed, Prof. N P Singh returned to the ICU to retrieve his bag and found his entire staff along with the erstwhile “comatose” patient celebrating the occasion with Coca Cola. Being a good sport, he enjoyed the scenario.
Prof. N P Singh was born on 15.07.1931 and died on 04.12.2006. Although his obituary did mention the various professorial appointments he held, and the fact that he had organized 50 ambulances for the city of Delhi, I believe that he went to his grave “Unwept, Unhonoured and Unsung.” Being an MD (Anaesthesia) post-graduate student at Maulana Azad Medical College and affiliated hospitals at a time when an ICU was being started was interesting, stimulating and hilarious, thanks to Prof. N P Singh.
I wish to point out here that, in 1968, the going was rough, even in the capital city of Delhi. It was the passion and persistence of one man – Prof. N P Singh – that brought about the advancement of resuscitation and intensive care in Delhi. | The author thanks Prof. K M Rajendran, Retd. Director, Prof. and HOD Anaesthesia, JIPMER, Pondicherry, Prof. Pramod Kohli, HOD, Lady Hardinge Medical College, New Delhi and Prof. Ravi Shankar, Prof & HOD, Mahathma Gandhi Medical College, Pondicherry for their valuable information. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):574-575 | oa_package/cd/d9/PMC3016583.tar.gz
|||||
PMC3016584 | 21224982 | Sir,
A 17-year-old male, weighing 50 kg, American Society of Anesthesiologists physical status I, was scheduled for exploration and repair of a right panbrachial plexus injury under general anaesthesia (GA). Inside the operation theatre, we connected routine monitors to the patient. GA was induced with propofol 100 mg and fentanyl 100 mcg. The patient’s airway was secured with a size 3 ProSeal laryngeal mask airway (LMA). Anaesthesia was maintained with infusions of propofol 100 mcg/kg/min and fentanyl 1 mcg/kg/hr through a single 18G intravenous cannula on the dorsum of the left hand, along with nitrous oxide in oxygen (2:1 ratio). As the surgery required intraoperative use of a nerve stimulator, use of muscle relaxants was avoided. Mechanical ventilation was adjusted to maintain an end-tidal carbon dioxide (EtCO2) of 35–37 mmHg. After 50 minutes of an uneventful course, we noticed a sudden drop in maximum and mean airway pressure along with a ventilator alarm of “high drive gas pressure” [ Figure 1a ], but the EtCO2 reading remained unaltered. We looked for any leak or obstruction in the breathing system, inadequate fresh gas flow, a displaced LMA, the position of the bag/ventilator selector valve, and proper connections and functioning of the ventilator. When all these possibilities were ruled out, we noticed that the breathing tubes were being squeezed by the patient’s hand [ Figure 1b ]. Immediately, a bolus of 40 mg of propofol was administered. The patient’s grip over the breathing tube loosened and the airway pressures returned to normal. Our search for the cause revealed that the extension line carrying the propofol and fentanyl infusions was blocked. At the same time, the infusion pump failed to give an alarm. The obstruction was cleared and the rest of the surgery was uneventful.
High or low airway pressure conditions in the breathing system have been a major cause of anaesthesia-related mortality and morbidity.[ 1 ] Conditions that can trigger a low airway pressure alarm include: a disconnection or major leak in the breathing system; inadequate fresh gas flow; a leaking tracheal tube cuff; extubation; the bag/ventilator selector valve in the bag position; a faulty or unconnected ventilator; gas or power supply failure to the ventilator; or obstruction upstream of the pressure sensor.[ 2 ] In our situation, the EtCO2 reading was unaltered in spite of the altered airway pressure; in the presence of dangerously abnormal airway pressures, the exhaled carbon dioxide may remain relatively normal.[ 1 ] Airway pressure alarms detect high or low airway pressure conditions in the breathing system. In our case, because of the airway pressure alarm, the obstruction of the propofol and fentanyl infusions was detected in time and the consequences of inadequate anaesthetic depth were avoided. We suggest careful observation of airway pressure alarms during intraoperative periods in which the use of muscle relaxants is avoided. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):576a | oa_package/e5/8e/PMC3016584.tar.gz
|||||||
PMC3016585 | 21224981 | Sir,
Priapism after neuraxial or general anaesthesia is rare and may delay or even cancel the planned urological procedure. Our patient was a 66-year-old gentleman with hypertension, diabetes, coronary artery disease (double vessel disease, anterolateral wall myocardial infarction 2 years earlier) and chronic obstructive pulmonary disease, with benign prostatic hypertrophy (70 g prostate). He was posted for laser prostatectomy under spinal anaesthesia in view of the coexisting diseases. The level of block was the T10 dermatome. Thirty minutes into the surgical procedure, the patient started having penile engorgement, which became maximal over the next 10 min, forcing us to stop the surgery. After achieving haemostasis and waiting for 15 min in the hope of spontaneous detumescence, intravenous glycopyrrolate 0.2 mg followed by incremental doses of ketamine to a total of 50 mg was given. Throughout this period, the patient was relaxed and pain free. Intracavernous injection of α-adrenergic agonists was decided against in view of his cardiovascular status. After waiting for 1 hr and informing the patient and attendants, further surgery was called off. Gradual spontaneous detumescence was observed in the third postoperative hour.
Intraoperative penile erection when observed is more common in patients younger than 50 years, with epidural anaesthesia or general anaesthesia with propofol.[ 1 ] It is difficult to perform transurethral procedure during penile erection because attempts to do so may lead to complications, such as excessive bleeding and urethral trauma.
The commonly quoted techniques for treatment of penile erection under anaesthesia are intravenous ketamine, glycopyrrolate and terbutaline;[ 2 ] increasing the depth of anaesthesia with inhalational anaesthetics; intracavernous injection of α-adrenergic agonists (epinephrine,[ 3 ] phenylephrine[ 1 ]) and dorsal nerve block. Intravenous glycopyrrolate has been shown to be an effective drug, especially because of its stable cardiovascular profile.[ 4 ]
Imbalance between sympathetic and parasympathetic nervous systems is considered as an underlying mechanism for intraoperative erection, although local stimulation before complete sensory blockade can contribute to the problem. Detumescence is mediated by adrenergic stimulation that causes a constriction of penile venous sinusoids and opening of emissary veins leading to increased blood drainage.[ 5 ] Psychogenic and reflex erections may occur during the early stages of spinal anaesthesia when the pathways involved are still incompletely blocked.[ 6 ]
Therapy must be quickly initiated to enhance venous drainage of the engorged corpora cavernosa before prolonged venous stasis leads to increased viscosity associated with sludging and impairment of the routes of venous egress.[ 7 ] It must be emphasized that, for the successful detumescence of the penis, the relationship of treatment to the duration of erection is the critical factor and therapy should be tailored to the patient’s condition. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):576b-577 | oa_package/d2/35/PMC3016585.tar.gz
|||||||
PMC3016586 | 21224983 | Sir,
A 61-year-old man presented with central chest tightness. Although the Electrocardiogram (ECG) did not show ischaemic changes, the troponin was raised. He was in fast atrial fibrillation with haemodynamic compromise. He was given two Direct Current (DC) shocks, after which the rhythm converted to sinus. Cardiac risk factors included Diabetes Mellitus (DM), hypercholesterolaemia and smoking. Transthoracic Echocardiography (TTE) and Coronary Angiography were planned.
Transthoracic Echocardiography revealed the following findings — (1) borderline dilated Left Ventricle (LV); (2) moderate to severely reduced LV systolic function; (3) severe posterior wall hypokinesia; (4) Ejection Fraction (EF) of 25 to 35%; (5) heavily calcified aortic valve with poor cusp excursion (maximum systolic gradient = 20 mmHg).
Cardiac Catheterisation confirmed these findings and showed significant three-vessel coronary artery disease.
It was planned to do a Coronary Artery Bypass Graft with or without Aortic Valve Replacement (AVR). Transoesophageal Echocardiography (TOE) was performed (four days after the TTE), on the evening prior to the surgery, to assess whether an AVR should be done. It was decided to do a coronary artery bypass graft (CABG) plus AVR. Incidentally, a thrombus was found in the left atrial appendage [ Figure 1 ].
On the day of the operation, the patient was taken to the theatre. Along with the usual monitoring for cardiac surgery, a transoesophageal echo (TOE) probe was also inserted. Surgery started; sternotomy and pericardiectomy were done and the heart was exposed. Prior to handling of the heart, the presence of a thrombus in the Left Atrial Appendage (LAA) was confirmed by TOE. The surgeons aimed to minimise manipulation of the heart to prevent the thrombus from dislodging. Aortic cannulation was done and, prior to the right-sided cannulation, the surgeons were considering bicaval cannulation rather than right atrial cannulation, in an attempt to minimise the manipulation of the heart. During this time they enquired regarding the thrombus, which was still visible in the Left Atrial Appendage. Suddenly, the thrombus dislodged from the appendage and disappeared from our view on the echocardiogram [ Figure 2 ]. We then saw it tumbling in the left ventricle for a few seconds [ Figure 3 ], after which it disappeared into the circulation. The surgery was continued.
As the thrombus had dislodged into the circulation, a search was made for it after the CABG plus AVR had been done. A carotid ultrasound and examination of peripheral pulses were performed. The left leg below the knee was found to be cold and the pulse diminished. Vascular surgeons were called to review and they decided to do a Popliteal Embolectomy. The thrombus was found in the Popliteal Artery and was removed. This was followed by a return of a good pulse in the Popliteal and Anterior Tibial arteries.
Postoperatively, the patient was transferred to the intensive care unit (ICU), weaned and woken up. He was alert and oriented. There was no focal neurological deficit. Haemodynamically he was stable, with peripheral pulses present. The left foot was warm and well perfused. On postoperative day one he was transferred to the ward. A few days later he was discharged home.
There are no guidelines for the management of a left atrial thrombus seen pre-operatively.
Theoretically, there are a few management options:
(1) heparin;
(2) thrombolysis — carries a high risk of systemic embolism;
(3) surgical removal.
The patient had recently had a myocardial infarction (MI) and also had aortic stenosis. Giving heparin the night before surgery was not a feasible option, and postponing surgery would have carried a high risk to the patient.
On the other hand, thrombolysis causes the thrombus to lyse into small particles that can embolise systemically, causing stroke or organ and limb ischaemia. As the patient was going for surgery the very next day, it was decided to proceed without any change in the plan.
During the surgery, the surgeons tried to minimise manipulation of the heart, with the anaesthetist keeping an eye on the thrombus with the help of TOE. Unfortunately, the thrombus did embolise, but only to the leg.
As we were able to visualise this on TOE we started looking for signs of embolisation as soon as the surgery was over. A carotid ultrasound and examination of the peripheral pulses were done in the diagnostic workup.
Echocardiography has proven to be a useful tool in the diagnosis and evaluation of cardiac masses. TOE, when used during cardiac surgery, has been shown to influence surgical and medical management.[ 1 – 3 ]
In this case, TOE helped us actually see the thrombus migrate. This made us look for the signs of embolism and manage them early. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):577-579 | oa_package/fc/b3/PMC3016586.tar.gz
|||||||
PMC3016587 | 21224984 | Sir,
I read with interest the article titled “Role of ketamine in fiberoptic era” by Chand et al .[ 1 ] I congratulate the authors for successful airway management of a case of post-burn contracture presenting with fixed flexed neck deformity. The authors used intravenous ketamine along with lidocaine 2% for the release of neck contracture before an LMA (Laryngeal Mask Airway) was placed for ventilation.
According to the authors, release of the contracture under local anaesthesia could not be done for fear that the safe dose of local anaesthetic would be exceeded. I do not agree with them on this point. “Tumescent anaesthesia” is a technique for delivery of local anaesthesia that maximises safety by using pharmacokinetic principles to achieve extensive regional anaesthesia of skin and subcutaneous tissue. The subcutaneous infiltration of a large volume of very dilute lidocaine (as low as 0.1%) and epinephrine causes the targeted tissue to become swollen and firm, or tumescent, and permits procedures to be performed without subjecting patients to the inherent risks of local anaesthetic toxicity. The use of dilute lidocaine allows administration of doses up to 35–55 mg/kg. This technique has been used safely for procedures like harvesting skin grafts,[ 2 ] liposuction and post-burn neck contracture release.[ 3 ] | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):579-580 | oa_package/89/87/PMC3016587.tar.gz
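As a back-of-the-envelope illustration of the dose–volume arithmetic behind tumescent anaesthesia (a sketch only — the 35 mg/kg figure is the lower bound quoted in the letter above, while the 50 kg body weight and the helper function are hypothetical and not taken from any reported case):

# Illustrative arithmetic for tumescent lidocaine dosing (hypothetical values).
def max_tumescent_volume_ml(weight_kg, max_dose_mg_per_kg=35.0, concentration_pct=0.1):
    """Maximum infiltrate volume (ml) for a given lidocaine dose ceiling.
    concentration_pct of 0.1 means 0.1% w/v lidocaine, i.e. 1 mg/ml."""
    max_dose_mg = weight_kg * max_dose_mg_per_kg   # total permissible dose in mg
    mg_per_ml = concentration_pct * 10.0           # convert % w/v to mg/ml
    return max_dose_mg / mg_per_ml

print(max_tumescent_volume_ml(50))   # -> 1750.0 ml of 0.1% solution (1,750 mg in total)

Dilution is what makes the technique workable: the same 1,750 mg given as plain 2% lidocaine would occupy under 90 ml, but would far exceed the conventional, non-tumescent toxicity ceiling.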
|||||||
PMC3016588 | 21224985 | Sir,
There is a misconception among many of our surgical colleagues that air conditioning in operating theatres is only for our comfort. If that were so, all working areas in hospitals would have this facility. But all of us would agree that this is not the case in most hospitals in India. This means that air conditioning in operating theatres must serve much more than comfort alone.
Air conditioning is not synonymous with cooling. An optimal air conditioning system filters air, maintains a certain number of air exchanges per hour, and maintains temperature and humidity of the area. Thus, besides providing acceptable indoor climate for the patients and the personnel, air conditioning removes odour and anaesthetic gases, and reduces the risk of infection to the patient by controlling airborne microorganisms in the room. There are standards for operating room air conditioning. Air supply of 0.24 m 3 per minute per person is the critical level for odour suppression.[ 1 ] At least 20 air changes per hour should be maintained to control microorganisms and for comfort. The operation theatre (OT) temperatures should be between 20 and 24°C and the relative humidity 40–60%.[ 2 ]
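The standards quoted above lend themselves to a simple checklist. The sketch below is illustrative only — the thresholds are the figures cited in this paragraph, while the function itself is a hypothetical helper and not part of any published standard:

# Minimal check of an operating theatre environment against the quoted standards.
def ot_environment_ok(air_changes_per_hr, temp_c, rel_humidity_pct):
    """Return (ok, problems) for a set of OT air-conditioning readings."""
    problems = []
    if air_changes_per_hr < 20:
        problems.append("fewer than 20 air changes per hour")
    if not 20 <= temp_c <= 24:
        problems.append("temperature outside 20-24 degrees C")
    if not 40 <= rel_humidity_pct <= 60:
        problems.append("relative humidity outside 40-60%")
    return (len(problems) == 0, problems)

print(ot_environment_ok(25, 22, 50))   # -> (True, [])
print(ot_environment_ok(12, 33, 70))   # -> (False, [...]) e.g. the 32-34 degree C scenario described below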
Despite the above facts, which are well documented in books, journals and articles on the internet, many surgeons and anaesthetists still get into heated arguments about taking up cases in the elective OTs when the hospital's air conditioning is not working. Surgeons are willing to operate even at OT temperatures of 32–34°C. Sometimes they sweat due to the excessive heat and the sweat falls into the patient’s wound. Many times, patients become hyperthermic during the surgery. Yet the surgeons derive great satisfaction from having performed the surgery under adverse circumstances.
Proper air conditioning increases the productivity of staff. Moreover, prevention of hospital-acquired infection reduces the cost of antibiotics, hospital stay and the loss of productive man hours.[ 1 ] It is very sad that, when publications from all over the world comparing different types of air conditioning systems for reducing infection rates are pouring in,[ 3 4 ] we Indians are still trying to convince each other that air conditioning is mandatory for operating theatres and that we should not give in to pressure from our surgical colleagues to allow elective cases to proceed when there is a problem with the hospital air conditioning system. | CC BY | no | 2022-01-12 15:30:18 | Indian J Anaesth. 2010 Nov-Dec; 54(6):580 | oa_package/18/43/PMC3016588.tar.gz
|||||||
PMC3016589 | 21224986 | Sir,
Arterial blood pressure (ABP) is a basic haemodynamic index often utilized to guide therapeutic interventions, especially in critically ill patients. Inaccurate ABP measurement creates a potential for misdiagnosis and mismanagement. I would like to share an important and interesting experience that should be valuable for students.
A 74-year-old male (43 kg, ASA status II) patient was posted for parotid gland excision. The patient’s medical history was significant for chronic obstructive lung disease. He had fair effort tolerance (New York Heart Association-II). All pre-operative investigations were normal except for moderate obstruction on spirometry and typical hyperinflation features on chest X-ray. His physical examination and airway assessment were unremarkable.
After pre-oxygenation with 100% oxygen, the patient was induced with intravenous fentanyl 2 μg/kg, propofol 2 mg/kg and atracurium 0.5 mg/kg. His non-invasive systolic blood pressure was between 80 and 90 mmHg. We placed a radial arterial cannula with 100 cm of stiff tubing that was free of air bubbles and used a FloTrac™ sensor for arterial pressure monitoring. The invasive BP was 176/114 mmHg. There was no ringing or resonance and the arterial tracing was absolutely normal. Non-invasive BP showed 82/46 mmHg. We cross-checked our BP reading and found it to be correct. We then changed the disposable transducer, after which the invasive BP correlated with the non-invasive BP. The remaining intra-operative period was uneventful and the patient was extubated successfully.
Ideally, the pressure waves recorded through the intravascular catheter should be transmitted undistorted to the transducer and then to the amplifier, display or recording system.[ 1 2 ] Unfortunately, the mechanical transmission system oscillates (rings or resonates) after being set in motion by the arterial pressure wave. These oscillations produce small pressure waves that are superimposed on those caused by the pressure pulse itself, thereby introducing artefacts that distort the measured pressure.[ 1 3 ] In our case, there was no ringing or resonance as the arterial waveform was not distorted. Most disposable transducers have natural frequencies of several hundred hertz, but the addition of saline-filled tubing and stopcocks that may trap tiny air bubbles results in a monitoring system with a markedly reduced natural frequency.[ 1 4 ] We used 100 cm of stiff tubing that was free of air bubbles and a FloTrac™ sensor for arterial pressure monitoring. In our case, the disposable transducer was faulty, probably having a natural frequency in the unacceptably low range [ Figure 1 ].[ 1 ] Most monitoring systems are underdamped but have a natural frequency high enough such that the effect on the monitored waveform is limited.[ 2 ]
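The practical lesson of this report — never trust a single pressure source — can be expressed as a simple plausibility check. The sketch below is illustrative only: the mean arterial pressure estimate is the standard DBP + (SBP − DBP)/3 approximation, while the 20 mmHg tolerance and the function names are arbitrary examples rather than validated alarm limits:

# Illustrative cross-check between invasive and non-invasive blood pressure.
def mean_arterial_pressure(sbp, dbp):
    """Approximate MAP as diastolic pressure plus one third of the pulse pressure."""
    return dbp + (sbp - dbp) / 3.0

def pressures_agree(invasive, non_invasive, tolerance_mmhg=20):
    """Flag a probable measurement fault when the two MAP estimates diverge.
    invasive, non_invasive: (systolic, diastolic) tuples in mmHg."""
    gap = abs(mean_arterial_pressure(*invasive) - mean_arterial_pressure(*non_invasive))
    return gap <= tolerance_mmhg

# The readings from this case: 176/114 invasive versus 82/46 non-invasive.
print(pressures_agree((176, 114), (82, 46)))   # -> False: investigate the transducer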
In conclusion, invasive BP should always be correlated with non-invasive BP, and any discrepancy noted should be rectified to avoid misdiagnosis and mismanagement. | CC BY | no | 2022-01-12 15:30:19 | Indian J Anaesth. 2010 Nov-Dec; 54(6):581-582 | oa_package/bb/be/PMC3016589.tar.gz |
|||||||
PMC3016590 | 21224988 | Sir,
I read with interest the case report by Raiger et al ,[ 1 ] titled “Non-cardiogenic pulmonary oedema after neostigmine for reversal: A report of two cases.” The authors had described two cases of pulmonary oedema after routine surgeries. One patient developed it after tracheal extubation and required a very short period of mechanical ventilation. A laryngeal pack was used in this patient. The other patient developed it before tracheal extubation and required a relatively longer period of mechanical ventilation. This patient was reported to have difficulty in breathing and hence tracheal extubation was delayed.
Post-obstructive pulmonary oedema is caused by significant fluid shifts resulting from changes in intrathoracic pressure.[ 2 ] The negative intrathoracic pressure generated when a patient attempts to inspire against a closed glottis or obstructed airway leads to an increase in venous return and a consequent rise in pulmonary venous pressure. This creates a hydrostatic gradient, with fluid moving from high pressure (pulmonary venous system) to low pressure (pulmonary interstitium and airspaces).[ 3 ] The negative intrathoracic pressure, along with the resultant hypoxia, also depresses the cardiac output by increasing myocardial wall stress and systemic vascular resistance, which increases the pulmonary venous pressure further.[ 4 ] Laryngospasm has been reported to be the cause in >50% of cases, whilst other causes include tracheal secretions, hiccups, and biting the endotracheal tube.
Drug-induced non-cardiogenic pulmonary oedema may be due to pulmonary venoconstriction, capillary leak syndrome, intravascular fluid volume overload, and/or reduced serum oncotic pressure.[ 5 ] Drugs known to cause the above have been enumerated by Reed and Glauser.[ 5 ] Neostigmine has never been reported to cause pulmonary oedema in contemporary literature.
Millions of anaesthetics have been delivered with the use of neostigmine, but the development of postoperative pulmonary oedema has not been attributed to it. It would be far-fetched to attribute the occurrence of pulmonary oedema in the two cases reported to the use of neostigmine without clear evidence or a viable argument. | CC BY | no | 2022-01-12 15:30:19 | Indian J Anaesth. 2010 Nov-Dec; 54(6):582a | oa_package/7f/24/PMC3016590.tar.gz
|||||||
PMC3016591 | 21224987 | Sir,
A 75-year-old male patient, American Society of Anesthesiologists physical status II, a known case of carcinoma of the urinary bladder who had previously undergone trans-urethral resection of the bladder tumour and received intravesical BCG and six cycles of radiotherapy, was referred to our hospital with persistent hematuria of seven months' duration. A diagnosis of post-radiotherapy haemorrhagic cystitis was made. The patient twice underwent cystoscopy with clot evacuation for the same under subarachnoid block. However, the hematuria persisted, necessitating repeated transfusions of packed red blood cells. As a last-ditch effort to stop the diffuse bleeding, an old technique of intravesical instillation and irrigation with formalin was planned.[ 1 ]
The patient was a diabetic on insulin. He had no other comorbid disease. His preoperative haemoglobin was 7.4 g% and coagulation profile was within normal limits.
In view of the short duration of the procedure, general anaesthesia was planned. Following intravenous fentanyl (60 μg) and propofol (100 mg), a ProSeal laryngeal mask airway was inserted. Anaesthesia was maintained with O2, N2O and isoflurane. The patient was placed in the lithotomy position and 4% formalin was instilled intravesically. The formalin was kept in situ for 20 min, after which the bladder was evacuated. There was an immediate hypertensive response following formalin instillation: the blood pressure rose from a baseline value of 110/80 mmHg to 180–190/100–120 mmHg with a pulse rate of 86 beats/min. The hypertensive response persisted despite repeating fentanyl (90 μg). After the procedure, on awakening, the patient complained of severe, unbearable pain in the suprapubic region despite repeated doses of IV fentanyl and morphine. His blood pressure continued to remain high. To alleviate his pain, an epidural catheter was inserted in the L2-3 space and 10 ml of 0.125% bupivacaine administered. After 20 min the pain subsided and his vitals returned to normal.
Formalin has a desiccating effect when applied to living tissue; it hydrolyzes proteins and coagulates superficial tissue. This effect controls the haemorrhage from telangiectatic capillaries in the mucosal and submucosal layers.[ 1 ] Sloughing of the urothelium, local oedema and inflammation cause severe pain. Regeneration can take up to three weeks. Suprapubic pain, dysuria, a reduction in bladder capacity, urgency and incontinence are known complications.[ 1 ] The severity of the pain requires that, where possible, the procedure should be carried out under regional anaesthesia. Not many anaesthesiologists may be aware of the severe pain engendered by the intravesical instillation of formalin and we wanted to share our experience. We came across only one other report where suprapubic pain following formalin instillation was managed with intravesical lidocaine.[ 2 ] However, the duration of pain relief with such a technique would be limited. An epidural catheter offers the advantage of repeated dosing which can be beneficial, as our patient needed epidural morphine for two days. | CC BY | no | 2022-01-12 15:30:19 | Indian J Anaesth. 2010 Nov-Dec; 54(6):582b-583 | oa_package/d8/77/PMC3016591.tar.gz
|||||||
PMC3016592 | 21224989 | Sir,
We would like to report a case of haemothorax which occurred after removal of a subclavian venous catheter. A 35-year-old male patient, a case of left cerebellopontine angle tumour, was posted for craniectomy and excision. On the day of surgery, after induction of general anaesthesia, subclavian venous catheterization was done on the right side through the standard infraclavicular approach. Central venous pressure monitoring was done intraoperatively, and intraoperative haemodynamics and vital parameters remained normal. After surgery, the patient was shifted to the intensive care unit for postoperative ventilatory support. A chest X-ray taken in the postoperative period with the catheter in situ was normal [ Figure 1 ].
In the ICU, patient was weaned from ventilator support and extubated on the first postoperative day. Patient was conscious and oriented and was maintaining stable haemodynamics after extubation.
On the third postoperative day, the subclavian catheter was removed and a dressing applied. Within 2 h of removal of the catheter, the patient started complaining of respiratory difficulty with pain on the right side of the chest. He was maintaining an SpO2 of 91–93%, and clinical examination revealed reduced air entry on the right side of the chest. A chest X-ray showed an effusion in the right chest with collapse of the right lung [ Figure 2 ]. An intercostal drain (ICD) was inserted in the right side of the chest and about 1 litre of blood collected in the ICD bag. The flow of blood through the ICD gradually reduced and stopped after a few hours [ Figure 3 ]. The patient became comfortable with stable vitals after ICD insertion. The ICD was retained for three days and then removed. Repeat chest X-rays showed no further collections and the patient was discharged.
Various complications like pneumothorax and haemothorax have been reported to occur during insertion of a subclavian venous catheter.[ 1 2 ] However, haemothorax occurring after subclavian catheter removal is an unusual complication. Massive haemothorax after subclavian catheter removal in a patient who had undergone renal transplant was reported previously by Collini in 2002.[ 3 ] The probable mechanism behind this complication is injury to the pleura during insertion, with a communication opening up between the vein and the right pleural cavity after catheter removal. This complication is reported to emphasize that careful monitoring is necessary after subclavian venous catheter removal. | CC BY | no | 2022-01-12 15:30:19 | Indian J Anaesth. 2010 Nov-Dec; 54(6):583-584 | oa_package/56/e7/PMC3016592.tar.gz
|||||||
PMC3016593 | 21224990 | Sir,
A 60-year-old male patient weighing 82 kg was posted for emergency evacuation of a subdural haematoma. Apart from a reduced level of consciousness (semiconscious, GCS 10) with no focal neurological deficit, pre-anaesthetic evaluation of the patient revealed no abnormalities. On examination of the airway, all parameters such as mouth opening, upper lip bite, temporomandibular joint subluxation and thyromental and mentohyoid distances were within normal limits, and the airway was Mallampati grade II.
The patient was pre-medicated with 1 mg of midazolam, 0.2 mg of glycopyrrolate and 100 μg of fentanyl intravenously (IV). After induction with thiopentone sodium 300 mg IV, 6 mg of vecuronium bromide IV was given, and it was noted that although mask ventilation was possible, there was some resistance to air entry into the trachea, as suggested by reservoir bag resistance and inflation of the stomach. To relieve the airway resistance, a Guedel airway was inserted, after which airway resistance decreased and saturation was maintained. On attempting endotracheal intubation, the epiglottis could be visualized but the glottis could not be seen even after external laryngeal pressure was applied. On the third attempt, we tried a gum elastic bougie, using the epiglottis as a guide. When this attempt also failed, we put in a call to the ENT surgeon to establish a surgical airway. While waiting for the surgeon, because the patient was already anaesthetized, we decided to attempt an emergency retrograde intubation.
Cricothyroid membrane puncture was performed with a 16 G needle, and a guide wire used in central venous cannulation was advanced cephalad through the needle and the larynx and out of the mouth; the tracheal tube was then passed over the guide wire. We went ahead with the procedure and timed ourselves until we observed equal bilateral air entry after endotracheal intubation. The entire procedure took only 145 s. The surgical procedure and anaesthesia were uneventful. On completion, the residual neuromuscular blockade was reversed and the patient was shifted to the Neuro-ICU and extubated after full recovery of consciousness.
This technique was first reported by D.J. Waters in 1963.[ 1 ] The use of retrograde wire technique to assist in the management of difficult airway was first reported in 1981. Since then, modifications of this technique have included the use of the fiberoptic bronchoscope to permit tracheal intubation under direct visual control.[ 2 ] Difficult intubation is defined as inadequate visualization of the glottis and failed tracheal intubation as inability to insert a tracheal tube from the oropharynx into the trachea.[ 3 ] This technique may be useful in trauma patients requiring cervical spine immobilization as well as in patients with facial trauma, trismus, ankylosis of the jaw and cervical spine, upper airway masses and bleeding.
Fiberoptic intubation, a more recent but technically demanding technique, is considered the safest and most effective method in known or suspected cases of difficult intubation. Its primary advantage is that it permits direct visual control of the intubation procedure.[ 4 ] However, in developing countries, flexible fiberoptic laryngoscopes are rarely available and, even when present, require expertise to use. Bleeding in the oropharynx can obscure the airway with a fiberoptic laryngoscope, even for an experienced anaesthetist. Alternative methods may be needed.[ 5 ] Retrograde intubation is a good alternative for airway management in difficult airway conditions where no fiberoptic laryngoscope is available. | CC BY | no | 2022-01-12 15:30:19 | Indian J Anaesth. 2010 Nov-Dec; 54(6):585 | oa_package/d6/3d/PMC3016593.tar.gz
|||||||
PMC3016594 | 21224991 | Sir
Subcutaneous tunnelling is routinely practiced for anchoring the epidural catheter. The different techniques for subcutaneous tunnelling are associated with complications like needle-stick injury to the clinician or shearing of the epidural catheter. Rose GL (2009) reported the efficacy of a needle sheath for prevention of these complications.[ 1 ]
Still, the likelihood of shearing of the epidural catheter cannot be ruled out. Following shearing, the epidural catheter ordinarily has to be pulled out, as it cannot be used for administering medication.[ 2 ] Recently, we came across shearing of an epidural catheter during subcutaneous tunnelling [ Figure 1 ]. We did not pull out the epidural catheter; rather, we cut the catheter at the point of shearing and attached the cut end of the catheter (near its point of exit from the back of the patient) to the catheter connector and filter assembly. Thereafter, we administered medication through this assembly, which was later fixed to the back of the patient [ Figure 2 ]. Subsequently, a high-pressure, low-volume extension tube was attached to the assembly and used for administration of drugs for the management of postoperative pain. We could safely and effectively manage the patient intra-operatively and took care of his postoperative pain (three days) without the need to replace the epidural catheter, thereby saving time and money.
Fortunately, the shearing happened at an extracutaneous site, so there was enough spare length to attach the filter and connector assembly. Although a short catheter length, the slightly bulky assembly (filter system) or the absence of a fixation loop may pose a possibility of dislodgement, it is a safe and cost-effective method. We therefore suggest that this technique could be employed in cases of shearing of the epidural catheter during subcutaneous tunnelling. | CC BY | no | 2022-01-12 15:30:19 | Indian J Anaesth. 2010 Nov-Dec; 54(6):586a | oa_package/02/bb/PMC3016594.tar.gz
|||||||
PMC3016595 | 21224992 | Sir,
It is well known that contamination of the anaesthesia work area with potential bacterial pathogens and blood occurs intraoperatively[ 1 ] following general anaesthesia. It has been demonstrated that bacterial contamination occurs early (within as little as 4 min) and is unrelated to factors of case duration, urgency, or patient American Society of Anaesthesiologists physical status. Contamination with saliva represents a potential risk, since saliva is the main vehicle of infection for nonparenteral transmission of hepatitis-B.[ 2 ] Bacterial transfer to patients is associated with the variable aseptic practice of anaesthesia personnel. Placing the laryngoscope blade in a container (e.g. a kidney tray) following intubation with subsequent contamination of the anaesthesia work area is unwanted but not an uncommon feature in the operation theatre.
We propose the use of the plastic cover of the disposable PVC endotracheal tube to hold the contaminated laryngoscope blade, avoiding soiling of the top of the anaesthesia machine and the work area. The laryngoscope blade can easily be kept inside this plastic cover, which will have been opened before intubation [ Figure 1 ]. The contaminated blade can then safely be transported and cleansed without any untoward contamination, and the plastic cover safely discarded. One needs to be careful that the two sides of the cover (one plastic, the other paper) do not come apart during the whole procedure, lest they contaminate other areas.
The complex intraoperative environment has been theoretically associated with the development of nosocomial infections and may contribute to the emerging pattern of increasing bacterial resistance in the hospital setting. Currently, there is no consensus regarding a satisfactory method for the routine cleaning, disinfection and sterilization of laryngoscope blades and handles. It has been shown that 33% of anaesthesia work surfaces[ 3 ] and 38% of laryngoscope blades are contaminated with blood, with a high proportion of both blades and handles showing evidence of microbial contamination.[ 4 ] Tobin MJ and colleagues[ 5 ] suggested the use of commercially available plastic to cover the laryngoscope handle during use to prevent contamination. This involves extra effort and cost; also, the laryngoscope blade, rather than the handle, is the most contaminated part. We think that it is more appropriate to handle the contaminated blade carefully to prevent cross-contamination.
Our method of safeguarding the anaesthesia work area from soiling with the contaminated laryngoscope blade is very simple, does not involve any extra expenditure and has the potential to reduce iatrogenic transmission of infection in the perioperative setting. | CC BY | no | 2022-01-12 15:30:19 | Indian J Anaesth. 2010 Nov-Dec; 54(6):586b-587 | oa_package/a9/c8/PMC3016595.tar.gz |
|||||||
PMC3016596 | 21224993 | Sir,
Central venous (CV) catheterisation is a routine procedure in intensive care units (ICU), with an overall complication rate of 12%. Loss of a complete guide wire into the circulation is one of its rare and preventable complications.[ 1 ] A 65-year-old male patient in our neurosurgical ICU required CV access on the second postoperative day, following an intracranial tuberculoma excision. CV catheterisation of the right femoral vein with the Seldinger technique was attempted in him by an unsupervised first-year resident doctor. During cannulation, an inadvertent leg movement by the drowsy patient caused the guide wire to slip out of the resident’s fingers into the circulation. On radiography, the guide wire was seen to have ascended the inferior vena cava and entered the heart [Figures 1 and 2 ]. The migrated wire did not produce any cardiac manifestations. Heparinisation of the patient for guide wire extraction was deferred for 2 days due to the recent neurosurgery. The guide wire was then removed percutaneously using a gooseneck snare under fluoroscopic guidance in the cardiac catheterisation laboratory.
Migration of a guide wire into the circulation can occur from any of the usual CV catheter insertion sites.[ 1 – 3 ] A complete guide wire may not necessarily produce any symptoms and its loss may remain unnoticed for long.[ 2 ] However, intravascular migration of a broken guide wire has the potential of causing adverse effects like vascular damage, thrombosis, embolism and arrhythmias; embolism from guide wire fragments can be fatal in up to 20% of instances.[ 1 ] Cardiac tamponade manifesting 3 years after a guide wire loss has been reported as a late complication, highlighting the importance of wire extraction as soon as a diagnosis is made.[ 3 ] Retrieval is usually done by interventional radiology using gooseneck snares, endovascular retrieval forceps or Dormia baskets; surgical removal is also reported.[ 4 ]
The usual contributing factors to guide wire loss are operator inexperience, inattention and inadequate supervision during catheterisation.[ 1 ] In our patient too, these were the main causes, compounded by the sudden movement of the disoriented patient. Expert operator skills and compliance with the catheterisation protocol are mandatory to prevent this complication. Firmly holding on to the tip of the guide wire at all times during catheterisation is the mainstay of prevention.[ 1 ] Prior sedation of disoriented patients may help achieve a smoother cannulation, though sedatives need to be used cautiously in neurosurgical patients. | CC BY | no | 2022-01-12 15:30:19 | Indian J Anaesth. 2010 Nov-Dec; 54(6):587-588 | oa_package/50/fa/PMC3016596.tar.gz
|||||||
PMC3016597 | 21224994 | Sir,
A naso-gastric tube (NGT) is usually indicated for enteral feeding, aspiration of intestinal secretions, or stomach wash in cases of suspected poisoning. Before using this tube for any procedure, it is imperative to check and confirm the correct position of its distal end, because occasionally the tube may inadvertently enter the airway instead of the gastrointestinal tract.[ 1 ] One can easily imagine the unfavourable outcome (from aspiration pneumonitis, pneumothorax and collapse of alveoli to even death) if the tube remains undetected in the trachea and enteral feeding or stomach wash is started.[ 2 3 ] We report two patients scheduled for emergency exploratory laparotomy who had acute-onset hoarseness of voice.
A 28-year-old male with perforation peritonitis and a 40-year-old female with intestinal obstruction were scheduled for emergency exploratory laparotomy, each with a size 16 NGT in situ. While doing the preoperative assessment, hoarseness of voice was noticed in both cases and found to be acute in onset. As we could find no reason other than the possibility of the NGT lying in the trachea, the NGT was removed without confirmation of its position in the first case. As soon as the tube was taken out, the patient regained a normal voice. With this previous experience, it was planned to confirm the position of the NGT by direct laryngoscopy in the second case. After explaining the procedure to the patient, direct laryngoscopy was done and it was found that the tube was entering the larynx instead of the oesophagus. The tube was taken out following confirmation of its wrong position. In both cases, the NGT was repositioned in the oesophagus after intubation, under direct laryngoscopy and guided with Magill's forceps.
Hoarseness is usually caused by a problem in the vocal cords. During phonation, the vocal cords meet in the midline and, as air leaves the lungs, they vibrate, producing sound [ Figure 1 ]. Anything that prevents proper approximation of the cords (foreign body, tumour, polyp, inflammatory reaction, vocal cord palsy) leads to difficulty in producing sound when trying to speak, or to a change in the pitch or quality of the voice. The voice may sound weak, scratchy or husky. Possible common causes of sudden hoarseness of voice include: tonsillitis, adenoiditis, heavy smoking, alcoholism, excessive crying, voice overuse (as in singers), irritant gas inhalation, viral illness, ingestion of caustic liquid, foreign body and allergies.
Although awake, healthy persons have protective airway reflexes that prevent the entry of any foreign body into the trachea, general debility and weakness lead to partial suppression of the laryngeal reflexes.[ 2 ] Galley states, “It is well known that even in the conscious patient the larynx has a greater tolerance for foreign bodies which do not move”.[ 4 ] Respiratory distress, coughing, straining, and retching are not normal reactions to the passage of a Ryle’s tube. When these occur to excess in any patient, particularly in the exhausted, toxic, or otherwise debilitated, the tube should be under suspicion until proved, by positive gastric aspiration, to be in the stomach.[ 2 ] So one must always be alert to the possibility of an incorrectly positioned Ryle’s tube.
There are various methods to ascertain the position of the tube in the stomach, like aspirating 2 ml of stomach content with a syringe and pouring it on litmus paper (gastric content turns blue litmus paper red), injecting 5 ml of air into the tube (a whooshing sound over the epigastrium heard with a stethoscope), X-ray of the chest and upper abdomen, pH testing of feeding tube aspirates, capnography and colorimetric CO2 detectors.[ 5 ] Although radiologic confirmation of tube placement remains the “gold standard”, there is growing evidence that pH testing of feeding-tube aspirates can reduce (although not totally eliminate) reliance on X-rays used for this purpose.[ 5 ]
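Of the bedside checks listed above, pH testing of the aspirate is the most algorithmic. The sketch below is illustrative only — the pH ≤ 5.5 cut-off is one commonly quoted threshold for gastric placement (e.g. in UK National Patient Safety Agency guidance), the function is a hypothetical helper, and no bedside rule replaces radiographic confirmation in doubtful cases:

# Illustrative decision logic for pH testing of naso-gastric tube aspirate.
def ngt_placement_advice(aspirate_ph):
    """Return advice based on aspirate pH (cut-off of 5.5 as commonly quoted)."""
    if aspirate_ph is None:
        return "No aspirate obtained: do not use the tube; confirm position radiographically."
    if aspirate_ph <= 5.5:
        return "pH consistent with gastric placement."
    return "pH > 5.5: placement uncertain (possibly respiratory or oesophageal); obtain an X-ray."

print(ngt_placement_advice(4.0))   # -> gastric placement likely
print(ngt_placement_advice(7.2))   # -> uncertain; X-ray needed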
We therefore recommend that, in any patient with a naso-gastric tube in situ who develops acute-onset hoarseness of voice, the exact position of the naso-gastric tube must be reconfirmed with any of the above-mentioned techniques before starting any procedure, to avoid fatal consequences. | CC BY | no | 2022-01-12 15:30:19 | Indian J Anaesth. 2010 Nov-Dec; 54(6):588-590 | oa_package/e5/15/PMC3016597.tar.gz
|||||||
PMC3016603 | 20828336 | Introduction
Arginine vasopressin (Avp) is a cyclic nonapeptide that exerts diverse biological effects through a number of receptors (R), three of which have been cloned to date in mammals; the vasopressin Avpr1a, Avpr1b and Avpr2—Avp also binds to the structurally related oxytocin (Oxt) receptor (Oxtr) with high affinity. Each member of the Avp receptor family has a discrete peripheral distribution and function; however, data for the central distribution of these receptors are still incomplete. The Avp receptor family are G protein-coupled receptors: the Avpr1a and Avpr1b subtypes are both coupled to G q/11 and signal via phospholipase C ( Jard et al. 1987 ; Thibonnier et al. 2001 ). The Avpr2 receptor subtype is coupled to G s which, when activated, elevates cAMP levels by recruiting adenylate cyclase. It should be noted that many G protein-coupled receptors (GPCRs) couple to multiple signal transduction pathways, including the Avpr1b ( Thibonnier et al. 2001 )—this may be related to the number of receptors and signal transduction components expressed in a given cell. The Oxtr predominantly signals via G q/11 but is a promiscuous receptor in that it may signal through a variety of different α subunits ( Reversi et al. 2005 ; Chini and Manning 2007 ). The tissue distribution and physiological function of each receptor demonstrate Avp's primary function in fluid balance and homeostasis. The Avpr1a is predominantly found in vascular smooth muscle where it is involved in maintaining blood pressure via its classical pressor action ( Koshimizu et al. 2006 ). The renal Avpr2 is responsible for water resorption in kidney collecting ducts by promoting the translocation of aquaporin-2 channels to the plasma membrane ( Knepper 1997 ). The Avpr1b is primarily located in the anterior lobe corticotrophs of the pituitary gland ( Jard et al. 1986 ; Lolait et al. 1995 ). Avp in hypophysial portal blood acts on pituitary Avpr1bs to release adrenocorticotrophic hormone (ACTH) as part of the neuroendocrine response to stress ( Antoni 1993 ). The contraction of uterine smooth muscle during parturition and milk let-down during lactation are actions mediated by Oxt and the Oxtr. The distribution and function of Avp receptors is not limited to those sites mentioned above and all receptors except the Avpr2 are expressed in the brain. The central role each of these receptors plays has not yet been fully elucidated but the growing amount of data from pharmacological and knockout (KO) studies suggests some functional overlap (e.g. in modulating some social behaviour).
The availability of several specific ligands for these receptors has considerably aided in the characterisation of the Avpr1a, Avpr2 and Oxtr but, until recently, research on the Avpr1b has been hampered by a lack of Avpr1b-specific ligands. Avpr1b research has focussed primarily on the detection of receptor transcript levels (indirectly inferring receptor protein levels), genetic KO studies and experiments with the naturally Avp-deficient Brattleboro (di/di) rats. This review details recent pharmacological and KO data on the role of the Avpr1b in brain, pituitary and peripheral tissues with particular emphasis on its function in the hypothalamic–pituitary–adrenal (HPA) axis response to stress. | The distribution, pharmacology and function of the arginine vasopressin (Avp) 1b receptor subtype (Avpr1b) has proved more challenging to investigate compared to other members of the Avp receptor family. Avp is increasingly recognised as an important modulator of the hypothalamic–pituitary–adrenal (HPA) axis, an action mediated by the Avpr1b present on anterior pituitary corticotrophs. The Avpr1b is also expressed in some peripheral tissues including pancreas and adrenal, and in the hippocampus (HIP), paraventricular nucleus and olfactory bulb of the rodent brain where its function is unknown. The central distribution of Avpr1bs is far more restricted than that of the Avpr1a, the main Avp receptor subtype found in the brain. Whether Avpr1b expression in rodent tissues is dependent on differences in the length of microsatellite dinucleotide repeats present in the 5′ promoter region of the Avpr1b gene remains to be determined. One difficulty of functional studies on the Avpr1b, especially its involvement in the HPA axis response to stress, which prompted the generation of Avpr1b knockout (KO) mouse models, was the shortage of commercially available Avpr1b ligands, particularly antagonists. Research on mice lacking functional Avpr1bs has highlighted behavioural deficits in social memory and aggression. The Avpr1b KO also appears to be an excellent model to study the contribution of the Avpr1b in the HPA axis response to acute and perhaps some chronic (repeated) stressors where corticotrophin-releasing hormone and other genes involved in the HPA axis response to stress do not appear to compensate for the loss of the Avpr1b. | V1B receptor distribution
The Avpr1a and the Oxtr have been well characterised in the rodent brain and are thought to be the likely substrates for central actions of Avp and/or Oxt ( de Wied et al. 1993 ; Burbach et al. 1995 ; Barberis and Tribollet 1996 ). More recent data from KO animals suggest specific behavioural deficits in social memory and aggression are directly due to the absence of central Avpr1bs ( Wersinger et al. 2002 ). The search for central Avpr1bs has proved more elusive than that of the Avpr1as and Oxtrs as the shortage of specific ligands has prevented binding studies to visualise the Avpr1b protein. Nevertheless, a number of studies utilising immunohistochemistry and in situ hybridisation histochemistry (ISHH) have shown that the Avpr1b is expressed centrally, while reverse transcription-polymerase chain reaction (RT-PCR) and functional studies have demonstrated Avpr1bs in a number of peripheral tissues (e.g. Lolait et al. 1995 ; Vaccari et al. 1998 ; Hernando et al. 2001 ; O'Carroll et al. 2008 ).
Although the highest concentration of Avpr1bs is found within anterior pituitary corticotrophs ( Jard et al. 1986 ; Antoni 1993 ; Lolait et al. 1995 ), several studies suggest a wide central ( Barberis and Tribollet 1996 ; Vaccari et al. 1998 ; Hernando et al. 2001 ) and peripheral ( Lolait et al. 1995 ; Saito et al. 1995 ; Ventura et al. 1999 ; Oshikawa et al. 2004 ) distribution in rodents. Analysis of various brain regions and peripheral tissues suggests that Avpr1b transcript levels may be too low to be reliably detected by Northern blot analysis, so that detection often depends on RT-PCR to reveal possible Avpr1b expression ( Lolait et al. 1995 ). A recent distribution study using ISHH with probes directed to 5′ or 3′ untranslated regions of the Avpr1b mRNA details a more restricted pattern of Avpr1b mRNA in mouse brain than previously reported ( Young et al. 2006 ). The riboprobes used by Young and co-workers had low sequence identity to other Avp receptors to minimise cross-reactivity with related mRNA sequences. This study shows Avpr1b mRNA to be most prominent in the CA2 pyramidal neurons of the mouse and human HIP while receptor transcripts are also found in the paraventricular nucleus (PVN) and amygdala, albeit at a lower level. All of these studies infer receptor expression by determining mRNA transcript levels rather than receptor protein levels. However, in the absence of specific radiolabelled ligands of high specific activity (which may not provide detailed anatomical resolution), mRNA expression coupled with immunohistochemical techniques can accurately reflect receptor protein distribution and quantity.
The peripheral distribution of the Avpr1b is much more restricted than that of the Avpr1a, which is ubiquitously expressed (Oshikawa et al. 2004; Fujiwara et al. 2007a). Avpr1b mRNA has been detected by RT-PCR in the rodent pancreas (Saito et al. 1995; Ventura et al. 1999; Oshikawa et al. 2004), adrenal gland (Grazzini et al. 1996; Ventura et al. 1999; Oshikawa et al. 2004), spleen (Lolait et al. 1995; Oshikawa et al. 2004), kidney (Lolait et al. 1995; Saito et al. 1995), heart (Lolait et al. 1995; Saito et al. 1995), liver (Saito et al. 1995) and lung (Lolait et al. 1995; Saito et al. 1995). Additionally, the thymus (Lolait et al. 1995), colon (Ventura et al. 1999), small intestine, bladder (Saito et al. 1995), breast and uterus (Lolait et al. 1995) and white adipose tissue (Fujiwara et al. 2007a) reportedly contain Avpr1b mRNA; however, these findings have not been corroborated by all studies (e.g. only one out of five studies observed Avpr1b mRNA in rodent liver). In any event, the functional significance of the varying, low levels of Avpr1b mRNA detected by RT-PCR in whole brain or peripheral tissue samples is unknown. Disparities between laboratories may reflect methodological differences (e.g. in the detection of amplified PCR products) and/or strain differences. The mRNA-expressing tissues for which there appear to be strong functional correlates are the pancreas and the adrenal gland.
In the pancreas, Avp has been shown to act on Avpr1bs present in islets to stimulate the secretion of insulin from β cells, where it may act synergistically with corticotrophin-releasing hormone (Crh) (Oshikawa et al. 2004; O'Carroll et al. 2008). While Avp acts on the Avpr1b to decrease blood sugar levels through insulin release, it can also act in opposition to this by stimulating glucagon release (Yibchok-anun and Hsu 1998) and promoting hepatic glycogenolysis (Kirk et al. 1979). Importantly, Avp-mediated glycogenolysis acts via the Avpr1a subtype present in hepatic tissue (Morel et al. 1992), suggesting bifunctional but opposite roles of Avp in glucose homeostasis through two different receptor subtypes. Exactly which role Avp plays in regulating glucose balance at the pancreatic level depends on the local glucose concentration in this tissue (Abu-Basha et al. 2002). One study on a glucagon-secreting hamster α-pancreatic cell line suggests that Avp-induced glucagon secretion is mediated via the Avpr1b, since the Avpr1b antagonist SSR149415 potently antagonises Avp's effects in these cells (Folny et al. 2003). The specificity of SSR149415 has, however, been questioned, with evidence of activity in Chinese hamster ovary cell lines expressing recombinant human Oxtrs (Griffante et al. 2005; Hodgson et al. 2007). This may be an important consideration, as Oxtrs in addition to Avpr1bs are apparently present in pancreatic islets (Oshikawa et al. 2004) and have been shown to cause insulin/glucagon release (Jeng et al. 1996; Yibchok-anun et al. 1999). One laboratory that used agonists and antagonists of Avp/Oxt receptors (but notably none selective for Avpr1bs) in α-cell lines (Yibchok-anun and Hsu 1998) and perfused rat pancreas (Yibchok-anun et al. 1999) produced contrasting results, the latter study suggesting that glucagon secretion in response to Avp and Oxt is mediated through the activation of Avpr1bs and Oxtrs, respectively, rather than through any activity of Avp on Oxtrs. On the other hand, studies in Avpr1b KO mice suggest both Avp and Oxt can stimulate glucagon secretion through Oxtrs (Fujiwara et al. 2007b). Avp may play a role in the hypersecretion of glucagon from the pancreas of diabetics (Yibchok-anun et al. 2004), which is likely to involve the Avpr1b. Pancreatic β cells isolated from mice lacking functional Avpr1bs unsurprisingly display blunted insulin secretion (Oshikawa et al. 2004), reinforcing the role of Avpr1bs in this tissue. Interestingly, further studies with this Avpr1b KO line show an increased sensitivity of Avpr1b KO mice to the metabolic effects of insulin (Fujiwara et al. 2007a). Together with the discovery of Avpr1b mRNA by RT-PCR in white adipose tissue, this led Fujiwara and co-workers to suggest that disruption of insulin–adipocyte signalling may lead to altered glucose metabolism in Avpr1b KO mice. Whether this is due to a lack of Avpr1b influence in the pancreas, white adipose tissue, or both, is unclear.
Grazzini and co-workers demonstrated the presence of Avpr1b mRNA by RT-PCR in the medulla but not in the cortex of the rat adrenal gland. The cortex expresses Avpr1a transcripts primarily in the zona glomerulosa ( Guillon et al. 1995 ; Grazzini et al. 1996 ) where this receptor regulates steroid secretion in vitro ( Grazzini et al. 1998 ). These studies also show that Avp precursor mRNA and Avp peptide are present in the adrenal medulla suggesting Avp can be released within the tissue, possibly acting in an autocrine/paracrine manner to regulate adrenal function. Stimulation of the Avpr1b in the rat adrenal medulla causes catecholamine secretion ( Grazzini et al. 1998 ). The presence of Avpr1as in the adrenal cortex and Avpr1bs in the chromaffin cells of the medulla provides strong evidence of an independent modulatory role of each receptor in discrete regions of adrenal tissue. The presence, however, of both Avpr1s in the human ( Grazzini et al. 1999 ) adrenal medulla indicates a possible co-expression of Avpr1 receptors in some species. This suggests a possible overlap of function distinct from the roles already noted and may reflect the action of Avp originating from different sources (e.g. pituitary vs. local tissue release ( Gallo-Payet and Guillon 1998 )), although the medullary cell type that expresses Avpr1as has yet to be identified. Notably, the plasma catecholamine response to forced swimming and social isolation stress is attenuated in Avpr1b KO mice ( Itoh et al. 2006 ).
The central, pituitary and peripheral expression of the Avpr1b gene may be influenced by the activity of elements in the upstream Avpr1b promoter region. In vitro studies using cells transiently transfected with a rat Avpr1b gene promoter sequence have identified regulatory GAGA repeats that influence Avpr1b gene transcription (Volpi et al. 2002). This provides a possible mechanism of physiological Avpr1b gene regulation that may enable different levels of Avpr1b expression in different tissues or species. When we compared the microsatellite region in the 5′ Avpr1b promoter sequence of different mouse strains, a major difference in microsatellite length between the C57BL6J/OlaHsd strain and the Balb/cOlaHsd and 129S2/SvHsd strains was observed (see Figure 1A). Further analysis of the sequence revealed differences in the number of CT and CA repeats between strains (see Figure 1B) that may confer changes in promoter activity. In a reporter assay in COS-7 cells, the basal promoter activity of a Balb/c 5′ fragment is threefold greater than that of the C57BL/6 strain, confirming an increase in Avpr1b promoter activity with the "long" form of the microsatellite (see Figure 1B and C). The impact of microsatellite DNA sequences on receptor expression and behavioural phenotypes has been examined in studies of the effects of Avpr1a expression on social behaviour in voles. Affiliative behaviours such as pair bonding have been attributed to changes in Avpr1a expression patterns caused by microsatellite length variations in the 5′ Avpr1a regulatory region (Hammock and Young 2002; Hammock and Young 2005). Differences in Avpr1b protein expression that result from variations in gene promoter activity between mouse strains may be a contributing factor to varying susceptibility to stress. Several neurogenic, psychogenic and systemic stressors have been tested in different strains of mice, revealing a strain-dependent stress response (Anisman et al. 2001). Interestingly, C57BL/6ByJ mice display higher levels of plasma CORT as well as increases in stress-related behaviours compared to the Balb/cByJ strain, strengthening support for the Avpr1b's involvement in mouse stress susceptibility (Anisman et al. 2001). It is important to note, however, that similar 5′ microsatellite sequences are not present in the human Avpr1b gene.
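The strain comparison above comes down to measuring the length of uninterrupted dinucleotide runs in a promoter sequence. The following Python snippet is a minimal sketch of that counting step; the sequences are invented placeholders, not the published C57BL/6, Balb/c or 129S2 promoter fragments.

```python
import re

def longest_repeat(sequence, unit):
    """Length, in repeat units, of the longest uninterrupted run of `unit`."""
    runs = re.findall(f"(?:{unit})+", sequence.upper())
    return max((len(run) // len(unit) for run in runs), default=0)

# Hypothetical promoter fragments -- placeholders, not the real strain sequences.
fragments = {
    "short form (C57BL/6-like)": "GATTC" + "CT" * 9 + "AA" + "CA" * 6 + "GGC",
    "long form (Balb/c-like)": "GATTC" + "CT" * 21 + "AA" + "CA" * 15 + "GGC",
}

for label, seq in fragments.items():
    print(f"{label}: CT x {longest_repeat(seq, 'CT')}, CA x {longest_repeat(seq, 'CA')}")
```

On real data, the same counting step would be applied to the cloned 5′ fragments before relating repeat length to reporter activity.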
The relatively recent emergence of data from genetic KO studies and the development of promising pharmacological compounds have given the task of characterising the function of central and peripheral Avpr1bs renewed vigour. Avpr1b KO mice, together with the long-standing subject of Avp research, the Brattleboro rat, serve as robust systems with which to study the role of the Avpr1b and Avp in the HPA axis response to stress.
The HPA axis and stress
The complex homeostatic control that constantly acts to resist challenges and fluctuations in the internal environment that may threaten the survival of an organism is necessary for life. As a result of any deviation in conditions, a host of physiological and behavioural changes occur that allow an organism to adapt to such challenges and restore the homeostatic balance. One such neuroendocrine system that is activated in stressful circumstances is the HPA axis. The end product of HPA axis activation is an increase in circulating glucocorticoids that, together with other stress mediators, act on target cells to enable the organism to cope with the stress. Consequences of elevated glucocorticoids are widespread as cytosolic glucocorticoid receptors are present in most central and peripheral tissues. The most profound effects of elevated glucocorticoid levels are immunological and metabolic changes. Glucocorticoid secretion, in concert with catecholamine release due to rapidly activated sympatho-adrenomedullary system, prepares tissues for the physical “load” that may be required by the body as part of a coordinated response to manage or tackle the stress.
The HPA response to stress relies on several key mediators that fine-tune glucocorticoid release, which is dependent on a number of factors such as the nature of the stressor and the immunological state of the organism, tailoring the response to the specific stressor concerned. When subjected to a stressor, the challenge is perceived in brain regions appropriate to the nature of the stressor. Interpretation of these signals in relevant brain areas, such as various limbic or hindbrain regions, leads to activation (or inhibition) of the PVN ( Herman et al. 2003 ).
The parvocellular subdivision of the PVN (pPVN) is the most important site among several hypothalamic nuclei that regulate the HPA response, as this is where stress signals are integrated and adjusted before peripheral signalling is initiated. Once activated, pPVN projections that terminate in the external zone of the median eminence release Avp and Crh into the hypophysial portal blood (Antoni 1993). These two ACTH secretagogues act on Avpr1bs and Crh type 1 receptors (Crhr1; coupled to Gs and adenylate cyclase activation) present in pituitary corticotrophs, causing the release of ACTH into the peripheral blood system. ACTH, via melanocortin receptors present in zona fasciculata cells of the adrenal cortex, stimulates a rapid secretion of glucocorticoids into the peripheral blood supply. Glucocorticoids such as corticosterone (CORT, in rodents) and cortisol (in humans) in turn provide negative feedback control of the HPA axis via pituitary, pPVN and higher brain centre glucocorticoid receptors (e.g. HIP).
The dominance of Crh as the primary ACTH secretagogue is still the prevailing view; however, numerous direct and indirect neuronal inputs into the pPVN, as well as humoral influences from blood-borne or cerebrospinal fluid-borne stress signals, dynamically modulate the contribution of Crh to the stress response. The influence of Avp/Avpr1b may be of greater significance than that of Crh/Crhr1 in some stress circumstances, such as in response to some chronic (repeated) stressors (Ma et al. 1999) or to novel stressors superimposed on a repeated stress paradigm (Ma et al. 1997). Evidence of a switch from Crh-ergic to vasopressinergic pPVN drive in response to some chronic stressors reinforces the concept that each response is tailored to the specific stressor and that Avp may be an increasingly important mediator in these circumstances (Aguilera 1994; Ma et al. 1999; Aguilera et al. 2008). Moreover, studies of some specific acute stressors indicate that Avp may be preferentially released over Crh (e.g. insulin-induced hypoglycaemia (IIH) in rats (Plotsky et al. 1985), and ketamine anaesthesia and IIH in sheep (Engler et al. 1989)).
It is important to emphasise that in some species (e.g. sheep and horse) Avp rather than Crh appears to be the main ACTH secretagogue ( Engler et al. 1989 ; Alexander et al. 1997 ).
V1B receptor KO mice
With the generation of Avpr1b KO mice by us in 2002, attention was initially directed towards the deficits observed in cognitive and behavioural tests (Wersinger et al. 2002; Wersinger et al. 2004). It was noted that mice lacking functional Avpr1bs showed impairments in social recognition and reduced responses in some aggression paradigms, whereas other physiological and behavioural test responses were normal (Wersinger et al. 2002, 2004, 2007a). Findings from a second Avpr1b KO line generated by Tanoue and co-workers initially focussed on the disruption of the HPA axis (Tanoue et al. 2004): they reported that KO mice have lower circulating ACTH levels under basal and acute stress conditions. This is in contrast to basal measurements of HPA axis activity seen in our Avpr1b KO mouse colony, which maintains normal resting ACTH levels (Lolait et al. 2007a). Notwithstanding basal HPA axis differences, both Avpr1b KO lines have been used to generate a large body of consistent evidence supporting the involvement of the Avpr1b in stress and aggression, which is also largely consistent with in vivo Avpr1b antagonism with SSR149415 (Griebel et al. 2002; Blanchard et al. 2005; Stemmelin et al. 2005).
Reduced aggression in KO mice
Avp has been implicated as a moderator of several central behaviours that were initially thought, based on pharmacological profiles and due to its much higher prevalence in the brain, to be mediated by the Avpr1a. Experiments in rodents, particularly hamsters, with Avpr1a antagonists have consistently shown that Avpr1as facilitate some aggressive behaviour ( Ferris et al. 1997 , 2006 ; Caldwell and Albers 2004 ), although this is yet to be verified in Avpr1a KO mutants ( Wersinger et al. 2007b ). It is possible that the neural circuitry underlying aggression compensates for the loss of the Avpr1a in the global Avpr1a KO. In contrast, evidence of Avpr1b involvement in aggression comes from both pharmacological and KO data, e.g. antagonism of the Avpr1b with SSR149415 lowers the frequency and duration of aggressive behaviour in hamsters ( Blanchard et al. 2005 ) while Avpr1b KO mice display reduced attack number and longer attack latency compared to wild types. This latter observation has been further categorised as a deficit in the attack component of aggression, as defensive aggression remains intact in mutant animals ( Wersinger et al. 2002 , 2007a ). Furthermore, the reduced aggression phenotype persists when Avpr1b KO mice are crossed with a more aggressive wild-derived mouse strain, confirming that the reduced aggression observed is not simply a peculiarity of laboratory mouse strains ( Caldwell and Young 2009 ). The specific neural substrate(s) vital for Avpr1b's role in aggression is not known, nor is the possible interaction between Avpr1a and Avpr1b (or Oxt and the Oxtr for that matter—see, Winslow et al. 2000 ) in vivo. It should be noted that the central distribution of Avpr1a- and Oxtr-binding sites and Avpr1b mRNA expression are clearly distinct but may overlap in some brain regions (e.g. olfactory system) (see, Table III in Beery et al. 2008 ; Caldwell et al. 2008a ).
The changes in aggression, as well as differences in social motivation ( Wersinger et al. 2004 ) or social memory evident in Avpr1b KO mice may be due to deficits in the processing of accessory olfactory stimuli that are needed to evoke such behaviours ( Caldwell et al. 2008b ). It is suggested that the role of central Avpr1bs may be to couple socially relevant cues detected in the accessory olfactory system to the appropriate behavioural response ( Caldwell et al. 2008b ). Intriguingly, both Avpr1a and Avpr1b genes are expressed in the forebrain olfactory system ( Ostrowski et al. 1994 ; Hernando et al. 2001 ). Evidence of pyramidal CA2 Avpr1bs ( Hernando et al. 2001 ; Young et al. 2006 ) also suggests a relationship between the deficits in social memory and the uncoupling of social cues from the accessory olfactory system and the formation or recall of relevant memories ( Caldwell et al. 2008b ). Based on studies in Avpr1b KO animals, the central Avpr1b may also be involved in a number of other behaviours (summarised in Table I ). Prepulse inhibition of the startle reflex is attenuated in Avpr1b KO mice ( Egashira et al. 2005 ) suggesting that this mouse may be a suitable model to investigate sensorimotor gating. In contrast, no major changes in anxiety-like or depression-like behaviours are observed in Avpr1b KO animals ( Wersinger et al. 2002 ; Egashira et al. 2005 ; Caldwell et al. 2010 ). These results conspicuously differ from some of those obtained in Brattleboro rats ( Mlynarik et al. 2007 ) or with Avpr1b antagonist administration, mainly in rats (see below), and strongly suggest that the Avpr1b KO mouse is not an appropriate model for examining stress-induced anxiety or depression (e.g. see Kalueff et al. 2007 ).
The stress response in KO mice
The impact of Avp on ACTH release is often regarded as ancillary to that of Crh, as Avp alone is a weak ACTH secretagogue but acts synergistically with Crh to facilitate ACTH secretion (Gillies et al. 1982; Rivier and Vale 1983; Antoni 1993). Not only do the corticotroph Avpr1b- and Crh-signalling pathways converge to increase ACTH release (Abou-Samra et al. 1986), the two receptors may also physically heterodimerise (Young et al. 2007). We, and others, have subjected adult Avpr1b KO animals to a number of acute and chronic (repeated) stressors of varying severity and nature (Tanoue et al. 2004; Itoh et al. 2006; Lolait et al. 2007a, b; Stewart et al. 2008a, b). These studies clearly demonstrate that a functional Avpr1b is essential for mounting a normal HPA response, as manifested by increased plasma ACTH levels, to most acute stressors (summarised in Table II). The one exception is the response to "severe" restraint, where there is no difference in either plasma ACTH or CORT levels between Avpr1b KO and wild-type mice (Lolait et al. 2007a). Studies in "severely" restrained Brattleboro rats reveal a similar picture (Zelena et al. 2004). It is likely that the restraint procedure employed was sufficiently stressful to override any contribution from Avp (acting via the Avpr1b); for example, ACTH secretion may be entirely dependent on Crh acting alone or in concert with other "minor" ACTH secretagogues such as angiotensin II, vasoactive intestinal peptide or serotonin (Carrasco and Van de Kar 2003). When comparing the reduced HPA axis responses to other acute stressors in Avpr1b KO mice, we consistently find a decreased ACTH response but not always a correspondingly lower CORT level. The stressors for which both ACTH and CORT responses, the ACTH response alone, or neither response is reduced in Avpr1b KO mice do not fall into any existing classification of stress. This work is in agreement with experiments in Brattleboro rats, which suggest that the magnitude of the Avp contribution depends on the context of the stressor (Zelena et al. 2009). As shown in Table II, we find that some acute and chronic stressors are not influenced by the loss of the Avpr1b (e.g. acute severe restraint), some show a reduced ACTH response only (e.g. acute forced swimming stress) and some show both reduced ACTH and CORT responses in Avpr1b KOs compared to wild types (e.g. acute and repeated novel environment stress). Thus, how Avp influences the HPA axis response to stress could provide a further basis for classifying stressors.
Our studies in Avpr1b KO mice (Lolait et al. 2007a, b; Stewart et al. 2008a, b) and similar studies on acute stress in Brattleboro rats (Domokos et al. 2008; Zelena et al. 2009) often note a disparity between stress-induced ACTH and CORT release in Avp/Avpr1b-deficient animals. It is important to recognise that the CORT response to ACTH saturates at low circulating ACTH levels (Dallman et al. 2002). In addition, plasma hormone levels were measured at only one time point in our studies, so any incremental change in CORT levels may have been missed. Furthermore, we have assumed that the dynamics of stress-induced Avp, Crh and/or other ACTH secretagogues are similar in Avpr1b KO and wild-type mice. The observation that reduced stress-induced ACTH release in Avpr1b KO mice is not always followed by a proportional CORT attenuation suggests that CORT may be released independently of ACTH in some circumstances. Interestingly, this pattern of ACTH/CORT profiles in response to acute stress also appears to be present in neonatal Brattleboro rats (Zelena et al. 2008). Discrepancies between plasma stress hormone levels appear stressor specific, suggesting that a particular stressor may possess specific characteristics that activate a number of pathways to promote CORT release, e.g. via direct splanchnic or other neural innervation of the adrenal cortex or medulla (Ehrhart-Bornstein et al. 1998). Alternatively, adrenal sensitivity to locally synthesised or humoral factors may be altered, permitting CORT release even when low levels of circulating ACTH are present. Adrenal hypersensitivity and ACTH-independent pathways of CORT release would likely bypass the normal feedback controls that govern ACTH release from the pituitary (for review see Bornstein et al. 2008).
As mentioned above, chronic (repeated) stress has often been associated with an alteration in the control of ACTH release from the predominantly Crh-mediated drive seen in acute stress to an Avp-mediated drive speculated to maintain ACTH levels during adaptation to repeated stress (Harbuz and Lightman 1992; Aguilera 1994; Ma et al. 1997; Aguilera and Rabadan-Diehl 2000). Adaptation to repeated stress (i.e. lower ACTH and/or CORT following repeated stress compared to a single episode of acute stress) is species and stressor specific and is not always observed (Marti and Armario 1998; Armario et al. 2004). The apparent flip in control of ACTH release from Crh to Avp that is seen with some repeated stressors in rats (e.g. restraint) has made Avp and the Avpr1b attractive targets for pharmacological intervention in conditions of repeated or chronic stress, although it should be noted that Avp does not play a role in HPA axis responses to all chronic stressors (e.g. chronic morphine injection (Domokos et al. 2008)). The preferential expression of Avp in the pPVN in adaptation to chronic stress observed in rats is associated with an upregulation of Avpr1bs (but not Crh receptors) in the anterior pituitary gland (Aguilera 1994). Chronic stress from repeated (daily) exposure to an acute stressor can lead to ACTH hyperresponsiveness to a single, novel stressor episode; this may involve increases in PVN Avp (Ma et al. 1999) and pituitary Avpr1b expression, together with pituitary ACTH hyperresponsiveness (Aguilera 1994). However, Avp/Avpr1b does not appear to be responsible for HPA axis hypersensitivity to novel stressors (Chen et al. 2008). The mechanism(s) by which hypothalamic Avp and pituitary Avpr1b responsiveness is maintained during stress adaptation suggests the existence of numerous transcriptional and translational regulatory components involved in PVN and pituitary plasticity that dynamically alter Avp and Avpr1b levels according to demand (Volpi et al. 2004a, b). One view is that Avp/Avpr1b may alter corticotroph proliferation and pituitary remodelling during prolonged activation of the HPA axis (Subburaju and Aguilera 2007). However, studies of repeated stress in Avpr1b KO mice and Brattleboro rats suggest that the role of the Avpr1b and its cognate ligand in the adaptation of ACTH/CORT levels to chronic stress may not be as convincing as first thought.
For the chronic (repeated) stressors tested in Avpr1b KO mice, there is a reduction in the ACTH response to a final acute stress following 10–14 days of stress repeated once daily. The responses of Avpr1b KO and wild-type animals exposed to repeated stress are summarised in Table II. These studies have a number of salient features. Firstly, as observed in male Brattleboro rats subjected to repeated restraint (Zelena et al. 2004), the reduction in the ACTH response following repeated stress in Avpr1b KO animals is often not accompanied by a similar reduction in CORT responses; this mirrors what we have observed in these animals' responses to acute stress (see above). Secondly, with the exception of the ACTH response to repeated, severe restraint, the ACTH and CORT responses to acute or repeated stress are of equivalent magnitude. This suggests that the Avpr1b participates in the fast ACTH secretion (as seen in the response to acute stress) in repeatedly stressed mice. And finally, of all the repeated stressors studied in wild-type mice from our Avpr1b KO colony, adaptation in ACTH responses was seen only with repeated exposure to shaker stress (SS) (Figure 2). No adaptation in CORT responses was observed in Avpr1b KO or wild-type mice following any repeated stress paradigm. There is a robust plasma ACTH and CORT increase in response to a single, acute SS episode in wild-type mice; however, after 10 days of repeated SS, the ACTH response is reduced in these animals (see Figure 2, graph A: plasma ACTH levels, wild-type acute stress response vs. wild-type chronic stress response). The acute ACTH response in Avpr1b KO mice, while reduced from that seen in wild-type animals, is the same for repeated SS. Furthermore, the ACTH response to repeated SS in Avpr1b KO mice is not different from that seen in wild-type mice. It is tempting to speculate that there is a loss of adaptation to repeated SS in Avpr1b KOs, but such a conclusion would be tenuous, since ACTH levels in acutely stressed Avpr1b KO mice are already very low. The lack of a functional Avpr1b in the KO mice has such a profound effect on the ACTH response to SS that any non-Avpr1b-mediated adaptation to repeated SS is probably negligible. The studies in Avpr1b KO mice and Brattleboro rats highlight the discrepancy between Avp/Avpr1b-mediated ACTH and CORT secretion during acute and repeated stress; this may have implications for the potential use of Avpr1b antagonists to ameliorate symptoms of HPA axis hyperactivity in stress-related disorders.
Gene expression in Avpr1b KO mice
Whether the deficits observed in Avpr1b KO mice outlined above are a direct result of disruption of Avpr1b signalling pathways or the result of an altered compensatory expression profile is not known. As far as HPA axis function is concerned, clearly Crh (or Oxt) does not compensate for the loss of the Avpr1b in Avpr1b KO mice. There are some direct changes that occur in KO mice that give rise to phenotypes such as altered glucose metabolism and attenuated ACTH release; however, indirect changes that affect behavioural systems that may lead to the deficits seen in Avpr1b KOs have yet to be identified. We have used ISHH to assess basal gene transcript levels of a number of genes that are closely linked with HPA axis function and find no significant differences in gene expression between Avpr1b KO and wild-type mice ( Figure 3 ). Comparisons of Oxtr mRNA levels in Avpr1b KO and wild-type mouse anterior pituitaries appear to suggest an upregulation of Oxtrs in Avpr1b KO mice ( Nakamura et al. 2008 ). As Oxt at high concentrations can elicit ACTH release via the Avpr1b, and the Oxtr may be expressed in corticotrophs, it has been suggested that increased expression of Oxtrs may be a compensatory mechanism through which Avpr1b KO mice and Brattleboro rats can, to some degree, make up for the lack of Avpr1b/Avp-mediated ACTH release ( Nakamura et al. 2008 ). We cannot totally rule out the possibility that mechanisms compensating for the loss of Avpr1b are active in the Avpr1b KO. Changes in neurochemical networks active centrally (e.g. projections to the PVN or signals within the PVN itself) or at the level of the anterior pituitary and adrenal may have occurred. We also cannot exclude a role of central Avpr1bs (or for that matter Avpr1as) in directly or indirectly influencing HPA axis activity. Nevertheless, stress-induced ACTH levels in our Avpr1b KOs are consistently lower (often to basal levels) than wild-type controls, confirming any compensation (e.g. from the action of Crh) is not sufficient to fully counteract the loss of the Avpr1b ( Lolait et al. 2007a , b ; Stewart et al. 2008a , b ). The role of potential Oxtr-mediated ACTH release in Avpr1b KO mice and Brattleboro rats remains to be clarified, however, as the Oxtr is predominantly located in lactotrophs rather than corticotrophs in the anterior pituitary ( Breton et al. 1995 ) it is likely that the Oxtr plays a minor role compared to the Avpr1b in wild-type animals. The use of conditional KOs and emerging pharmacological developments will no doubt help clarify the contribution of the pituitary Oxtr in acute and chronic stress.
Emerging pharmacological data
A number of agonists (such as desmopressin, an Avpr2 agonist) and antagonists (such as relcovaptan, an Avpr1a antagonist) are available and frequently used to study the Avpr1a, Avpr2 and Oxtr (for review see Lemmens-Gruber and Kamyar 2006; Manning et al. 2008). Some of these are in clinical trials or have been approved for use as treatments in disorders such as nocturnal enuresis and neurogenic diabetes insipidus. The search for Avpr1b pharmacological tools has gained much impetus owing to their potential as treatments for conditions associated with chronic stress, such as anxiety and depression (for short reviews see Griebel et al. 2005; Arban 2007). The most widely used non-peptide Avpr1b antagonist, SSR149415, has been used extensively in research since its characterisation in 2002 (Serradeil-Le Gal et al. 2002) (see Tables III and IV). SSR149415 is orally active and inhibits some but not all acute stressor-induced ACTH release in rats (Serradeil-Le Gal et al. 2002; Ramos et al. 2006), and does not affect HPA hypersensitivity to novel stressors (Chen et al. 2008). The compound acts at the human Avpr1b and also has some antagonist activity at the human Oxtr in vitro (Griffante et al. 2005) but has high selectivity and nanomolar affinity for rodent forms of the Avpr1b (Serradeil-Le Gal et al. 2002). In mice and rats, SSR149415 has been tested in a variety of classical models of anxiety (e.g. elevated plus maze, light/dark box test) and depression (e.g. forced swim test, chronic mild stress) as well as in other models (e.g. olfactory bulbectomy, Flinders sensitive line) that are used to determine the efficacy of potential antidepressant and anxiolytic drugs (Griebel et al. 2002; Overstreet and Griebel 2005; Stemmelin et al. 2005; Louis et al. 2006; Salomé et al. 2006; Shimazaki et al. 2006; Hodgson et al. 2007; Iijima and Chaki 2007). Peripheral and central pretreatment with SSR149415 reduces anxiety- and depression-related behaviour in these tests, with high compatibility between the findings of these studies. SSR149415 also reduces aggression in hamsters (Blanchard et al. 2005), significantly reverses the reduction in dentate gyrus cell proliferation caused by chronic mild stress in mice (Alonso et al. 2004) and blocks stress-induced hyperalgesia in rats (Bradesi et al. 2009). It has also been radiolabelled with tritium and used in receptor autoradiography to reveal low-resolution binding in the human and rat pituitary; no Avpr1b binding sites were observed in sections of rat brain (Serradeil-Le Gal et al. 2007). Recently, SSR149415 failed phase II clinical trials (Kirchhoff et al. 2009). Overall, the results of studies with SSR149415 provide evidence of a possible role for the Avpr1b in affective disorders and point to animal-model-validated targets with which to treat them.
Around the same time that SSR149415 was first reported as an Avpr1b antagonist, the Avp peptidomimetic [1-deamino-4-cyclohexylalanine]arginine vasopressin (d[Cha4]Avp) was described (Derick et al. 2002). This agonist was the first to show efficacy at nanomolar concentrations and to stimulate the release of ACTH/CORT without exhibiting vascular or renal activity (Derick et al. 2002). Other peptide agonists selective for the rat Avpr1b have been generated by modifying positions 4 and 8 of the Avp analogue deamino-[Cys]arginine vasopressin (Pena et al. 2007a). Many of these modified Avp analogues display high selectivity for the Avpr1b and bind with sub-nanomolar affinities (Pena et al. 2007a), and thus could well be useful in the study of Avp receptors in rodents; however, they may be of limited use as human therapeutics owing to their peptidergic nature. The agonists created since d[Cha4]Avp do, however, have an increasingly refined agonist profile. One member of this recent range of modified Avp analogue agonists, d[Leu4,Lys8]Avp, is noted to be a full agonist at human, rat and mouse Avpr1bs in vitro and stimulates ACTH and insulin release at low doses from mouse pituitary and perfused rat pancreas, respectively (Pena et al. 2007b). This effect of d[Leu4,Lys8]Avp on mouse and rat tissue is blocked when it is co-administered with SSR149415 (Pena et al. 2007b).
Since the development of SSR149415, there have been reports of a non-peptide antagonist (Org) that is highly selective for the human and rat Avpr1b (Craighead et al. 2008). Pretreatment of rats with this compound causes a significant reduction in ACTH release after restraint stress or lipopolysaccharide (LPS) challenge (Spiga et al. 2009a) and after a heterotypic stressor following repeated restraint (Spiga et al. 2009b) (see Table III). However, Org does not affect repeated restraint stress-induced ACTH/CORT adaptation in rats (Spiga et al. 2009a). Another set of Avpr1b antagonists (ABT-436 and ABT-558) have subnanomolar affinity for the human Avpr1b, with approximately 30-fold lower affinity for rat and mouse Avpr1bs (Wernet et al. 2008). These compounds attenuate acute restraint stress-induced ACTH release (Behl et al. 2008) and appear to have increased anxiolytic-like and antidepressant-like potency and efficacy compared to SSR149415 (van Gaalen et al. 2008). More recently, other non-peptide antagonists have been described: "p", a tetrahydroquinoline sulphonamide derivative with high selectivity for the rat and human Avpr1b (Ki values of approximately 21 nM and 44 nM, respectively) (Scott et al. 2009), and compounds generated from a series of pyrrole-pyrazinone and pyrazole-pyrazinone derivatives, which also appear to show good selectivity and high potency (e.g. compound 11, pIC50 = 8.4) for the human Avpr1b expressed in Chinese hamster ovary cells (Arban et al. 2010); to our knowledge, the effects of these compounds on HPA axis activity or behaviour have not been reported to date.
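The potency figures above mix two conventions: Ki values quoted in nanomolar units, and pIC50, the negative log10 of the IC50 expressed in molar units. The short sketch below, which assumes nothing beyond those standard definitions, interconverts the quoted numbers.

```python
import math

def p_value(conc_nM):
    """Convert a concentration in nM to its -log10(molar) 'p' value."""
    return -math.log10(conc_nM * 1e-9)

def to_nM(p):
    """Convert a pIC50/pKi value back to a concentration in nM."""
    return 10 ** (-p) / 1e-9

print(f"Ki 21 nM  -> pKi  {p_value(21):.2f}")     # ~7.68
print(f"Ki 44 nM  -> pKi  {p_value(44):.2f}")     # ~7.36
print(f"pIC50 8.4 -> IC50 {to_nM(8.4):.1f} nM")   # ~4.0 nM
```

A pIC50 of 8.4 therefore corresponds to an IC50 of roughly 4 nM, broadly the same order of potency as the Ki values quoted above, although IC50 and Ki are not strictly interchangeable without a Cheng-Prusoff correction.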
In conclusion, the distribution and functional studies of the Avpr1b have established its major role in the pituitary, where it plays a pivotal part in the regulation of the HPA response to stress, and in particular to acute stress. Additionally, there are several, perhaps minor, metabolic and endocrine roles in the periphery. Behavioural changes observed in experiments in Avpr1b KO animals, together with recent Avpr1b antagonist data, have highlighted a more elusive central role for this receptor. The behavioural implications of Avp, acting via the Avpr1b, in aggression and stress, and the integral connection between stress, anxiety and depression, make the Avpr1b an attractive target for pharmacological intervention. Increased Avp secretion and enhanced pituitary responsiveness to Avp have been reported in some subtypes of depression (e.g. melancholic depression) (see Dinan and Scott 2005 for a review). Furthermore, polymorphisms in the Avpr1b gene have been associated with depression (van West et al. 2004), childhood-onset mood disorder (Dempster et al. 2007) and attention-deficit hyperactivity disorder (van West et al. 2009). The progress made in generating compounds selective for this receptor may have considerable implications for potential treatments for a number of disease states, as well as for Avp research in general. The development of new compounds that can be radiolabelled to high specific activity is a critical step in Avpr1b research, as determination of its central distribution may well provide an anatomical template for assigning how changes in behaviours or disease states are influenced. Further developments in molecular tools (e.g. conditional Avpr1b KOs; the use of Avpr1b-specific small-interfering RNAs to selectively silence Avpr1b gene expression in specific brain regions) and in pharmacological tools for use in rodents and primates (e.g. positron emission tomography (PET) ligands; see Schonberger et al. 2010) will help elucidate the full function of the Avpr1b and thus the therapeutic value that research into this receptor may hold.

J.A.R received funding from the Neuroendocrinology Charitable Trust (UK) and the Schering-Plough Corporation; A.M.O'C received funding from the Wellcome Trust and BBSRC. This research was supported by the NIMH Intramural Research Program (Z01-MH-002498-21). S.J.L received funding from the Wellcome Trust.
Declaration of interest
J.A.R and S.J.L have used the Avpr1b antagonist Org (supplied by the Schering-Plough Corporation) in unpublished studies in mice. The authors alone are responsible for the content and writing of the paper.
CC BY | no | 2022-01-12 15:21:54 | Stress. 2011 Jan 10; 14(1):98-115 | oa_package/e6/2a/PMC3016603.tar.gz
PMC3016617 | 21221212

The National Health Personnel Licensing Examination Board (NHPLEB) of the Republic of Korea completed the clinical skill test for the Medical Licensing Examination in 2009 and 2010, introduced for the first time in Asia. This was only possible due to the great amount of time devoted by the medical professors and the staff of the NHPLEB. To improve the evaluation of what examinees can do instead of only what they know, a clinical skill test should also be administered in other health professional fields. Recently, the NHPLEB has dealt with 160,000 examinees, including licensed practical nurses, caregivers, and certified health education specialists. The NHPLEB will apply for ISO 9001 certification in 2011. Also, the work of the ad hoc team for computer-based testing will be extended in 2011. On December 14, 2010, during a meeting with the President of the Association for Medical Education in the Western Pacific Region (AMEWPR), Dr. Duck-Sun Ahn, it was agreed that the Journal of Educational Evaluation for Health Professions will become an official journal of the AMEWPR beginning in July 2011, after going through a scheduled process. Workshops on how to develop test items will continue, specific to each field, so that the pool of examiners can be expanded. The year 2012 is the 20th anniversary of the NHPLEB, so preparations for the international conference to celebrate the 20th anniversary will begin in 2011. I appreciate all readers and the health personnel educators for their support of the NHPLEB in 2010. I will persevere in my effort to develop the National Health Personnel Licensing Examination in every way in 2011.
Happy New Year!

CC BY | no | 2022-01-12 17:45:04 | J Educ Eval Health Prof. 2011 Jan 3; 8:1 | oa_package/03/d6/PMC3016617.tar.gz
PMC3016691 | 21234154

"Love and work are the cornerstones of our humanness." Freud was perhaps one of the first to recognize the connection between work and mental health. Since his time (1856–1939), research evidence has accumulated of an important link between a person's mental wellbeing and productivity. Unfortunately, the worldwide rising trend in mental ill health is alarming. Mental illness and addiction rank first and second in terms of causing disability in Canada, the United States and Western Europe when compared to all other diseases (e.g., cancer and heart disease). Mental health disability is the primary issue in 60–65% of disability insurance claims in Canada (Stewart, Matousek and Verdon, 2003). In India, however, the insurance sector has not addressed this disability.
A healthy workplace leads to good morale and high motivation. There are also linkages between mental health and physical conditions: depression, for example, is linked to chronic physical conditions such as asthma, diabetes and hypertension. Job stress can triple an employee's risk of disability associated with mental illness, anxiety, substance use, back pain, injuries and infections.
In any organization, the workforce is its biggest asset. Without a mentally healthy workplace, the team will experience low morale; people will become cynical, stressed and anxious; physical health problems will increase; sickness levels will rise; productivity will drop; and the organizational climate will be affected. A mentally healthy workplace is one that is seen as a happy and friendly place to work, has high productivity and efficiency, and is open to discussions about mental health issues.
Close investigation of the micro and macro levels of organizational climate indicates that the process of illness begins at the micro level. Lewinsohn et al. (1980) observed that the pathway from stressors to illness is moderated by personal social skills and that social skills are important for mental health. In Japan, Takahashi (2003) suggested that social skills are important for carrying out actual health behaviors. Bandura (1982) also found that the mental health status of employees is influenced by self-efficacy, self-management skills, and communication with superiors. Hence, the role of individual factors cannot be overemphasized.
The self-management skills proposed by Takahashi, Kinoshita, Masui and Nakamura (1999) are considered the basis of coping strategies because they include collecting the information needed to carry out tasks, identifying core problems, and planning tasks at a feasible pace. Irie et al. (1997) found that difficulty in dealing with stress and negative, dysfunctional coping strategies were related to poorer mental health in Japanese workers.
Bartel and Taubman (1979) were among the first to examine the relationship between mental health and labor market behavior. They explored the relationship between several diseases (including mental disorders) and individual earnings, wages, weekly hours worked, the probability of being out of the labor force, and the probability of being unemployed, using a sample drawn from a twins' panel maintained by the National Academy of Science–National Research Council (NAS–NRC). Bartel and Taubman (1986) found that individuals diagnosed as either psychotic or neurotic had lower earnings, wages and weekly hours worked, and a greater probability of being out of the labor force. These results suggest that mental illness has a substantial impact in the labor market. Research questions in this area have examined the relationship between mental illness and labor market variables.
French and Zarkin (1998) carried out a study to explore the relationship between symptoms of emotional and psychological problems and employee absenteeism and earnings at a large US worksite. The study examined the effects of emotional/psychological symptoms on two important labor market variables: absenteeism and earnings. Several specifications of the absenteeism and earnings equations were estimated to test the independent effect of emotional symptoms and the joint effects of emotional symptoms and other co-morbid conditions. The results suggest that employers should consider the productivity losses associated with workers' mental health when designing worksite-based programs such as employee assistance programs (EAPs). Hence, productivity is related to the mental health of employees.
There is a negative relationship between absenteeism and mental health. In extreme cases, long-term stress or traumatic events at work may lead to psychological problems and be conducive to psychiatric disorders, resulting in absence from work and preventing the worker from being able to work again. When under stress, people find it difficult to maintain a healthy balance between work and non-work life. The experience of work stress is a challenge to the health and safety of workers and to the healthiness of their organizations. A related term that has been identified and studied is "presenteeism", defined as the lost productivity that occurs when employees come to work but perform below par due to any kind of illness. While the costs associated with employee absenteeism have long been studied, the costs of presenteeism are only newly being studied. The cost of absenteeism is obvious: 100% of the worker's productivity is lost each day the worker is not on the job. The cost of presenteeism is more "hidden" because the worker is on the job but is not accomplishing as much (Lovell, 2004).
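A crude way to see why the cost of presenteeism stays hidden is to express both losses in the same wage units, as the sketch below does. Every figure in it (daily wage, day counts, impairment fraction) is hypothetical and chosen purely for illustration; none is drawn from Lovell's estimates.

```python
def lost_productivity(daily_wage, absent_days, impaired_days, impairment):
    """Output lost to absenteeism vs. presenteeism, valued in wage terms.

    Absenteeism: 100% of a day's output is lost for each day absent.
    Presenteeism: only a fraction (`impairment`) of each impaired day is lost.
    """
    absenteeism_cost = daily_wage * absent_days
    presenteeism_cost = daily_wage * impaired_days * impairment
    return absenteeism_cost, presenteeism_cost

# Hypothetical employee: 5 days absent, 30 impaired days at 25% reduced output.
absent, present = lost_productivity(daily_wage=100.0, absent_days=5,
                                    impaired_days=30, impairment=0.25)
print(f"absenteeism cost: {absent:.0f}, presenteeism cost: {present:.0f}")  # 500 vs 750
```

Even a modest impairment, spread over enough working days, can exceed the fully visible cost of a few days of absence, which is the accounting point the presenteeism literature makes.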
However, similar evidence in the Indian scenario is lacking. Some strategies have been proposed to counter the ill effects of poor mental health; among these are the Mental Health First Aid techniques. "Mental Health First Aid" comprises health-enhancing, solution-based strategies for coping with stress at the workplace, originally developed by the Centre for Mental Health Research at the Australian National University. Mental health awareness is the core concept of Mental Health First Aid, and the approach has been implemented in countries such as England and Australia. Mental Health First Aid refers to understanding how to preserve life where persons may be a danger to themselves or others, and to providing help to prevent a mental health problem from developing into a more serious state. Its objectives include raising awareness of mental health issues in the community, promoting the recovery of good mental health, providing comfort to a person experiencing a mental health problem, and reducing stigma and discrimination.
Mental Health First Aid has been adapted and regulated by the National Institute for Mental Health in England (NIMHE) and England's Care Services. Increasing awareness of mental health in the Indian scenario will help managers and the executive cadre to identify signs of mental ill health in employees and to intervene appropriately. Hence, greater awareness of mental health issues is essential for effective management and a productive organizational environment.

CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):1-2 | oa_package/2d/12/PMC3016691.tar.gz
PMC3016692 | 21234156

MATERIALS AND METHODS
The study was carried out in a medical college teaching hospital located in the industrial township of Pimpri, Pune. Patients admitted for surgery during a two-month period were the subjects of the study. A cross-sectional study design was used. Ethical clearance was obtained from the institutional committee.
Inclusion criteria
Patients who were undergoing surgery during this period and were willing to participate in the study were included.
Exclusion criteria
All unconscious patients, patients with psychiatric disorders or on psychoactive drugs were excluded.
Consent
After the selection of patients, informed consent was obtained.
Collection of data
Those who agreed to take part in the study were asked to fill in a pretested questionnaire the day before surgery, prior to the pre-anesthetic evaluation. The questionnaire covered the detailed demographic, socioeconomic and health status of the patient as well as questions relating to anxiety. Doctor-patient communication was assessed by ascertaining the patient's knowledge regarding the surgical procedure, satisfaction with the information received, the response to queries raised by the patient, and trust in the treating physician. The level of anxiety among the patients was assessed using the Hospital Anxiety and Depression (HAD) scale (Hicks and Jenkins, 1988). This includes seven questions, each with four possible answers. As originally described, the HAD scale had 14 questions, seven scoring anxiety and seven scoring depression; we omitted the questions relating to depression. The scale has high sensitivity and specificity (Goldberg, 1985) and has been used successfully in assessing anxiety in medically compromised patients (Tang et al., 2008). Interpretation of anxiety using the HAD scale: each answer to the multiple-choice questions carried 0 to 3 points, so the possible score ranged from 0 to 21. The interpretation was as follows: a score of ≤ 7 meant that anxiety was not present, a score of 8-10 indicated the doubtful presence of anxiety, and a score of ≥ 11 indicated that anxiety was definitely present.
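Since the scoring rule is fully specified above, a brief sketch makes the arithmetic concrete. This is a minimal Python rendering of the anxiety-subscale scoring as described (seven items scored 0-3, with the cut-offs used in this study); the example item scores are invented.

```python
def hads_anxiety(item_scores):
    """Score the seven HADS anxiety items (each 0-3) and apply the
    cut-offs used in this study: <=7 absent, 8-10 doubtful, >=11 present."""
    if len(item_scores) != 7 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("expected seven item scores, each in the range 0-3")
    total = sum(item_scores)  # possible range 0-21
    if total <= 7:
        return total, "anxiety absent"
    if total <= 10:
        return total, "doubtful"
    return total, "anxiety present"

print(hads_anxiety([2, 1, 3, 2, 1, 2, 1]))  # (12, 'anxiety present')
```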
Statistical analysis
Statistical analysis was carried out using EPI INFO 2002.

RESULTS
Out of 79 patients, 50 (63.2%) reported anxiety scores of ≤ 7 on the HAD scale, eight (10.1%) scored between 8 and 10, and 21 (26.5%) fell in the category of ≥ 11.
Knowledge about surgical procedure and anxiety levels
Patients who were well informed about the surgical procedure in advance had significantly less preoperative anxiety than those unaware of the procedure [ Table 1 ].
Satisfaction about the information given and anxiety levels
Similarly, those who were better satisfied with the information given by the physician had significantly lower anxiety levels [ Table 2 ].
Association between queries answered by the doctor and anxiety levels
It will be seen from Table 3 that when the doctor answered the queries to the patient’s satisfaction, the anxiety was significantly less.
Association between trust in the doctor and anxiety levels
Similarly, those patients who had trust in their doctor suffered significantly less from preoperative anxiety [ Table 4 ].
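The associations reported in Tables 1-4 were tested with chi-square tests run in EPI INFO 2002; the cell counts themselves are not reproduced in this extract. As an illustration only, the sketch below computes the statistic by hand for an invented 2x2 table of "informed about the procedure" against "definite anxiety (HAD score >= 11)". The counts are placeholders chosen so that the margins match the reported totals (79 patients, 21 with definite anxiety); they are not the study's data.

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented counts: rows = informed / not informed; columns = anxious / not anxious.
table = [[5, 40],
         [16, 18]]
print(f"chi-square = {chi_square(table):.2f}")  # ~12.8 for these placeholder counts
```

A statistic above the critical value of 3.84 (df = 1, alpha = 0.05) would be reported as a significant association, which is the form of inference behind each of the four tables.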
DISCUSSION

With the advancement of technology and expertise in the field of medicine, we are now able to answer many previously unsolved questions, and in the midst of these answers we face new ones. In this era of many new surgical interventions, when even space cannot bind man, the mere thought of going through the unknown may provoke intense worry and anxiety. Preoperative anxiety may go unnoticed in an environment that stresses increased productivity. However, preoperative anxiety has been reported to be associated with poor psychosocial outcome after surgery (Chaudhury et al., 2006). Preoperative anxiety is also related to postoperative pain. In a study of 80 patients aged 18 to 70 years undergoing laparoscopic cholecystectomy, postoperative pain intensity was primarily predicted by sex, with an additional role of depression and anxiety (De Cosmo et al., 2008). Mitchinson et al. (2008) studied 605 patients undergoing surgery and concluded that patients should be screened preoperatively for pain and anxiety because these are strong predictors of a more difficult postoperative recovery. It has also been observed that patients with a high level of preoperative anxiety respond worse to analgesic medication than patients with a low level of preoperative anxiety. Therefore, actions undertaken to reduce patients' anxiety may reduce their need for analgesic medication (Greszta & Siemińska, 2008).
Hospitalization, even in patients who are not faced with the prospect of surgery, is known to cause anxiety. One may, therefore, expect some degree of anxiety in patients attending for surgery. In industrial belts this anxiety is compounded by the stress of migration from a rural background to an urban environment in search of jobs and opportunities provided by industries.
Better doctor-patient communication can improve mutual understanding of such symptoms (Zastrow et al., 2008). The Hospital Anxiety and Depression Scale (HADS) identifies patients with psychological distress who may benefit from early counseling (Awsare et al., 2008). The present study indicates that better doctor-patient communication, involving information sharing about the surgical procedure, patient satisfaction, attention to the patient's queries and trust in the physician, was associated with lower anxiety levels. Similar findings have been reported by Herrera-Espineira et al. (2008), who found that greater anxiety in patients was associated with greater dissatisfaction with the information received. When a patient comes to a doctor for treatment, he bonds with the doctor in such a way that he attributes almost divine powers to the physician. It therefore becomes the duty of the doctor to help the patient allay his fears. This is best done by developing a good rapport with the patient, listening patiently to his queries and answering them in an understandable manner. This helps the patient to trust the doctor and feel satisfied. Hence, good communication skills demonstrated by the doctor can reduce preoperative anxiety significantly.
Limitation
The small sample size was an apparent limitation of the study.

CONCLUSIONS
Preoperative anxiety is a common phenomenon in patients undergoing surgery. It can be reduced by better doctor-patient communication.

Background:
Anxiety may not be recognized by physicians even though, as reported in some studies, it affects a large number of patients awaiting surgery. Good doctor-patient communication may have an impact on preoperative anxiety.
Aim:
To find out the incidence of anxiety in patients awaiting surgery and its association with good doctor-patient communication.
Materials and Methods:
The study was undertaken in a medical college hospital situated in an industrial township, over a duration of two months. It was a cross-sectional study including 79 patients admitted to various surgical wards of a teaching hospital. Data were collected on a pretested questionnaire, which included a set of questions on various aspects of doctor-patient communication. The level of anxiety was assessed using the Hospital Anxiety and Depression Scale (HADS). Statistical analysis was carried out using the WHO/CDC package EPI INFO 2002. Though preoperative anxiety was recorded on an ordinal scale, it was later collapsed into a categorical scale for analysis. Aspects of doctor-patient communication associated with preoperative anxiety were explored with chi-square tests.
Results:
Out of the total 79 patients, 26.5% reported definite anxiety levels. Good doctor-patient communication was found to be inversely associated with anxiety levels in the preoperative period.
Conclusions:
Preoperative anxiety is a common phenomenon among indoor surgical patients. A lot can be done to alleviate this anxiety by improving doctor-patient communication.

INTRODUCTION

Anxiety is a complicating comorbid diagnosis in many patients with medical illnesses (Cukor et al., 2008). Persons who undergo surgical procedures are under strong preoperative distress (Chaudhury et al., 2006). The incidence of preoperative anxiety has been reported to vary from 11 to 80% in adults (Maranets and Kain, 1999). Preoperative anxiety and depression can also cause reactions that result in an increase in the intraoperative consumption of anesthetics and in a greater postoperative demand for analgesics (Caumo et al., 2001). Besides, preoperative anxiety and depression seem to have a profound influence on the immune system and on the development of infections. For the prevention of preoperative anxiety there is a need to identify the associated factors which can be modified; this will help postoperative recovery and patient satisfaction. In view of this, the present study was carried out, firstly, to find out the incidence of anxiety in preoperative patients and, secondly, to identify its association with good doctor-patient communication.

CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):19-21 | oa_package/38/c3/PMC3016692.tar.gz
||
PMC3016693 | 21234157 | MATERIALS AND METHODS
The study was carried out in the city of Ludhiana and its surrounding areas/satellite towns within a radius of 20 km. The sample consisted of 158 non-psychiatrist medical practitioners (100 Allopathic, 33 Homeopathic and 25 Ayurvedic). Only qualified, registered medical practitioners were included in the study. The data were collected over a period of 12 months.
All the practitioners were contacted and visited personally. The aims and objectives of the survey were explained to them in a written letter as well as personally, on meeting.
A detailed questionnaire prepared for the survey was administered and completed by each medical practitioner at his/her convenience; where required, the questionnaire was left with the practitioner to be returned by post in a pre-addressed, stamped envelope. Strict confidentiality was maintained in carrying out the survey and in the use of the information provided by each respondent.
The information collected was analyzed in the domains of the knowledge, attitude and treatment practices of non-psychiatrist medical practitioners with regard to mental health (psychiatric) problems. The observations from this study were used to comment on the availability of the existing mental health services and to identify their adequacy or deficiencies.
A total of 158 non-psychiatrist practitioners were surveyed, of whom 133 answered the questionnaire, giving a response rate of 84.17%. This is much higher than the 41% and 62% reported in work of a similar kind (Fauman, 1981 & 1983), and also higher than the 63.9% reported by Chadda and Shome (1996) [ Table 1 ].
Around 88.7% of the practitioners admitted seeing patients with mental health problems in their practice, which is consistent with a survey of general practitioners in Jaipur city (Gautam S, Gupta I.D, Kamalpreet, 1992).
On assessment of the knowledge and attitudes of the general practitioners [ Table 2 ], 70.68% felt that mental health problems are very common, and 89.47% reported that these are due to a combination of stress and social, cultural, individual, biological and organic factors, which is in accordance with the findings of Gautam S (1992), where the corresponding figure was 90%. Almost all the practitioners reported that mental illness is of serious concern. This confirms the findings of Verghese and Beig (1974), who reported that the majority of people have a positive attitude towards mental illness.
The majority of patients (82.7%) seen by the general medical practitioners had psychosomatic problems, such as sleep problems (84.2%), appetite disturbances (84.2%) and abnormal and irrational fear (84.2%), followed by mood disturbances and problems in sexual activity (72.9%) [ Table 3 ]. A considerable number of drug addicts (62.41%) and patients with problems of forgetfulness (66.17%) were also seen by the general practitioners.
The majority of the non-psychiatrist medical practitioners (79.7%) do not know any diagnostic criteria, nor have they had any exposure or training to deal with mental illness [ Table 4 ]. They treat their psychiatric patients on their own intuition. This finding is in accordance with Gautam and Kapur (1980), who reported that 71.7% of general medical practitioners were without such knowledge and training.
As far as psychiatric referrals are concerned, 38.3% of the practitioners reported that they refer the patient to a psychiatrist when required, 34.6% reported that they refer only if it is unavoidable and symptoms are not controlled, and 27% refer occasionally, at will. The majority of the practitioners sought psychiatric consultation when needed, which is in accordance with the 75% reported by Narang, R.L. and Gupta, Rajeev (1987) and the 66% reported by Chadda and Shome (1996). Approximately half of the practitioners (50.3%) reported that patients accept their advice about psychiatric referral with reluctance, reflecting social stigma. This finding is in accordance with Chadda and Shome (1996), where 45% of patients were reluctant regarding referral. Some of the practitioners (4.5%) reported that patients refused to go to a psychiatrist, and 2.25% of the practitioners reported dropout from treatment when referred. This also supports the finding of Chadda and Shome (1996), where patients refused to comply with advice about psychiatric referral according to 8% of practitioners [ Table 5 ].
Regarding feedback from the psychiatrist about referred patients, less than half (49.6%) of the practitioners received feedback. Somewhat similar results have been described in the earlier literature (Pullen, 1993) and in the survey of practitioners by Chadda and Shome (1996). Regarding the usefulness of psychiatric referral, the majority (72.2%) of the practitioners reported that referral is always helpful; somewhat similar findings were seen in the survey by Chadda and Shome (1996), where the corresponding figure was 90%.
Almost all the non-psychiatrist medical practitioners in this study agreed that the incidence of mental health problems is increasing in the general population, which is also in accordance with the findings reported by Narang, R.L. and Gupta, Rajeev (1987).
Regarding the availability of psychiatric services, the majority of the practitioners (66.9%) reported that these are not sufficient. Many epidemiological studies (Sethi et al., 1967) and Neki's (1973) findings support this notion. The majority (98.5%) of the practitioners felt that they themselves and other practitioners need to know more about psychiatric problems and the treatments available, as the training they had received during their medical education lasted only two to three weeks. These findings are in accordance with a survey of family physicians in which 46% of the respondents felt dissatisfied with their competence to treat psychiatric illness (Fisher et al., 1973), and with Narang, R.L. and Gupta, Rajeev (1987), where about 70% of practitioners reported the same [ Table 6 ].
In this study, it was found that the majority of the non-psychiatrist medical practitioners see patients with mental health (psychiatric) problems in their practice. The majority of them (79.7%) do not know any diagnostic criteria used for the diagnosis of mental health problems. They are aware of the etiology, the increasing incidence and the treatment facilities available for mental health problems. They treat the patients with medication and counseling, but the majority of them have not received any formal training in these fields. The majority of the practitioners felt that the existing mental health services are not sufficient to meet the needs of the people. We therefore conclude that there is a lack of training of general practitioners in dealing with patients having mental health (psychiatric) problems, and that there is a need for further improvement of the existing mental health services.
Mental health problems account for 12% of the global disease burden, and non-psychiatrist medical practitioners deal with a large proportion of this burden. This study was planned to assess the knowledge, attitude and treatment practices of non-psychiatrist medical practitioners regarding mental health problems.
Materials and Methods:
One hundred Allopathic, 33 Homeopathic and 25 Ayurvedic medical practitioners were interviewed and assessed using a semi-structured proforma.
Results:
The majority (95%) were aware of the etiology, the increasing incidence and the treatment facilities available for mental health problems. Treatment modalities included counseling and medication, but 69.9% had not received any formal training in administering them.
Conclusions:
Of the practitioners providing mental health services at the primary level, 98.5% feel the need to be properly trained and oriented in the management of these patients so as to improve the quality of healthcare. | As per the World Health Report of 1995, about 500 million people are believed to suffer from neurotic, stress-related and somatoform disorders. The World Health Report 2001, which is dedicated to the theme of mental health, shows that mental disorders are estimated to account for about 12% of the global burden of disease and to represent four of the ten leading causes of disability worldwide. As WHO has shifted its emphasis from prevalence rates to the concept of the DALY ("Disability Adjusted Life Year"), neuropsychiatric disorders rank very high on the list of the global burden of disease.
India, with a population of more than a billion, houses one of the highest numbers of mentally ill persons who require long-term care. With less than 10% of the required inpatient care available for patients with mental health problems, and less than one psychiatrist available per one lakh Indians, the gap between resources and requirements remains too broad (Trivedi, 2002).
Due to this wide gap, a large number of psychiatric patients do not receive adequate treatment and suffer from longstanding illness and resulting disability. A large portion of the patients who do ultimately reach the psychiatry outpatient department reach late, when the illness has already become chronic and resistant to therapy. The shortage of psychiatrists is further compounded by the striking ignorance about, and lack of adequate skills for, treating patients with mental health problems among general care physicians and members of other medical sub-specialties.
There is a wide gap between the mental health needs of the community and the available psychiatric services in India (Neki, 1973). The psychiatric morbidity among the clients of general practitioners has been reported to range between 10 and 36% (Murthy et al., 1981) and to be 27% among clients of general hospital outpatient departments (Murthy and Wig, 1977). In a study of 200 GPs in Bangalore, Shamasunder (1978) reported that 65% of general practitioners (GPs) found psychiatric morbidity of less than 10% in their practice, while 24% reported a figure of less than 20%. This reflects the degree to which GPs are aware of mental illness.
Not much work has been done in India regarding the opinions and attitudes of non-psychiatrist medical practitioners towards mental illness. Keeping this in view, the present study was planned. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):22-26 | oa_package/62/60/PMC3016693.tar.gz |
|||
PMC3016694 | 21234158 | MATERIALS AND METHODS
Sample
Two hundred and two Hindi-speaking students (78 men and 124 women) studying at Banaras Hindu University completed the Hindi translation of the EPQR-S. The age of the respondents ranged from 18 to 30 years, with a mean of 22.27 years (SD = 2.37).
Tool
Eysenck Personality Questionnaire Revised-Short Form (EPQR-S)
The EPQR-S (Eysenck, Eysenck & Barrett, 1985) is a self-report questionnaire. It has 48 items: 12 for each of the traits of neuroticism, extraversion and psychoticism, and 12 for the lie scale. Each question has a binary response, 'yes' or 'no'. Each dichotomous item was scored 1 or 0, so each scale had a maximum possible score of 12 and a minimum of zero.
Procedure
For the present study, the questionnaire was translated into Hindi by a bilingual Indian national and then back-translated into English by a second bilingual Indian national in order to test for inaccuracies and ambiguities. Where there were inconsistencies in the retranslated English version, both translators were consulted as to the best possible solution. This content-based checking provided clear support for scoring the neuroticism, extraversion and lie scale items as suggested by Eysenck et al. (1985). After the content-based analysis, the Hindi version of the EPQR-S (hereafter referred to as the EPQRS-H) was administered to all the participants (N=202) in order to examine its psychometric properties.
Statistical analysis
The internal consistency of the four subscales of the EPQRS-H was calculated using Cronbach's alpha method (Cronbach, 1951).
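For reference, Cronbach's alpha for a k-item scale is alpha = [k/(k-1)] x (1 - sum of the item variances / variance of the total score). The following is a minimal illustrative sketch of how alpha and the related item statistics reported below can be computed; it is not the authors' code, and the 0/1 response matrix is simulated rather than drawn from the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of 0/1 scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total of the remaining items."""
    totals = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

def alpha_if_item_deleted(items: np.ndarray) -> np.ndarray:
    """Alpha recomputed with each item removed in turn."""
    return np.array([cronbach_alpha(np.delete(items, j, axis=1))
                     for j in range(items.shape[1])])

# Simulated stand-in for one 12-item subscale answered by 202 respondents;
# real responses would show the inter-item correlation that random data lacks.
rng = np.random.default_rng(0)
subscale = rng.integers(0, 2, size=(202, 12))
print(cronbach_alpha(subscale))
```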
Table 1 presents the corrected item-total correlations for each of the four subscales of the EPQRS-H (i.e., extraversion, neuroticism, psychoticism and the lie scale). The reliabilities of the extraversion, neuroticism, psychoticism and lie subscales were found to be 0.766, 0.772, 0.238 and 0.624, respectively. The results indicated that none of the items was psychometrically poor. The corrected item-total correlation ranged from 0.201 to 0.538 for extraversion, from 0.196 to 0.556 for neuroticism, from 0.109 to 0.449 for the lie scale, and from -0.020 to 0.284 for the psychoticism subscale of the EPQRS-H. Moreover, none of the 'alpha-if-item-deleted' values exceeded the overall alpha of the respective subscale, except for two items of the psychoticism subscale and one item of the lie subscale, which, when deleted, exceeded the overall alpha of that subscale. When these items were thoroughly checked by the experts, they were judged congruent with the remaining items of their factor, and it was therefore decided not to drop them from the scale. The psychometric analyses further show that the neuroticism, extraversion and lie subscales perform well in this sample, but not the psychoticism subscale.
The present study aimed to evaluate the internal consistency of the Hindi translation of the EPQR-S. Both the extraversion and the neuroticism subscales of the Hindi translation achieved satisfactory alpha coefficients, well in excess of the 0.7 level recommended by Kline (1993). The lie scale, with an alpha coefficient of 0.624, also came very close to Kline's criterion of 0.7. The psychoticism scale, however, performed poorly, with an alpha coefficient of only 0.238.
Evaluated overall, it can be proposed that, given its satisfactory internal consistency scores, the EPQRS-H is a reliable scale for the measurement of various personality traits. With regard to the low internal consistency coefficients for the psychoticism subscale, studies conducted in other countries have found similar results (Francis et al., 1992, 2006; Ivkovic et al., 2007; Katz and Francis, 2000; Lewis et al., 2002). It was therefore concluded that the low alpha score of the psychoticism scale was not related to the Hindi translation, but rather that this subscale is problematic in itself. At the same time, these data emphasize the need for further research and development to produce a more reliable short index of psychoticism.
The fact that the sample comprised college students from an Indian university supports the psychometric properties (i.e., reliability) of the EPQRS-H for this population. It would be beneficial to repeat the study with a heterogeneous sample and to examine the scale's discriminative value with a clinical population. An attempt should also be made in future to investigate the temporal consistency and factorial validity of the EPQRS-H with a larger sample.
There is a growing consensus about the validity of human personality traits as important dispositions toward feelings and behaviors (Matthews, Deary, & Whiteman, 2003).
Materials and Methods:
Here we examine the reliability of the Hindi translation of the Eysenck Personality Questionnaire-Revised Short Form (EPQR-S; Eysenck, Eysenck, & Barrett, 1985), which consists of 48 items that assess neuroticism, extraversion, psychoticism, and lying. The questionnaire was first translated into Hindi and then back translated. Subsequently, it was administered to 202 students (78 men and 124 women) from Banaras Hindu University. The internal consistency of the scale was evaluated.
Results:
The findings provide satisfactory psychometric properties of the extraversion, neuroticism and lie scales. The psychoticism scale, however, was found to be less satisfactory.
Conclusion:
Given its satisfactory internal consistency scores, the EPQRS-H can be proposed as a reliable scale for the measurement of various personality traits. | In its preliminary version, the Eysenck personality theory involved neuroticism-stability and extraversion-introversion dimensions; subsequently, the psychoticism dimension was added to the theory (Lewis et al., 2002). The extraversion dimension represents sociability and impulsivity; individuals scoring high on this dimension were defined as enjoying social interactions, energetic, and preferring social situations to solitude. The neuroticism dimension was proposed to indicate emotional instability and reactiveness; individuals who score high on this dimension tend to be anxious, depressive, overly emotional, shy, and to have low self-esteem. The psychoticism dimension highlights more bizarre personality characteristics, such as being distant, cold, insensitive, absurd, and unable to empathize with others (Eysenck & Eysenck, 1975).
Since the development of Eysenck personality theory, various measures were developed in order to assess the personality traits. One of the consequences of this process has been a progressive increase in their length. The early Maudsley Medical Questionnaire (MMQ) contains 40 items (Eysenck, 1952), the Maudsley Personality Inventory (MPI) contains 48 items (Eysenck, 1959), the Eysenck Personality Inventory (EPI) contains 57 items (Eysenck & Eysenck, 1964a), the Eysenck Personality Questionnaire (EPQ) contains 90 items (Eysenck & Eysenck, 1975) and the Revised Eysenck Personality Questionnaire (EPQR) contains 100 items (Eysenck, Eysenck, & Barrett, 1985). This increase in length can be accounted for by the introduction of an additional dimension of personality within Eysenck's scheme (Eysenck & Eysenck, 1976) and by the psychometric principle that greater length enhances reliability (Lord & Novick, 1968). Neuroticism and extraversion, especially, appear in most trait models of personality (Matthews et al., 2003). An important part of the validation of any trait-based model of personality and its associated measurement instrument is to investigate its applicability to other cultures. This tends to be done in two ways: emic and etic. Emic research typically uses the lexicon of the local culture to investigate the structure and content of the personality-related terms (Saucier & Goldberg, 2001). Etic research applies personality measures devised in one culture to new cultures and asks whether they show the same psychometric structure, reliability and validity (McCrae, 2001). A large amount of etic research has been completed on the Eysenck Personality Questionnaire, mostly on the original 90-item EPQ. Generally, its psychometric structure has been well reproduced in at least 34 countries (Barrett & Eysenck, 1984; Barrett, Petrides, Eysenck, & Eysenck, 1998).
Although all these questionnaires were reliable and valid, there are some practical disadvantages in using long tests; in particular, their length causes certain problems in clinical application. The need for shorter personality scales therefore resulted in shorter versions of the above instruments. One of these shorter personality scales is the Eysenck Personality Questionnaire Revised - Short Form (EPQR-S; Eysenck et al., 1985). The EPQR-S includes 48 items and 4 subscales: Extraversion (12 items), Neuroticism (12 items), Psychoticism (12 items), and Lie (12 items). The lie subscale is a control scale with which the whole scale is tested for social desirability bias. Eysenck et al. (1985) reported reliabilities for males and females, respectively, of 0.84 and 0.80 for neuroticism, 0.88 and 0.84 for extraversion, 0.62 and 0.61 for psychoticism, and 0.77 and 0.73 for the lie scale. The EPQR-S has now been used quite widely (Aleixo & Norris, 2000; Blagrove & Akehurst, 2001; Chan & Joseph, 2000; Chivers & Blagrove, 1999; Creed, Muller, & Machin, 2001; Francis, 1999; Francis & Wilcox, 1998; Glicksohn & Bozna, 2000; Glicksohn & Golan, 2001; Halamandaris & Power, 1999; Linton & Wiener, 2001; Martin & Kirkaldy, 1998; Robbins, Francis & Rutledge, 1997).
In a cross-cultural study, Francis, Brown, and Philipchalk (1992) compared the psychometric properties of the EPQR-S in four English-speaking countries among a total of 685 undergraduate students, including 59 men and 153 women in England, 57 men and 92 women in Canada, 51 men and 81 women in the USA and 53 men and 139 women in Australia. According to this study the short form extraversion scale achieved alpha coefficients of 0.78, 0.83, 0.85 and 0.87 in the four samples. The short form neuroticism scale achieved alpha coefficients of 0.79, 0.80, 0.81 and 0.83 in the four samples. The lie scale performed less well than the extraversion and neuroticism scales, but proved to be adequate. The short form lie scale achieved alpha coefficients of 0.65, 0.66, 0.70 and 0.71. However, for the psychoticism scale, alpha coefficients were very low (0.33-0.52).
While the EPI, EPQ and EPQR were originally developed in England and then extended to other English-speaking areas, the cross-cultural extension of this field of personality research quickly led to the translation and testing of the instruments in non-English-speaking environments (Barrett & Eysenck, 1984; Eysenck & Eysenck, 1983). For example, Francis and associates (Francis, Lewis, & Ziebertz, 2006) developed the German edition of the EPQR-S. Similarly, Ivkovic et al. (2007) developed the Croatian edition of the EPQR-S and checked its psychometric properties.
Against this background, the aim of the present study was to examine the psychometric properties of the Hindi translation of the EPQR-S for the Indian Hindi-speaking, college-going adult population. | The authors are thankful to Mr. Gaurav Kumar Rai, Mrs. Upagya Rai and Ms. Richa Singh for their help in questionnaire administration. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):27-31 | oa_package/bd/10/PMC3016694.tar.gz
||
PMC3016695 | 21234155 | CONCLUSION
Delusions are a key clinical manifestation of psychosis and have particular significance for the diagnosis of schizophrenia. Although common in several psychiatric conditions, they also occur in a diverse range of other disorders (including brain injury, intoxication and somatic illness). Delusions are significant precisely because they make sense for the believer and are held to be evidentially true, often making them resistant to change. Although an important element of psychiatric diagnosis, delusions have yet to be adequately defined. The last decade has witnessed a particular intensification of research on delusions, with cognitive neuroscience-based approaches providing increasingly useful and testable frameworks from which to construct a better understanding of how cognitive and neural systems are involved. There is now considerable evidence for reasoning, attention, metacognition and attribution biases in delusional patients. Recently, these findings have been incorporated into a number of cognitive models that aim to explain delusion formation, maintenance and content. Although delusions are commonly conceptualized as beliefs, not all models make reference to models of normal belief formation. It has been argued that aberrant prediction error signals may be important not only for delusion formation but also for delusion maintenance, since they drive the retrieval and reconsolidation-based strengthening of delusional beliefs even in situations when extinction learning ought to dominate. Given the proposed function of reconsolidation in driving automaticity of behavior, it is argued that, in an aberrant prediction error system, delusional beliefs rapidly become inflexible habits. Taking this translational approach will enhance our understanding of psychotic symptoms and may move us closer to the consilience between the biology and phenomenology of delusions. | Delusion has always been a central topic for psychiatric research with regard to etiology, pathogenesis, diagnosis, treatment, and forensic relevance. The various theories and explanations for delusion formation are reviewed. The etiology, classification and management of delusions are briefly discussed, and recent advances in the field are reviewed. | A delusion is a belief that is clearly false and that indicates an abnormality in the affected person's content of thought. The false belief is not accounted for by the person's cultural or religious background or his or her level of intelligence. The key feature of a delusion is the degree to which the person is convinced that the belief is true. A person with a delusion will hold firmly to the belief regardless of evidence to the contrary. Delusions can be difficult to distinguish from overvalued ideas, which are unreasonable ideas that a person holds but about which the person has at least some level of doubt; a person with a delusion is absolutely convinced that the delusion is real. Delusions are a symptom of a medical, neurological, or mental disorder.
Delusions may be present in any of the following mental disorders: (1) Psychotic disorders, or disorders in which the affected person has a diminished or distorted sense of reality and cannot distinguish the real from the unreal, including schizophrenia, schizoaffective disorder, delusional disorder, schizophreniform disorder, shared psychotic disorder, brief psychotic disorder, and substance-induced psychotic disorder, (2) Bipolar disorder, (3) Major depressive disorder with psychotic features (4) Delirium, and (5) Dementia.
HISTORY
The English word “ delude ” comes from Latin and implies playing or mocking, defrauding or cheating. The German equivalent Wahn is a whim, false opinion or fancy and makes no more comment than the English upon the subjective experience. The French equivalent, delire is more empathic; it implies the ploughshare jumping out of the furrow (lira), perhaps a similar metaphor to the ironical ‘unhinged’. Since time immemorial, delusion has been taken as the basic characteristic of madness. To be mad was to be deluded. What is delusion is indeed one of the basic questions of psychopathology. It would be a superficial and wrong answer to this question just to call a delusion a false belief which is held with incorrigible certainty. We may not hope to resolve this issue quickly with a definition. Delusion is a basic phenomenon. It is the primary task to get this into view. The subjective dimension within which delusion exists is to experience and think our reality (Jaspers, 1973). Whether we like it or not, this is the unavoidable field of tension in which research on delusions is situated: A tight, objectivity-oriented conceptualization on the one hand and the basic anthropological dimensions of subjectivity and interpersonality (i.e. human interdependence or “universal fraternity”) on the other hand. Even if one is skeptical about these “basic” aspects, Jaspers’ central idea should be kept in mind: Delusion is never a mere object which can be objectively detected and described, because it evolves and exists within subjective and interpersonal dimensions only, however “pathological” these dimensions may be. This reminds us of a central topic of psychiatric research: There are two fundamentally different approaches to research on complex mental phenomena, be they normal or pathological.
The first approach—the “naturalistic” one—regards the complexity and heterogeneity of scientific means to study delusion as a temporary phenomenon, as the second-best solution. This solution, according to the naturalistic perspective, will only be used until a strictly empirical neuroscientific approach has progressed far enough to replace mentalistic vocabulary with a neurobiological one. In this view mental phenomena are identical with their neurobiological “basis”. In other words, mental events are not regarded as a distinct class of phenomena, either gradually or principally. “Eliminative materialism” is the most radical position in this context, which declares terms such as intention, willful action, individual values, personality or autonomy to be part of “folk psychology”. According to this approach, these terms may well be useful socially and on an every-day-life basis, but not scientifically, and they will be replaced, “eliminated”, by the language of neurobiology in the not too distant future. The second approach—the “phenomenological point of view” in Jaspers’ terms—departs from a person’s subjective experiences as the core issue of scientific studies on psychopathology. This does not, of course, exclude neurobiological research strategies at all, but it does insist on the scientific significance of the subjective dimension. Research into delusions is one of the most interesting examples of the importance of this methodological dichotomy. We will briefly review some of the major concepts of delusional thinking as they appeared from the 19 th century until today.
DESCRIPTIVE PHENOMENOLOGICAL APPROACH
This approach to understanding delusions is a very influential one for psychiatrists. Jaspers’ book General Psychopathology marked a major step forwards in establishing psychopathology as a scientific discipline. Experiencing mental states by the patient and the understanding of this experience by the physician defined the central framework. However, in contrast to biological phenomena, mental events in Jaspers’ view can never be accessed directly, but only via the expressions of the person who experiences them. Phenomenology is the study of subjective experience. It is one’s empathic access or understanding of the patient’s experience. One enters into the other person’s experiences using the analogy of one’s own experience. Jaspers distinguishes between static understanding which “grasps particular psychic qualities and states as individually experienced” and genetic understanding which “grasps the emergence of one psychic event from another”. Phenomenology is static understanding. Phenomenology is one’s recreation of the patient’s experiences through “transferring into”, “empathizing with” or literally “feeling into” or “living with” the patient’s experiences. In this way one arrives at an “actualization, representation or bringing to mind” of the patient’s experience. “Phenomenology actualizes or represents the patient’s inner subjective experiences. We can only make a representation of them through an act of empathy or understanding” (Jaspers, 1963).
THE CONCEPT OF FORM AND CONTENT
Following the theory of knowledge of the philosopher Immanuel Kant, Jaspers accepts that all experience or knowledge entails both an incoming sensation and an organizing concept. The former is matter or content, the latter is form. The empiricists (Locke, Berkeley & Hume) emphasized incoming sensation exclusively; the rationalists (Descartes and Leibniz) emphasized the organizing concept exclusively. Kant took a carefully considered middle course. All experience and knowledge entails the two stems of conceptual form and intuitive content. This will be crucial for Jaspers' concept of delusion. In Kant's words from his Critique of Pure Reason: "That in the appearance which corresponds to sensation I term its matter (or content) but that which so determines the manifold of appearance that it allows of being ordered in certain relations, I term the form of the appearance". This is the philosophical origin of the concept of form and content within Jaspers' psychopathology. The differing forms of psychopathological experience are the topic for phenomenology. In his early paper, The Phenomenological Approach to Psychopathology, Jaspers spells it out that "phenomenological definitions" relate to the "different forms of experience": "From its beginnings, psychiatry has had to concern itself with delimiting and naming these different forms of experience; there could, of course, have been no advance at all without such phenomenological definitions". This is the crucial point that Jaspers is at pains to make - that phenomenology is primarily concerned with form and that content is largely irrelevant: "Phenomenology only makes known to us the different forms in which all our experiences, all psychic reality, takes place; it does not teach us anything about the contents". Then, in General Psychopathology, Jaspers becomes more explicit about the concept of form. Form is the mode or the manner in which we experience content: "Perceptions, ideas, judgments, feelings, drives, self-awareness, are all forms of psychic phenomena; they denote the particular mode of existence in which content is presented to us". The same content can be presented in different forms. The two Kantian stems are thought of by Jaspers as subject and object. The subjective stem is the conceptual form imposed by the mind and the objective stem is the incoming content of intuition or sensation. As a content presenting in different forms, Jaspers gives the example of hypochondriasis: "In all psychic life there is subject and object. This objective element conceived in its widest sense we call psychic content and the mode (Art) in which the subject is presented with the object (be it a perception, a mental image or thought) we call the form. Thus, hypochondriacal contents, whether provided by voices, compulsive ideas, overvalued ideas or delusional ideas, remain identifiable as content" (Jaspers, 1963).
THE FORMS OF BELIEF
Jaspers distinguishes four forms of beliefs, i.e. four distinct modes or ways in which beliefs can be presented to consciousness. These are normal belief, overvalued idea, delusion-like idea and primary delusion. In the English literature, the delusion-like idea is usually known as the secondary delusion but Jaspers himself does not use this term. The English literature tends either to split these four forms into two pairs on the basis that normal belief and overvalued idea both occur in ‘normal’ psychic life while delusion-like idea and primary delusion always reflect an ‘abnormal’ mental state, or to split off the primary delusion on the grounds that the other three are understandable while the primary delusion is not. Both Cutting and Sims refer to the first distinction while only Sims notes the second (Cutting, 1985; Sims, 1988). This first distinction emphasizes whether the belief is delusional in nature or merely overvalued. Sims (1988), in Symptoms in the Mind , appeals to Jaspers and gives the following criteria for delusion: (a) They are held with unusual conviction. (b) They are not amenable to logic. (c) The absurdity or erroneousness of their content is manifest to other people. Cutting (1985), in his The Psychology of Schizophrenia , gives an almost identical definition, again with an appeal to Jaspers. These three features (extraordinary conviction and certainty, imperviousness or incorrigibility and impossible content) are the ones usually taken to distinguish delusion from other beliefs. Sims and Cutting are correct that Jaspers does say exactly that of delusions: (a) They are held with an extraordinary conviction, with an incomparable subjective certainty. (b)There is imperviousness to other experiences and to compelling counter-argument. (c) Their content is impossible. What Sims and Cutting miss is that Jaspers says that these are merely the ‘external characteristics’ of delusion. They are characteristic of delusion but they fail to account for the essential differences between delusion and other forms of belief. In fact, Jaspers dismisses these criteria in the first paragraph of his account: “To say simply that a delusion is a mistaken idea which is firmly held by the patient and which cannot be corrected gives only a superficial and incorrect answer to the problem” (Jaspers, 1963).
It is easy to demonstrate the inadequacy of these criteria. Imagine two politicians with opposing beliefs. Both hold views with an ‘extraordinary conviction’ and ‘an incomparable subjective certainty’. Both show a very definite ‘imperviousness to other experiences and to compelling counter-argument’. For each, the judgments of the other are ‘false’, and ‘the content impossible’. Obviously, neither is deluded. Both are expounding views which are highly valued, or perhaps overvalued, but which fulfill the above ‘external characteristics’ of delusional belief. Sims’ and Cutting’s criteria must be deemed inadequate to distinguish delusions from other firmly held beliefs and the expression “held with delusion-like intensity” as an essential criterion for delusion is therefore nonsense. Plenty of other beliefs besides delusions are “held with delusion-like intensity”. Even the truth or falsehood of the content of a belief is inadequate to distinguish a delusion. Jaspers is quick to point out that the content of some delusions is true, e.g. in pathological jealousy, where the wife is having an affair but the patient is right for the wrong reasons and is therefore still deluded (Jaspers, 1963). With some awareness of the above problems, Sims adds the second distinction based on understanding. A delusion, unlike an overvalued idea, ‘is not understandable’ in terms of the patient’s cultural and educational background although the secondary delusion (or delusion-like idea) is understandable with the addition of some other psychopathological event such as hallucination or abnormal mood . The standard preoccupation remains whether any belief is delusional or merely overvalued.
DELUSION-LIKE OR OVERVALUED IDEAS
This standard view has difficulties with a wide variety of strange beliefs, some of which are examined below. Are they delusional or merely overvalued? On the above three ‘external characteristics’ plus understandability, they can certainly look very much like delusion-like ideas but the logical implication that this means a diagnosis of ‘psychosis’ is unacceptable and requires some very deft intellectual footwork to avoid. Are not the disordered beliefs of body image in anorexia derived from (understandable in terms of) the fear of gaining weight and the preoccupation with food and, if so, why do we not consider them to be delusion-like rather than overvalued? Why do we not consider the catastrophe beliefs in severe obsessional states to be delusion-like? Catastrophe beliefs are closely linked to the underlying compulsions and such patients do often believe that a failure to carry out the ritual will result in some dreadful catastrophe. Beck and his associates have described a range of abnormal cognitions consequent on depressive and anxiety states (Beck et al , 1979). We would regard many of these depressed patients as neurotic rather than psychotic depressives but nevertheless they have very compelling automatic thoughts and negative schemata of failure, hopelessness, and helplessness, clearly linked to (understandable in terms of) their mood disorders. Can the beliefs of miscast gender in transsexualism be passed off glibly as overvalued idea when personality trait makes the belief understandable?
A further example can be found in pathological gambling. Although the experienced gambler knows that the casino game is rigged against him and that in the long term the house must win, he continues to believe in his own luck. Wagenaar (1988) found a web of illogical cognitions in compulsive gamblers. Many were magical in quality and some actually made it more likely that the gambler would lose. At roulette, a game of chance entirely, players had a strong tendency to leave their chips on a winning number on the grounds that it was lucky. When they lost they tended to put their chips on numbers that have not yet won. Backing a number adjacent to, or arithmetically related to, the winner meant their luck was returning. Wagenaar points out that few numbers are not related either by proximity or by arithmetic so that the gambler can always ‘delude’ himself that his luck is rising and that a win is imminent. Some gamblers had an elaborate system, provoking the old telegram “System perfected, send more money”. Many gamblers believed that chance and luck were not just abstract ideas but were causal forces which were open to manipulation. Roulette is the closest thing to random numbers outside a computer but many of Wagenaar’s gamblers had developed beliefs which were magical in nature, defied the laws of mathematics and, in some cases, were actually helping them to lose. These magical beliefs are derived from (are understandable in terms of) the compulsion, the arousal and the excitement of pathological gambling.
Walker (1991) has proposed that: (1) A number of ideas both inside and outside psychopathology have at least a prima facie case to be considered delusion-like (they fulfill the three 'external characteristics' and they are understandable in terms of unusual, if not psychopathological, experiences). (2) An intellectual sleight of hand is often in operation in the distinction of overvalued and delusion-like: if we intend to make a 'psychotic' diagnosis, then the belief is delusion-like; if we intend to make a 'non-psychotic' diagnosis, then the belief is overvalued. The phenomenology is molded to fit. (3) It can be suggested, with Jaspers, that, as both an overvalued idea and a delusion-like idea are understandable (the overvalued idea in terms of the personality and life experiences, and the delusion-like idea in terms of the same plus some other psychopathological event), there is little to be gained by their distinction. Jaspers solves this problem neatly by shifting the whole emphasis. For him, the important distinction is not between overvalued idea and delusion-like idea but rather between delusion-like idea and primary delusion. Jaspers' terminology is important for his account: only the primary delusion is a 'delusional idea proper' for him, and the delusion-like idea is, as its name suggests, not a true delusion but merely delusion-like. Jaspers, therefore, makes no real distinction between the overvalued idea and the delusion-like idea. There are several occasions on which he simply equates the two. For example, in Jaspers (1963): "Exhaustion may help to develop a long-prepared delusion of reference (an overvalued idea)". "Melancholia. In this state the overvalued or compulsive depressive ideas become delusion-like". "Mood-states, wishes and drives give rise to delusion-like ideas (overvalued ideas) which arise in more or less understandable fashion from them".
THE ESSENTIAL CHARACTERISTICS OF DELUSION
Jaspers’ solution to the problem of delusion is as follows: “If we want to get behind these mere external characteristics of delusion into the psychological nature of delusion, we must distinguish the original experience from the judgment based on it, i.e. the delusional contents as presented data from the fixed judgment which is then merely reproduced, disputed, dissimulated as occasion demands”. The essential criterion distinguishing the different forms of belief lies not in their conviction and certainty, not in their incorrigibility and not in their impossible content but in their origins within the patient’s experience. Jaspers goes on: “We can then distinguish two large groups of delusion according to their origin: one group emerges understandably from preceding affects, from shattering, mortifying, guilt-provoking or other experiences, from false-perception or from the experience of derealisation in states of altered consciousness, etc. The other group is for us psychologically irreducible; phenomenologically it is something final. We give the term delusion-like to the first group; the latter we term delusions proper ”. Thus, the essential distinguishing factor within the four forms of belief is the concept of understanding. One can understand the evolution or development of the normal belief and the overvalued idea from the personality and its life events. One can understand the delusion-like idea from personality, life events and from some other psychopathological experience but the primary delusion is something new, irreducible and non-understandable. The primary delusion is of paramount importance for Jaspers. Including the above distinction of a lack of understandability, the primary delusion differs in three ways from the other three forms of belief: (a) The primary delusion is unmediated by thought. (b) The primary delusion is ununderstandable. (c) The primary delusion implies a change in ‘the totality of understandable connections’ which is personality.
PRIMARY DELUSION AS AN UNMITTELBAR PHENOMENON
Cutting across the whole of phenomenology is the distinction between ‘direct’ or ‘immediate’ ( unmittelbar ; literally ‘unmediated’) experiences and experiences which are the result of reflection or thought and which are ‘indirect’ ( gedanklich vermitteltes : Literally ‘mediated by thought’). This distinction of phenomena which are ‘unmediated’ and those which are ‘mediated by thought’ ‘overlaps’ all other divisions. Jaspers does try to clarify what he has in mind by this distinction. Immediate, direct or unmediated experiences he describes as experiences which are ‘elementary’ and ‘irreducible’. In contrast, experiences which are mediated by thought he describes as “developed, evolved, based on thinking and working through”; that is they are the product of reflection. The distinction is crucial: We have to distinguish between immediate certainty of reality and reality judgment. Reality-judgment is the result of a thoughtful digestion of direct experiences” (Jaspers, 1963). The primary delusion is a direct, unmediated phenomenon; the delusion-like idea is reflective or mediated by thought: “The primary delusional experience is the direct, unmediated intrusive knowledge of meaning. not considered interpretations but meaning directly experienced”. On the other hand: “Delusion-like ideas. emerge understandably from other psychic events and can be traced back psychologically to certain affects, drives, desires and fears”. Jaspers gives some further examples of the distinction [ Table 1 ]. The primary delusion is a direct, immediate or unmediated phenomenon while the other three forms of belief are all mediated by thought. That is, normal beliefs, overvalued ideas and delusion-like ideas are all reflective, considered interpretations. In fact, the primary delusion is essentially not a belief or judgment at all but rather an experience. Jaspers writes exactly that “Phenomenologically it is an experience”. The German is primare Wahnerlebnis - primary delusional experience.
Primary delusion is the experience of delusional meaning. The experience of meaning ( Bedeutung ) is implicit in all perception and it is the distortion of this implicit meaning which is the primary delusional experience. Jaspers begins with examples from mundane perceptions: “All thinking is thinking about meanings. Perceptions are never mechanical responses to sense stimuli; there is always at the same time a perception of meaning. A house is there for people to inhabit. If I see a knife, I see directly, immediately a tool for cutting. We may not be explicitly conscious of the meanings we make when we perceive but nevertheless they are always present.” Jaspers goes on: “Now, the primary delusional experience is analogous to this seeing of meanings. The awareness of meaning undergoes a radical transformation. The direct or immediate, intrusive knowledge of meaning is the primary delusional experience. These are not considered interpretations but direct experiences of meaning while perception itself remains normal and unchanged. All primary delusional experience is an experience of meanings”.
The meaning ‘for people to inhabit’ is implicit in the perception of a ‘house’. The meaning ‘for cutting’ is implicit in the perception of a ‘tool’. In exactly the same way, the delusional meaning is implicit in the primary delusional experience. Examples of primary delusions will help clarify Jaspers’ meaning: “Suddenly things seem to mean something quite different. The patient sees people in uniform in the street; they are Spanish soldiers. There are other uniforms; they are Turkish soldiers. Then a man in a brown jacket is seen a few steps away. He is the dead Archduke who has resurrected. Two people in raincoats are Schiller and Goethe. [and from another patient]: In the morning I ran away; as I went across the square the clock was suddenly upside down; it had stopped upside down. I thought it was working on the other side; just then I thought the world was going to end; on the last day everything stops; then I saw a lot of soldiers on the street; when I came close, one moved away; ah, I thought, they are going to make a report; they know when you are a “wanted” person; they kept looking at me; I really thought the world was turning round me. In the afternoon the sun did not seem to be shining when my thoughts were bad but came back when they were good. Then I thought cars were going the wrong way; when a car passed me I did not hear it. I thought there must be rubber underneath; large Lorries did not rattle along anymore; as soon as a car approached, I seemed to send out something that brought it to a halt. I referred everything to myself as if it were made for me people did not look at me, as though they wanted to say I was altogether too awful to look at”. For Jaspers, these two patients, and especially the second, are facing a shower of new primary delusional meanings.
Kurt Schneider's impetus from about 1925 onwards was to reformulate clinical psychopathology on a descriptive basis, avoiding interpretation and speculation wherever possible. This remained in accordance with Jaspers' ideas of psychopathology; however, Schneider considered it important not to return to the elementary concept of association psychology, but to keep the clinical and biographical context in mind (Schneider, 1980). Schneider mainly dealt with delusions through their formal structure. He, too, was in search of a criterion that could differentiate reliably between "delusion proper" and "delusion-like phenomena", and such a criterion, in his view, was "delusional perception" (Wahnwahrnehmung), defined as a two-step process: the sensory input is correct, whereas its interpretation is delusional. The patient, for example, sees a dark cloud in the sky, which, for him, is proof beyond doubt that he will die the next day. This, in Kurt Schneider's view, is delusional in the narrow sense. Unless an organic lesion of the central nervous system can be identified, he regarded such an experience as a "first rank symptom" of schizophrenia.
STRUCTURAL DYNAMIC APPROACH
The German psychiatrist and psychopathologist Werner Janzarik developed his theory of structural dynamics beginning in the 1950s. It is an interesting and underestimated approach to the understanding of psychotic disorders, beyond mere operationalism and beyond psychoanalytical interpretation. In mental life, healthy or disordered, Janzarik distinguished structural components that are rather firm and longstanding, such as basic ideas and values, from their dynamic qualities, which mainly concern the affective field. In healthy persons, the dynamic aspect is linked to certain structural components, which may have genetic or psychological origins or may simply result from a learning process. In psychosis, including many delusional states, however, these dynamic forces, not being sufficiently integrated into the structural components, show a "derailment", clinically presenting as restriction (the depressive pole), expansion (the manic pole), or instability (Unstetigkeit; the acute psychotic pole). In the latter case there often appears what is called an increasingly "impressive" way of experiencing. This means that, from the patient's perspective, many, if not all, perceptions, even those of minor or no importance for that person, gain high and embarrassing personal significance, albeit in an odd, vague ("impressive") manner. Klaus Conrad (1958) gave a masterful description of this psychopathological phenomenon in his book Beginning Schizophrenia. He argued that sensory input will be subjectively altered and will become symbolical, frightening, or even threatening. The psychotic person will often have the impression of ideas or experiences being forced upon him or her by an external power. This is clinically described as a delusional syndrome.
ANTHROPOLOGICAL AND “DASEINSANALYTICAL” APPROACH
Binswanger says that one must deal with human existence as a whole in order to understand its particular abnormalities. Delusion for Binswanger is a pathological type of world design. World design is a term which reflects the organization of all the conscious and unconscious attitudes of a human being towards all that is sensible. Minkowski attempts to characterize mental disorder as some single fundamental disturbance (trouble generator) and he thinks that all such disturbances are spatiotemporal in nature; by this he means that the patient with a delusion of persecution is no longer able to perceive the chance nature of all that happens around him owing to a feeling of restriction of freedom and movement (the spatiotemporal disturbance), and so refers it all to himself; thus in the delusion of persecution what the patient wants is not a feeling of benevolence towards him but a feeling of ease and freedom. Rumke maintains that delusion is a product of an ill, not a normal person. He offers as proof that after their recovery patients claim they did not mean exactly what they said. He also believes that delusion is a secondary and less important phenomenon, and that what is of real interest to the psychiatrist is the inner attitude of the patient, his world design and his way of thinking, even though, as he states, phenomenology of this kind will never teach us to explain the illness, it only puts us in a position to understand it. Kronfeld’s view can best be summarized as follows: A delusion is the result of the failure of the “objectifying act” because of the strength of the “intentional act.” By “objectifying act” is meant the exercise of man’s ability to be aware of his own intention and action, and by “intentional act” is meant the exercise of man’s ability to wish, desire and imagine some particular action. The strength of this intentional act may become so great that the ego fails to objectify it, i.e ., to identify it correctly as a wish, and thus a delusion is established. Put simply, Kronfeld says that the delusional patient cannot distinguish between phantasy and reality; this has some conceptual similarity to the notion of projection: One does not recognize one’s own ideas as one’s own and attributes them to the external and objective environment. The anthropological approach and that of Daseinsanalyse considers the problem of delusions with regard to their specific relevance for the whole life of the deluded person. The central idea here is that within an existential crisis of the deluded person a delusion can serve as a kind of coping or problem-solving—albeit a “pathological” one from the perspective of others. Of course, this way of resolving the crisis itself creates more problems, and is even harmful, especially to communication with others. This is nevertheless a lesser evil for the sufferer, because it can allow a new stability of mental state, even though pathological. Here, a delusion (and psychosis in general) is understood as a very specific human way of “being in the world”, the roots of which lie in a basic disturbance of interpersonal communication.
BIOGRAPHICAL APPROACH
The period of “romantic psychiatry”, which had a significant influence on the development of European psychiatry at least in the first decades of the 19 th century, focused on complex biographical and emotional aspects of human life more than on the rationalistic perspective, which, in turn, had been the central point of reference during the period of enlightenment in the 18 th century (Steinberg, 2004). This framework of romanticism was nearly swept away around 1850 by a naturalistic attitude, which was allied to the natural sciences and biologically oriented general medicine and psychiatry, which became more and more successful. Rather than going into detail on this specific issue, I want to address the re-discovery of the biographical approach to delusions in the early 20 th century. Early in the 20 th century two influential psychiatrists, Robert Gaupp and Ernst Kretschmer, focused on the correlation between biography and personality traits of people later diagnosed as deluded. Kretschmer coined the term “sensitive delusion of reference” (sensitiver Beziehungswahn). The main hypothesis was that vulnerable and anancastic personality traits in combination with real and repeated insults will first lead to a dysphoric and suspicious attitude, and then, if no solution is found, to delusion-like ideas and, finally, to a delusion proper. In contrast to the ideas of early psychoanalysis, this approach did not claim to explain the genesis of a delusion in the sense of causality, but to identify typical patterns of situations and conditions that lead to delusional states. This explicitly included biological factors, at that time often called “constitutional”. Kretschmer spoke of the need for a “multidimensional psychiatry”—a very modern concept indeed. The case that represents this approach most prominently is that of Ernst Wagner (1874-1938). He was a teacher, living with his family (his wife and four children) in Degerloch next to Stuttgart in southern Germany. In the night from 3 to 4 September 1913, he killed all five members of his family while they were sleeping and later shot or wounded at least 20 other persons and set fire to several houses. He was examined for forensic purposes by Robert Gaupp, who found him not responsible for his deeds because of the chronic development of a delusional disorder, with the background of having both sensitive personality traits and distressing life events. Wagner was not sent to jail, but remained in several psychiatric hospitals for decades, where he began to write dramas and novels.
PSYCHOANALYTICAL APPROACH
For Freud and many of his early pupils, delusions—like the majority of psychopathological symptoms—were the result of a conflict between psychological agencies, the id, ego, and super-ego. Delusion, briefly stated, is seen as a personal unconscious inner state or conflict which is turned outwards and attributed to the external world. He considered that latent homosexual tendencies especially formed the basis of paranoid delusions. Later, psychoanalytical authors gave up this very narrow hypothesis and suggested that delusions might be a compensation for any—i.e. not necessarily sexuality-related—kind of mental weakness, e.g. lack of self-confidence, chronic anxiety or identity disturbances. This concept in a way resembles Alfred Adler’s theory of individual psychology, in which the consequences of personal failures or shortcomings play a major role in the etiology and pathogenesis of (neurotic) mental disorders (Adler, 1997). The best known example for the application of the above mentioned psychoanalytical arguments in the debate on delusion is Freud’s paper on the Schreber case.
NEUROBIOLOGICAL APPROACH
There still is no comprehensive neurobiological theory of delusion formation or maintenance, although various empirical, conceptual and speculative arguments have been proposed, often resulting from the discussion of psychotic states occurring during neurological disorders (Munro, 1994). In recent decades there has been significant progress in psychopharmacology, psychiatric genetics and functional neuroimaging in the study of psychotic and affective disorders. The problem remains, however, that most neurobiological studies have not addressed delusions per se, nor delusional disorder/paranoia, due to its rarity. Rather, they tend to be about schizophrenic or, worse, “psychotic” disorders in all their heterogeneity. These psychoses may or may not have had delusional features. So, all the neurobiological hypotheses that were suggested in connection with delusional syndromes must be read with the caveat that they might—at least partly—relate more to psychosis than to delusion, e.g. the hypotheses of hyperdopaminergic activity, functional disconnection of frontal and temporal brain areas, or disturbed basal information processing, as detectable by evoked potential techniques. The clinical efficacy of antipsychotics in acutely psychotic patients with delusional and hallucinatory syndromes is an argument in favor of the hypothesis of dopaminergic hyperactivity in mesolimbic and mesocortical circuits, since these agents have in common their dopamine antagonistic properties. As for delusions, however, this efficacy is typically limited to acute or subacute states, whereas chronic delusions, and especially the rare condition of paranoia, often, although not invariably, prove resistant to antipsychotic (and other biological and psychotherapeutic) treatments. A hypothesis proposed by Spitzer (1995) combines the aspect of disturbed dopaminergic neurotransmission in deluded patients with the concept of neural networks derived from computational science. On the basis of replicated findings from word association studies (“semantic priming paradigm”), he suggests that elevated dopaminergic transmission will result in an increased signal-to-noise difference in the neural network. In computer simulation models, the artificial net will show properties that—in a far-reaching conclusion by Spitzer—resemble clinical features of deluded patients, e.g. the tendency to relate any experience, however irrelevant it may objectively be, to the patient’s personal situation, often in a negative or even threatening way.
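Spitzer's simulation claim lends itself to a small illustration. The following Python sketch is not his actual model; it is a minimal, assumed stand-in showing how a dopamine-like gain parameter (acting as an inverse temperature) turns weak, noisy evidence into a near-certain "interpretation".

```python
# Minimal sketch (not Spitzer's actual simulations) of the gain /
# signal-to-noise idea: higher neuromodulatory gain makes a network commit
# strongly to one interpretation even when the input is mostly noise.
import numpy as np

rng = np.random.default_rng(0)

def interpret(evidence, gain):
    """Softmax readout over candidate interpretations; `gain` stands in for
    dopaminergic modulation (an inverse-temperature parameter)."""
    z = gain * evidence
    p = np.exp(z - z.max())      # numerically stable softmax
    return p / p.sum()

# Weak, noisy evidence for five candidate interpretations of an ambiguous event.
evidence = rng.normal(0.0, 0.1, size=5)

for gain in (1.0, 10.0, 50.0):
    p = interpret(evidence, gain)
    entropy = -np.sum(p * np.log(p))
    print(f"gain={gain:5.1f}  max belief={p.max():.2f}  entropy={entropy:.2f}")

# At low gain the beliefs stay near-uniform (high entropy); at high gain one
# essentially arbitrary interpretation dominates: small noise differences are
# amplified into near-certainty, loosely analogous to attaching overwhelming
# personal significance to an objectively irrelevant event.
```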
ANALYTICAL PHILOSOPHY OF MIND/LINGUISTIC APPROACH
In recent philosophical literature, there is an interesting line of thought concerning the qualitative status of subjective experiences that is important for the psychiatrist. The meaning of “qualitative” here is the specific quality of a certain experience, for example the experience of color or pain. This is usually called the “qualia-problem”. The question is what precisely makes the difference between a statement of internal experience (e.g., “I like that rich red color”) and a statement about the outer world (e.g., “It is raining”). An important difference is that utterances about one’s own mental states are not subject to external validation and there is little expectation of testing them, whereas statements about the outer world are always verifiable and subject to corrections, whether by observation or superior rational arguments by another person. To make this central issue more concrete, the statements, “I have a headache”, “I am sad”, and, “I am angry”, cannot be “corrected” by any argument by another person. The property “incorrigibility”—at least since Jaspers’ writings—also constitutes a prominent criterion of delusional states. Spitzer (1990) applied this formal argument to delusional statements and came to the conclusion that we should identify delusion whenever a person speaks about the outer world with the same high degree of subjective certainty that is usually only observed in utterances about one’s “inner” experiences—i.e. with the quality of “incorrigibility”. For example, if a paranoid person says that he or she is being observed by the secret service all day, this statement, if delusional, would have the same ‘incorrigible’ degree of subjective certainty as the sentence, “I am sad”.
HALLUCINATIONS
A delusion might be an attempt at explaining a hallucinatory experience. Wernicke called such a delusion a delusion of explanation. However, even the early description by Lasegue in 1852 of delusions of persecution and of their common association with auditory hallucinations never firmly stated the temporal relationship between delusions and hallucinations. We cannot call upon any established knowledge in the field of study of hallucinations to help answer the question. French psychiatry does distinguish two types of hallucinations, one of which is, one might hold, more like a delusion than a hallucination. The two types are the true hallucination, with full impression of the external nature of the sensation, and the so-called mental hallucination, where there is no impression of the external nature of the sensation, only a belief that one has seen something or, very commonly, that one has heard voices or noises or persons talking to one. The phenomenon of mental hallucination probably deserves a place amongst the other phenomena of delusion and hallucination.
DE CLERAMBAULT’S AUTOMATISMS
The role of the hallucinatory types of experience is better discussed together with all the other so-called automatisms. De Clerambault holds that delusions are the reactions of an abnormal personality to automatisms. Briefly, his theory is an anatomical hypothesis that systematized chronic hallucinatory psychosis is based on anatomical processes in the brain due to infections, lesions, toxins, traumata or sclerosis. These anatomical insults produce mental automatisms which mark the beginning of the psychosis. Contrary to prevalent beliefs de Clerambault maintained that at the beginning these automatisms were neutral in feeling tone. The patient tended to be puzzled by them but they were neither pleasant nor unpleasant. De Clerambault also described these automatisms as non-sensory in character, to distinguish them from hallucinations [ Table 2 ]. A patient assailed by such automatisms may attempt to explain them as intentional and produce delusions such as delusions of influence, possession, persecution and so on. De Clerambault’s theoretical notions regarding the causation of chronic hallucinatory psychosis have been subjected to criticism. In the absence of published studies of the frequency and nature of the relationship between the automatisms and delusional states, the automatisms remain as hypothetical causes of delusions.
PERCEPTIONAL APPROACH
As Maher (1974) suggested, a delusion is—contrary to the classical position—not a cognitive disturbance, especially leading to flawed conclusions from correctly perceived sensory input, but a normal cognitive reaction to unexpected, strange mental events, especially perceptions. In early stages of delusional or, more generally, psychotic disorders the patient may register distressing alterations in sensory qualities; e.g., things seem bigger or smaller than usual, or look, feel or smell different. Such deeply worrying strangeness of experiences is regarded as the starting point of a development leading from suspiciousness to vague paranoid ideation and, finally, to systematized delusions. These experiences may be partly explained or at least made less frightening by the construction of a theoretical background of someone “who does all this deliberately” on the grounds of certain motives, be they known to the patient or not. This position, of course, marks a sharp contrast to Kurt Schneider’s view of “delusional experience”.
ATTRIBUTIONAL AND COGNITIVE PSYCHOLOGY APPROACH
Since the 1990s there has been an increase in psychological research on cognitive processes in deluded patients. In this line of thought, the traditional assumption of undisturbed cognitive functions in delusional disorder, i.e. pathological content on the basis of normal form of thought, was questioned. In order to come closer to delusion-related phenomena themselves—as compared to the much broader psychosis-related phenomena—a number of studies compared patients with and without delusional ideation. Such a process also led to a number of interesting therapeutic implications. Three approaches are worthy of mention.
Decision-making paradigm: Several groups found that in simple, affectively neutral decision-making paradigms, a deluded person needs less information to arrive at a definite decision than persons without a delusion or people with a depressive disorder; the latter needed significantly more information. With regard to delusions, this phenomenon was called “jumping to conclusions” and was interpreted as an argument for disturbed cognitive processes in the case of (persecutory) delusion (Garety & Freeman, 1999). A simple simulation of this paradigm is sketched after this list.

Attribution psychology: A number of research groups confirmed the finding that, in comparison to healthy persons, deluded patients tend to attribute negative events or situations more often to other people or to external circumstances and not to themselves. This is also true for topics that have nothing to do with the actual delusional theme. For clinicians having had experience with paranoid patients, this is not a surprising finding, but it becomes interesting when regarded as an argument in favor of stable pathological patterns in the social cognition of deluded persons. Recently, this path has reached beyond the attributional perspective itself and encompasses cognitive models of delusional thinking in general, sometimes with a strong neurobiological impact (Blackwood et al., 2001).

Theory of mind: According to Frith & Frith (1999), patients with paranoid schizophrenia suffer from a deficit in understanding correctly what others think about the patient and what their future attitudes or actions towards the patient might be. This phenomenon is well known from autism research, and is often called “theory of mind deficit”. It is the reduced ability to form a valid hypothesis about another person’s state of mind with regard to oneself. Paranoid or, more generally speaking, delusional ideation in this view is a result of disturbed cognitive and social metarepresentation.
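The jumping-to-conclusions finding is quantitative, so a toy simulation can make it concrete. The sketch below uses the classic two-urn beads task with simple Bayesian updating; the two decision thresholds are illustrative assumptions, not values fitted to patient data.

```python
# Two-urn beads task: beads come from an urn that is 85:15 pink/green (or the
# reverse), and the observer may decide at any point which urn it is.
import numpy as np

rng = np.random.default_rng(1)

def draws_to_decision(threshold, p_major=0.85, n_max=20):
    """Draw beads from the mostly-pink urn, update P(urn) by Bayes' rule, and
    return how many draws occur before the belief crosses `threshold`."""
    log_odds = 0.0
    llr = np.log(p_major / (1 - p_major))      # evidence carried by one bead
    for n in range(1, n_max + 1):
        pink = rng.random() < p_major
        log_odds += llr if pink else -llr
        p_a = 1.0 / (1.0 + np.exp(-log_odds))  # posterior for the pink urn
        if max(p_a, 1.0 - p_a) >= threshold:
            return n
    return n_max

trials = 1000
cautious = np.mean([draws_to_decision(0.95) for _ in range(trials)])
hasty = np.mean([draws_to_decision(0.75) for _ in range(trials)])
print(f"mean draws to decision: cautious {cautious:.1f}, hasty {hasty:.1f}")

# The agent with the lower threshold commits after fewer beads, which is the
# "jumping to conclusions" pattern reported for deluded participants.
```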
DEFINITION OF DELUSION
There can be no phenomenological definition of delusion, because the patient is likely to hold this belief with the same conviction and intensity as he holds other non-delusional beliefs about himself; or as anyone else holds intensely personal non-delusional beliefs. Subjectively, a delusion is simply a belief, notion or idea.
Kraepelin in the ninth edition of his Textbook defined delusional ideas as pathologically derived errors, not amenable to correction by logical proof to the contrary. As per Stoddart, a delusion is a judgment which cannot be accepted by people of the same class, education, race and period of life as the person who experiences it.

Jaspers (1959) regarded a delusion as a perverted view of reality, incorrigibly held, having three components:

They are held with unusual conviction.
They are not amenable to logic.
The absurdity or erroneousness of their content is manifest to other people.

Hamilton (1978) defined delusion as ‘A false, unshakeable belief which arises from internal morbid processes. It is easily recognizable when it is out of keeping with the person’s educational and cultural background.’ According to Sims (2003), a delusion is a false, unshakeable idea or belief which is out of keeping with the patient’s educational, cultural and social background; it is held with extraordinary conviction and subjective certainty. In the Diagnostic and Statistical Manual of Mental Disorders, a delusion is defined as: A false belief based on incorrect inference about external reality that is firmly sustained despite what almost everybody else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary. The belief is not one ordinarily accepted by other members of the person’s culture or subculture (e.g. it is not an article of religious faith).
The fact that a delusion is false makes it easy to recognize but this is not its essential quality. A very common delusion among married persons is that their spouses are unfaithful to them. In the nature of things, some of these spouses will indeed have been unfaithful; the delusion will therefore be true, but only by coincidence (Casey & Kelly, 2008).
Kendler et al. (1983) have proposed several poorly correlated vectors of delusional severity:
Conviction: The degree to which the patient is convinced of the reality of the delusional beliefs.
Extension: The degree to which the delusional belief involves areas of the patient’s life.
Bizarreness: The degree to which the delusional belief departs from culturally determined consensual reality.
Disorganization: The degree to which the delusional beliefs are internally consistent, logical and systematized.
Pressure: The degree to which the patient is preoccupied and concerned with the expressed delusional beliefs.
Affective response: The degree to which the patient’s emotions are involved with such beliefs.
Deviant behavior resulting from delusions: Patients sometimes, but not always, act upon their delusions.
CLASSIFICATION
There is no recognized way of classifying delusions according to any phenomenological principles. Tables 3 and 4 give the classification given by Cutting (1997).
Primary and secondary delusions
The term primary implies that the delusion is not occurring in response to another psychopathological form such as a mood disorder. According to Jaspers the core of primary delusion is that it is ultimately un-understandable. Secondary delusions are understandable when a detailed psychiatric history and examination is available; that is, they are understandable in terms of the patient’s mood state, the circumstances of his life, the beliefs of his peer group, and his personality. A delusion, whether primary or secondary in nature, is based on delusional evidence: the reason the patient gives for holding his belief is, like the belief itself, false, unacceptable and incorrigible. Gruhle (1915) considered that a primary delusion was a disturbance of symbolic meaning, not an alteration in sensory perception, apperception or intelligence. Wernicke (1906) formulated the concept of an autochthonous idea: an idea which is native to the soil, aboriginal, arising without external cause. The trouble with finding supposed autochthonous or primary delusions is that it can be disputed whether they are truly autochthonous. For this reason they are not considered of first rank in Schneider’s (1957) classification of symptoms.
Types of primary delusions
Delusional mood/atmosphere; Delusional perception; Delusional memory; Delusional ideas; Delusional awareness.
Delusional mood
It is usually a strange, uncanny mood in which the environment appears to be changed in a threatening way but the significance of the change cannot be understood by the patient who is tense, anxious and bewildered. Finally, a delusion may crystallize out of this mood and with its appearance there is often a sense of relief.
Delusional perception
In this, an abnormal significance, usually in the sense of self-reference, is attributed to a normal perception despite the absence of any emotional or logical reason. Jaspers delineated the concept of delusional percept, and Gruhle (1915) used this description to cover almost all delusions. Schneider (1949) considered the essence of delusional perception to be the abnormal significance attached to a real percept without any cause that is understandable in rational or emotional terms; it is self-referent, momentous, urgent, of overwhelming personal significance and, of course, false.
Delusional memory
This is the symptom in which the patient recalls as remembered an event or idea that is clearly delusional in nature; that is, the delusion is retrojected in time. These are sometimes called retrospective delusions.
Delusional ideas
They appear abruptly in the patient’s mind, are fully elaborated, and unheralded by any related thoughts.
Delusional awareness
Delusional awareness is an experience which is not sensory in nature, in which ideas or events take on an extreme vividness as if they had additional reality. Delusional significance is the second stage of the occurrence of delusional perception. Objects and persons are perceived normally, but take on a special significance which cannot be rationally explained by the patient. Fine distinctions are sometimes imposed upon the classification of primary delusions, but are more collector’s items than features of useful clinical significance.
CONTENT OF DELUSIONS
Delusions are infinitely variable in their content but certain general characteristics commonly occur. The content is determined by the emotional, social and cultural background of the patient. Common general themes include persecution, jealousy, love, grandiosity, religion, nihilism, hypochondriasis and several others.
Delusion of persecution
It is the most frequent content of delusion. It was distinguished from other types of delusion and other forms of melancholia by Lasegue (1852). The interfering agent may be animate or inanimate, other people or machines; it may be a system, organization or institution rather than an individual. Sometimes the patient experiences persecution as a vague influence without knowing who is responsible. It may occur in conditions such as schizophrenia, affective psychosis (manic and depressive types) and organic states (acute and chronic). Persecutory overvalued ideas are a prominent facet of the litigious type of paranoid personality disorder.
Delusion of infidelity
Described by Ey (1950), it may be manifested as a delusion, an overvalued idea, a depressive affect or an anxiety state. Various terms have been used to describe abnormal, morbid or malignant jealousy. Kraepelin used the term ‘sexual jealousy’. Enoch and Trethowan (1979) have considered the demonstration of a delusion of infidelity important in distinguishing the psychotic from the other types.
Mullen (1997) has classified morbid jealousy with the disorders of passion, in which there is an overwhelming sense of entitlement and a conviction that others are abrogating their rights. The other two are the querulant, who are indignant at infringements of their rights, and the erotomanic, who are driven to assert their rights of love. Delusion of infidelity may occur without other psychotic symptoms. Such delusions are resistant to treatment and do not change with time. Delusions of jealousy are common with alcohol abuse; they may also occur in some organic states, and are often associated with impotence, e.g. the punch-drunk syndrome of boxers following multiple contrecoup contusions. Morbid jealousy arises with the belief that there is a threat to the exclusive possession of the wife, but this is just as likely to occur from conflicts inside the husband himself, his own inability to love or his sexual interest directed towards someone else, as from changing circumstances in his environment or his wife’s behavior. Husbands or wives may show sexual jealousy, as may sexual cohabitees and homosexual pairs. Morbid jealousy makes a major contribution to the frequency of wife battering and is one of the commonest motivations for homicide.
Delusions of love
Erotomania was described by Sir Alexander Morrison (1848) as being: ‘Characterized by delusions; the patient’s love is of a sentimental kind; he is wholly occupied by the object of his adoration, whom, if he approach, it is with respect. The fixed and permanent delusions attending erotomania sometimes prompt those laboring under it to destroy themselves or others, for though in general tranquil and peaceful, the patient sometimes becomes irritable, passionate and jealous.’ Erotomania is commoner in women than in men, and a variety has been called ‘old maids’ insanity’ by Hart (1921), in which persecutory delusions often develop. These have sometimes been classified as paranoia rather than paranoid schizophrenia; these delusional symptoms sometimes occur in the context of manic-depressive psychosis. Trethowan (1967) demonstrated the social characteristics of erotomania, relating the patient’s previous difficulties in parental relationships to the present erotomania. A variation of erotomania was described by, and retains the name of, de Clerambault (1942). Typically, a woman believes a man, who is older and of higher social status than she, is in love with her.
Grandiose delusions
In this the patient may believe himself to be a famous celebrity or to have supernatural powers. Expansive or grandiose delusional beliefs may extend to objects, so leading to delusion of invention. Grandiose and expansive delusions may also be part of fantastic hallucinosis, in which all forms of hallucinations occur.
Religious delusions
The religious nature of the delusion is seen as a disorder of content dependent on the patient’s social background, interests and peer group. The form of the delusion is dictated by the nature of the illness. So religious delusions are not caused by excessive religious belief, nor by the wrongdoing which the patient attributes as cause, but they simply accentuate that when a person becomes mentally ill his delusions reflect, in their content, his predominant interests and concerns. Although common, they formed a higher proportion in the nineteenth century than in the twentieth century and are still prevalent in developing countries.
Delusions of guilt and unworthiness
Initially the patient may be self-reproachful and self-critical which may ultimately lead to delusions of guilt and unworthiness, when the patients believe that they are bad or evil persons and have ruined their family. They may claim to have committed an unpardonable sin and insist that they will rot in hell for this. These are common in depressive illness, and may lead to suicide or homicide.
Delusions of negation/nihilistic delusions
These are the reverse of grandiose delusions where oneself, objects or situations are expansive and enriched; there is also a perverse grandiosity about the nihilistic delusions themselves. Feelings of guilt and hypochondriacal ideas are developed to their most extreme, depressive form in nihilistic delusions.
Factors concerned in the germination of delusions:
Disorder of brain functioning
Background influences of temperament and personality
Maintenance of self-esteem
The role of affect
As a response to perceptual disturbance
As a response to depersonalization
Associated with cognitive overload
Factors concerned in the maintenance of delusions:
The inertia of changing ideas and the need for consistency
Poverty of interpersonal communication
Aggressive behavior resulting from persecutory delusions provokes hostility
Delusions impair respect for, and the competence of, the sufferer and promote compensatory delusional interpretation
None of these factors are absolute but any or all may act synergistically to initiate and maintain delusion.
STAGES OF DELUSION FORMATION
Conrad proposed five stages involved in the formation of delusions:
Trema: Delusional mood representing a total change in the perception of the world
Apophany: A search for, and the finding of, new meaning for psychological events
Anastrophy: Heightening of the psychosis
Consolidation: Forming of a new world or psychological set based on new meaning
Residuum: Eventual autistic state
THEORIES OF DELUSION FORMATION
Psychodynamic theory
Freud (1911) proposed that delusion formation involves denial, contradiction and projection of repressed homosexual impulses that break out from the unconscious.
Delusions as explanations of experience
Binswanger & Minkowski (1930) proposed disordered experiences of space and time, leading to imprisoned and controlled feelings. Later, in 1942, de Clerambault put forth the view that chronic delusions resulted from abnormal neurological events (infections, intoxications, lesions). Maher offered a cognitive account of delusions which emphasized disturbances of perception. He proposed that a delusional individual suffers from primary perceptual abnormalities and seeks an explanation, which is then developed through normal cognitive mechanisms; the explanation (i.e. the delusion) is derived by a process of reasoning that is entirely normal. Also, a delusion is maintained in the same way as any other strong belief. These are further reinforced by the anxiety reduction that comes from developing an explanation for disturbing or puzzling experiences.
von Domarus rule
He postulated that delusions in schizophrenia arise from faulty logical reasoning. The defect apparently consists of the assumption of the identity of two subjects on the ground of identical predicates (e.g. Lord Rama was a Hindu, I am a Hindu, and therefore I am Lord Rama).
Learning theory
Learning theorists have tried to explain delusions in terms of avoidance response, arising specially from fear of interpersonal encounter.
Luhmann’s system theory
Luhmann holds that information, message and understanding connect the social systems with the psychic ones. If the psychic system fails to recognize the message or information correctly, or is unable to negotiate between understanding and misunderstanding of a message, it detaches itself from the social system to which it is normally closely connected. This detachment releases the possibility of unhindered autistic fulfillment of desires, and uncontrolled fear may appear as delusions.
Neuro-computational model
The cerebral cortex can be viewed as a computational surface that creates and maintains dynamic maps of important sensorimotor and higher-level aspects of the organism and its environment, reflecting the organism’s experience. Acute delusions are the result of an increased activity of the neuromodulators dopamine and norepinephrine. This not only leads to a state of anxiety, increased arousal and suspicion, but also to an increased signal-to-noise ratio in the activation of neural networks involved in higher-order cognitive functions, leading to the formation of acute delusions. Alteration in the neuromodulatory state not only causes the occurrence of unusual experiences but also modifies neuroplasticity, which influences the mechanisms of long-term change. So chronic delusions may be maintained by a permanently increased neuromodulatory state, or by an extremely decreased noradrenergic neuromodulatory state (Blackwood et al., 2001).
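The claim that an acute high-gain state can be consolidated into a chronic one is easy to caricature in code. The sketch below is an assumed toy model, not the model of Blackwood et al.: a high neuromodulatory gain picks an arbitrary "winner" response to a neutral input, and Hebbian plasticity then entrenches that response so it survives the return to normal gain.

```python
# Toy sketch of the neuro-computational claim: acutely raised gain sharpens
# the response to a neutral input, and Hebbian plasticity consolidates it so
# the bias persists at normal gain. Sizes, rates and gain values are assumed.
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_units = 8, 4
W = rng.normal(0.0, 0.01, size=(n_units, n_inputs))  # weak initial weights
x = rng.normal(0.0, 0.2, size=n_inputs)              # an objectively neutral event

def respond(W, x, gain):
    z = gain * (W @ x)
    p = np.exp(z - z.max())
    return p / p.sum()                               # soft winner-take-all

print("before, at normal gain:", respond(W, x, gain=1.0).round(2))

# Acute phase: high gain singles out an arbitrary winner, and Hebbian updates
# strengthen the mapping from this neutral input to that interpretation.
for _ in range(100):
    a = respond(W, x, gain=20.0)
    W += 0.1 * np.outer(a, x)

print("after, at normal gain :", respond(W, x, gain=1.0).round(2))

# The same neutral input now evokes a committed response even at normal gain,
# a crude analogue of a delusion maintained by long-term plastic change.
```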
THEORIES OF NEUROCOGNITIVE AND EMOTIONAL DYSFUNCTION
Theory of mind
It refers to the capacity of attributing mental states such as intentions, knowledge, beliefs, thinking and willing to oneself as well as to others. Amongst other things this capacity allows us to predict the behavior of others. Frith postulated that paranoid syndromes exhibit a specific ToM deficit, e.g., delusions of reference can be explained, at least in good part, by the patients’ inability to put themselves in another person’s place and thus correctly assess their behavior and intentions. Thought insertion and ideations of control by others can be traced back to dysfunctional monitoring of one’s own intentions and actions. Hence, thoughts enter the patient’s consciousness without his or her awareness of any intention to initiate these thoughts. Since deluded patients in symptomatic remission performed as well as normal controls at ToM tasks, ToM deficits seem to be a state rather than a trait variable.
The role of emotions
Delusions driven by underlying affect (mood congruent) may differ neurocognitively from those which have no such connection (mood incongruent). Thus, specific delusion-related autobiographical memory contents may be resistant to normal forgetting processes, and so can escalate into continuous biased recall of mood-congruent memories and beliefs. Regarding threat and aversive response, preferential identification of emotionally weighted stimuli relevant to delusions of persecution has been observed.
Probabilistic reasoning bias
It assumes that the probability-based decision-making process in delusional individuals requires less information than that of healthy individuals, causing them to jump to conclusions; this is neither a function of impulsive decision-making nor a consequence of memory deficit. Kemp et al. pointed out that deluded patients are not deluded about everything and that there may be no global deficit in reasoning abilities. The findings on reasoning abilities in delusional patients are only subtle, and one might question the strength of their causality in delusional thinking.
Theory of attributional bias
Bentall and others proposed that negative events that could potentially threaten self-esteem are attributed to others (externalized causal attribution) so as to avoid a discrepancy between the ideal self and the self as it is experienced. An extreme form of a self-serving attributional style could explain the formation of delusional beliefs, at least in cases where the delusional network is based on ideas of persecution, without any co-occurring perceptual or experiential anomaly. During the course of illness, the preferential encoding and recall of delusion-sensitive material can be assumed to continually reinforce and propagate the delusional belief.
Multifactorial model
The emergence of symptoms is assumed to depend upon an interaction between vulnerability and stress. The formation of a delusion therefore begins with a precipitator, such as a life event, a stressful situation or drug use, leading to arousal and sleep disturbance. This often occurs against the backdrop of long-term anxiety and depression. The arousal initiates inner-outer confusion, causing anomalous experiences such as voices, actions experienced as unintended, or perceptual anomalies, which turn on a drive for a search for meaning, leading to the selection of an explanation in the form of a delusional belief [ Figure 1 ].
Neurobiological theories
Earlier works, like Hartley (1834), suggested that vibrations caused by a brain lesion may match the vibrations associated with real perception. Ey (1952) believed delusion to be a sign of cerebral dysfunction, and Morselli listed the metabolic states relevant for delusional pathogenesis. Jackson (1894) attributed the pathogenesis of delusions to a combination of loss of function of the damaged parts of the brain and the activity of the parts that remain intact. Cummings (1985) found that a wide variety of conditions can induce psychosis, particularly those that affect the limbic system, temporal lobe and caudate nucleus. He also noted that dopaminergic excess or reduced cholinergic activity also predisposes to psychosis. He suggested that the common locus is limbic dysfunction leading to inappropriate perception and paranoid delusion formation.
Septo-hippocampal dysfunction model: The dysfunction leads to erroneous identification of neutral stimuli as important and to judging the expected as actual. Storage of this erroneous information leads to delusion formation.

Semantic memory dysfunction model: Delusions form owing to the inappropriate laying down of semantic memories and their recollection.

Regional correlates in Alzheimer’s disease: Studies revealed a significant relationship between the severity of delusional thought and the metabolic rates in three frontal regions, and indicated that severity of delusions was associated with hypometabolism in additional prefrontal and anterior cingulate regions. Delusion of alien control has been linked with hyperactivation of the right inferior parietal lobule and cingulate gyrus, brain regions important for visuospatial functions. Organic delusional disorders are more likely to be noted in extrapyramidal disorders involving the basal ganglia and thalamus and in limbic system disease. Alexander et al. (1986) proposed five structural-functional loops; any lesions, dysfunctions or derangements that affect any part of these loops can be expected to alter beliefs and emotional behavior [ Figure 2 ].
THE PERSISTENCE AND ELASTICITY OF DELUSIONS
Prediction error theories of delusion formation suggest that, under the influence of an inappropriate prediction error signal, possibly as a consequence of dopamine dysregulation, events that are insignificant and merely coincident seem to demand attention, feel important and relate to each other in meaningful ways. Delusions ultimately arise as a means of explaining these odd experiences (Kapur, 2003; Maher, 1974). The insight relief gained by arriving at an explanatory scheme leads to strong consolidation of the scheme in memory. In support of this view, aberrant prediction error signals during learning in patients with first-episode psychosis have been confirmed experimentally. Furthermore, the magnitude of the aberrant prediction error signal correlated with delusion severity across a group of patients with first-episode psychosis.

However, there are important characteristics of delusions that still demand explanation, notably their persistence. Normal associations can extinguish if they prove erroneous, and normal beliefs can be challenged and modified. But delusions are noteworthy for the fact that they remain even in the absence of support and in the face of strong contradictory evidence. We believe that this striking clinical phenomenon can be explained within the same framework by considering key findings from the animal learning literature, a literature that has formerly been invoked to explain chronic relapse to drug abuse: extinction and reconsolidation. If delusion formation may be explained in terms of associative learning, then perhaps extinction may represent the process through which delusions are resolved.

Extinction involves a decline in responding to a stimulus that has previously been a consistent predictor of a salient outcome. Prediction error is also central to extinction. It has been suggested that negative prediction error (a reduction in the baseline firing rate of prediction-error-coding neurons) leads the organism to categorize the extinction situation as different from the original, reinforced, situation, and it now learns not to expect the salient event in that situation. This learning focuses on contextual cues, allowing the animal to distinguish the newly non-reinforced context from the old, reinforced one. Extinction does not involve unlearning of the original association, but rather the formation of a new association between the absence of reinforcement and the extinction situation. Extinction experiences (the absence of expected reinforcement) invoke an inhibitory learning process which eventually overrides the original cue response in midbrain dopamine neurons. Individuals with psychosis do not learn well from these absent but expected events, nor do they consolidate the learning that does occur.

But there is more to delusion maintenance than persistence in the absence of supportive evidence: delusions persist even when there is evidence that directly contradicts them. When confronted with counterfactual evidence, deluded individuals do not simply disregard the information. Rather, they may make further erroneous extrapolations and even incorporate the contradictory information into their belief. So, while delusions are fixed, they are also elastic and may incorporate new information without shifting their fundamental perspective.
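The extinction argument can be restated in the standard Rescorla-Wagner form. The sketch below is a schematic reading of the passage, not a published patient model: belief strength is updated in proportion to prediction error, and blunting the negative-error channel (expected events that fail to occur) is taken as a stand-in for the claim that psychotic individuals learn poorly from such events.

```python
# Rescorla-Wagner sketch of persistence: extinction depends on learning from
# negative prediction errors, so blunting that channel leaves the association
# (the "belief") largely intact. All rates and trial counts are assumptions.

def run(alpha_pos, alpha_neg, n_acq=30, n_ext=60):
    """Return associative strength V after acquisition then extinction."""
    v = 0.0
    for _ in range(n_acq):            # cue reliably followed by outcome (lambda = 1)
        v += alpha_pos * (1.0 - v)    # positive prediction error
    for _ in range(n_ext):            # outcome now withheld (lambda = 0)
        v += alpha_neg * (0.0 - v)    # negative prediction error drives extinction
    return v

healthy = run(alpha_pos=0.2, alpha_neg=0.2)
blunted = run(alpha_pos=0.2, alpha_neg=0.02)  # weak learning from absent events
print(f"V after 60 disconfirming trials: healthy {healthy:.3f}, blunted {blunted:.3f}")

# With an intact negative-error channel the association extinguishes almost
# completely; with a blunted channel substantial strength survives, which is
# persistence in the face of repeated contradictory evidence.
```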
RESOLUTION OF DELUSION
Once a simple delusional belief is adopted with conviction, the subsequent course is very variable.
Some patients have fleeting or brief delusional states, spontaneously remitting and returning to normal. Others respond well to standard treatment. Others elaborate and develop their belief into a comprehensive system which may remain unaltered even with regular medication.
The multidimensionality of delusional experience also has implications for the conceptualization of the temporal course of psychotic decompensation and resolution. Individual dimensions of delusional experience often change independently of one another during the course of a psychotic episode, so that recovery can be determined by changes in one of the several dimensions (Garety and Freeman, 1999).
PATTERN OF RESOLUTION
Encapsulation: Patients vary greatly in the degree to which they can maintain their original personality and adapt to a normal life. Encapsulation is frequently seen in residual states. In some cases one sees a longitudinal splitting, as it were, in the current of life: both the reality-adapted and the delusional life go on alongside each other. On certain occasions (e.g. meeting certain people, returning to familiar locations, meeting the doctor who had treated the patient) the delusional complex comes to the surface and florid symptoms reappear.
Jorgensen (1995) found three types of recovery, one with full and the other two with partial recovery of delusional beliefs. In patients with partial recovery, a decrease in pressure preceded the decrease in other dimensions. For two-thirds there was no change in the degree of insight during recovery.
MATERIALS AND METHODS
Sample
The sample consisted of 30 alcoholic and 30 non-alcoholic parents from the Kanke Block of Ranchi district. The age range of the parents was 25 to 50 years. Parents were included if they had a history of more than five years of alcohol abuse and had been interacting with their children for at least five years. They had a minimum of primary-level educational qualification and gave consent to participate in the study. Parents with multiple substance abuse and co-morbid psychiatric illness were excluded.
Child participants were in the age range of 12 to 17 years. Only the eldest children, with no history of developmental delay, were taken. The mean age of children of alcoholic parents was 13.50 ± 1.13 years, and of children of non-alcoholic parents 13.96 ± 1.51 years. Socio-demographic characteristics of children of alcoholic and non-alcoholic parents are given in Table 1 .
Tools
Socio-demographic data sheet
A socio-demographic data sheet was prepared to obtain background information about the subject on dimensions like age, sex, marital status, education, income, residential area, type of family, etc.
Parent-child relationship scale (PCRS; Rao, 1989)
The present scale is adapted from the revised Roe-Seigalman parent-child relationship questionnaire, which measures the characteristic behavior of parents as experienced by their children. It consists of 100 items categorized into ten dimensions, i.e., protecting, symbolic punishment, rejecting, object punishment, demanding, indifferent, symbolic reward, loving, object reward and neglecting.

At first, all the selected participants were contacted individually and consent to participate in the study was taken. They were informed regarding the purpose of the research. First, socio-demographic details were taken from the parents, and the Parent-Child Relationship Scale (PCRS) was administered to the children. The Statistical Package for the Social Sciences (SPSS), Version 13.0, was used for the analysis of the data. Percentage, chi-square test, t-test, and correlation were used to analyze the data.

RESULT
The sample consisted of a total of 60 participants: 30 children of alcoholic and 30 children of non-alcoholic parents. Table 2 shows the mean and standard deviation (SD) of scores obtained by children of alcoholic and non-alcoholic parents in different domains of PCRS towards the father. Significant difference was found in the domains of symbolic punishment, rejecting, objective punishment, demanding, indifferent, symbolic reward, loving and neglecting.
Table 3 shows the mean and standard deviation of scores obtained by children of alcoholic and non-alcoholic parents in various domains of PCRS towards the mother. Significant difference was found in the domains of symbolic punishment, rejecting, object punishment, indifferent, neglecting and demanding. Correlations between the various domains of the parent-child relationship and the duration of alcohol intake were computed for the children of alcoholic parents.
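For readers who want to reproduce this style of analysis, the following Python/scipy sketch mirrors the reported tests on hypothetical stand-in data; the score vectors and effect sizes are invented for illustration and are not the study data.

```python
# Sketch of the reported analyses: an independent-samples t-test between the
# two groups of children, and a Pearson correlation with duration of alcohol
# intake. All values below are hypothetical stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical PCRS "neglecting" domain scores, 30 children per group.
alcoholic_group = rng.normal(32, 5, size=30)
control_group = rng.normal(26, 5, size=30)
t, p = stats.ttest_ind(alcoholic_group, control_group)
print(f"t = {t:.2f}, p = {p:.4f}")

# Hypothetical positive association between neglecting scores and the
# parent's duration of alcohol intake within the alcoholic-parent group.
duration = rng.uniform(5, 20, size=30)                  # years of abuse
neglect = 24 + 0.6 * duration + rng.normal(0, 3, size=30)
r, p_r = stats.pearsonr(duration, neglect)
print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")
```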
There were significant correlations of neglecting and of object reward with the duration of alcohol intake [ Table 4 ].

DISCUSSION
The present study was conducted with the aim of comparing the parent-child relationship in children of alcoholic and non-alcoholic parents. In the parent-child relationship, a significant difference was found in the domains of symbolic punishment, rejecting, objective punishment, demanding, indifferent, symbolic reward, loving and neglecting for the father. In the child’s relationship with the mother, significant difference was found in the domains of symbolic punishment, rejecting, object punishment, indifferent and neglecting.
Previous studies also reported that families of alcoholics have lower levels of family cohesion, expressiveness, independence, and intellectual orientation and higher levels of conflict compared with non-alcoholic families. Some characteristics, however, are not specific to alcoholic families: impaired problem-solving ability and hostile communication are observed both in alcoholic families and in families with problems other than alcohol (Billings et al., 1979).

Alcoholic parents have a negative effect on their children because the effect of alcohol undermines their capacity to use their parenting skills in a number of ways. First, excessive drinking by the parents can lead to inconsistent parenting behavior. When the child misbehaves in a certain way, the parents may overreact by screaming at the child on one occasion; on another occasion, they may act indulgently towards the child. Consequently, the child receives mixed signals about appropriate behavior. In addition, the inconsistency in parenting behaviors creates an unpredictable and unstable environment that can undermine the child’s mental and emotional growth (Windle, 1996).

Haugland (2003) examined possible risk factors associated with child adjustment in a sample of children with alcohol-abusing fathers. Factors included were socioeconomic status, severity of the fathers’ alcohol abuse, parental psychological problems, and family functioning. The findings further suggested that child adjustment in families with paternal alcohol abuse is the result of an accumulation of risk factors rather than the effect of the paternal alcohol abuse alone. Both general environmental risk factors (psychological problems in the fathers, family climate, family health and conflicts) and environmental factors related to the parental alcohol abuse (severity of the alcohol abuse, the child’s level of exposure to the alcohol abuse, changes in routines and rituals due to drinking) were related to child adjustment. The results indicated the need to obtain both parents’ assessments of child adjustment, as the fathers’ assessments were associated with different risk factors compared to the mothers’. Four categories of families were distinguished based on the amount and type of disruption in family rituals and routines, i.e., protecting, emotionally disruptive, exposing, and chaotic families (Haugland, 2005).

In the present study, a significant positive correlation of neglecting and object reward with duration of alcohol intake was found. Eiden et al. (2004) examined the transactional nature of parent-child interactions over time among alcoholic and non-alcoholic families. Higher paternal alcohol consumption at 12 months was longitudinally predictive of negative parental behavior at 24 months. The results highlighted the nested nature of risk in alcoholic families and the direction of influence from parent to child during interactions, and suggested that the pathway to risk among these children is through negative parent-infant interactions. Parents who abuse alcohol are also known to exercise harsh discipline. As described above, alcoholics are easily provoked at the slightest offence. Therefore, they can be excessively harsh and arbitrary in their use of discipline. These forms of discipline can result in the growing alienation of the children from their parents (Windle, 1996).
There were certain limitations of the present study. Firstly, sample size was small. Secondly, only parent-child relationship was seen in the present study. Other areas like family environment and family interaction pattern, behavioral problems in children etc. were not included. Thirdly, response was taken only from the eldest child. | CONCLUSION
The results showed that the children of alcoholic parents tended to have more symbolic punishment, rejecting, objective punishment, demanding, indifferent, symbolic reward, loving and neglecting than children of non-alcoholic parents.

Aim:
The overall aim of the study was to examine the parent-child relationship in children of alcoholic and non-alcoholic parents.
Materials and Methods:
The sample consisted of 30 alcoholic and 30 non-alcoholic parents and their children, taken from the Kanke Block of Ranchi district. The sample was selected on the basis of inclusion and exclusion criteria. A socio-demographic data sheet and the Parent-Child Relationship Scale (Rao, 1978) were administered to the children.
Results:
In the child’s perception of the father in various domains of the parent-child relationship, significant difference at P < 0.01 was found in the domains of symbolic punishment, rejecting, objective punishment, demanding, indifferent, symbolic reward, loving and neglecting. In the child’s perception of the mother, significant difference at P < 0.01 was found in the domains of symbolic punishment, rejecting, object punishment, indifferent and neglecting.
Conclusion:
The result showed that the children of alcoholic parents tended to have more symbolic punishment, rejecting, objective punishment, demanding, indifferent, symbolic reward, loving and neglecting than children of non-alcoholic parents.

An alcoholic family’s home environment and the manner in which family members interact may contribute to the risk of the problems observed among children of alcoholics. Although alcoholic families are a heterogeneous group, some common characteristics have been identified. Families of alcoholics have lower levels of family cohesion, expressiveness, independence, and intellectual orientation and higher levels of conflict compared with non-alcoholic families (Filstead et al., 1981; Moos & Billings, 1982; Moos & Moos, 1984; Clair & Genest, 1986). Some characteristics, however, are not specific to alcoholic families. Impaired problem-solving ability and hostile communication are observed both in alcoholic families and in families with problems other than alcohol (Billings et al., 1979). Moreover, the characteristics of families with recovering alcoholic members and of families with no alcoholic members do not differ significantly, suggesting that a parent’s continued drinking may be responsible for the disruption of family life in an alcoholic home (Moos & Billings, 1982). Studies comparing children of alcoholics with those of non-alcoholics have also found that parental alcoholism is linked to a number of psychological disorders in children. Divorce, parental anxiety or affective disorders, or undesirable changes in the family or in life situations can add to the negative effect of parental alcoholism on children’s emotional functioning (Schuckit & Chiles, 1978; Moos & Billings, 1982). A number of influential clinicians (Black, 1982) have described children of alcoholics as victims of an alcoholic family environment characterized by disruption, deviant parental role models, inadequate parenting, and disturbed parent-child relationships. These family-related variables are thought to undermine normal psychological development and to cause distress and impaired interpersonal functioning, both acutely and chronically. In a study conducted on the effects of alcohol on parents’ interactions with children, it was found that parents are unable to respond appropriately to a child’s improper behavior: although the child is acting improperly, the intoxicated parents not only fail to discipline the child, but engage in parental indulgences that are inappropriate for the occasion (Lang et al., 1999). Eiden et al. (2004) examined the transactional nature of parent-child interactions over time among alcoholic and non-alcoholic families. They found that long-term alcohol intake was predictive of negative parental behavior. Kearns-Bodkin and Leonard (2008) suggested that children raised in alcoholic families may carry the problematic effects of their early family environment into their adult relationships. Hence, the parent-child relationship is very important while working with children of alcoholic parents. Keeping this point in view, the present study aimed to assess the parent-child relationship in children of alcoholic and non-alcoholic parents.
MATERIALS AND METHODS
The study was conducted on 150 female teachers from various schools of Bhopal. The stratified random sampling method was used for selection of the sample. The age range was 20-60 years and teaching experience varied from 1 to 35 years.
Tool
The subscale of ‘The Occupational Stress Indicator’ (Wendy Lord, 1993) was used to assess the stress-coping behaviour of the teachers. It is one of the subscales of the battery, consisting of 28 items comprising six dimensions of coping strategies, i.e., Logics, Involvement, Social Support, Task Strategies, Time Management and Home and Work Relations. The subjects were required to rate themselves on a 6-point rating scale. Hence the range of scores on the test could be between 28 and 168. The scores of the subjects were compared in terms of marital status, age, and level of teaching with the help of the ‘ t ’ test, and the ‘F’ test was used for comparing experience.

RESULTS AND DISCUSSION
Comparison of scores on the basis of marital status revealed significantly higher scores of married teachers on five dimensions of coping, i.e., logics, social support, task strategies, time management, and home and work relations, as well as the total score, indicating better coping ability of married teachers [ Table 1 ]. Stress-coping is closely related to the overall life satisfaction of the individual (Baum and Singer, 1982). The status of marriage brings considerable satisfaction to both men and women but delivers a special bonus to women in Indian society. Married women are not only happier than single women but they are also safer (Pant Amrita et al., 2002). On the other hand, studies reveal that overall satisfaction with life and the workplace is much lower among unmarried women (Pant Amrita et al., 2002; Wadud et al., 1998). Better coping with their job stress by married women can be explained with the ‘Spillover Model’ (Crouter, 1984), which suggests that the emotional states experienced in one sphere get transferred to the other areas of life.
Age was found to positively affect the stress-coping scores. Women in the age range of 40-60 years scored significantly higher than the women in the younger age range on all the dimensions of coping i.e. logics, involvement, social support, task strategies, time management, home and work relations as well as total score [ Table 2 ].
Studies in the past also indicated that as workers grow older they tend to cope better with their jobs (Glenn, Taylor and Weaver, 1977; Singh, 1980; Near et al., 1978; Wadud et al., 1998).
The findings on teaching experience indicated that teachers with up to five years of experience scored much lower on stress-coping than teachers with more than five years of experience on all the dimensions of coping, i.e., logics, involvement, social support, task strategies, time management, and home and work relations, as well as the total score, indicating that with increased experience women are in a better position to cope with job stress [ Table 3 ] (Wadud et al., 1998). Trendall (1989), in his study on primary, secondary and special school teachers, found that more stress was experienced by teachers with 5-10 years of experience, but senior teachers reported lesser stress. Prakash et al. (2002), in their study of university teachers, found no major differences between male and female teachers at varying teaching experience levels on measures of occupational role stressors and coping.
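As an illustration of the 'F' test mentioned in the Tool section, the sketch below runs a one-way ANOVA across three hypothetical experience bands; the group means, sizes and band boundaries are assumptions chosen only to show the mechanics, not the study data.

```python
# One-way ANOVA (F-test) comparing total coping scores across experience
# bands. The coping subscale ranges from 28 to 168, so means around 100-115
# are plausible illustrative values; the data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

low_exp = rng.normal(100, 12, size=50)   # up to 5 years of experience
mid_exp = rng.normal(108, 12, size=50)   # 6 to 15 years
high_exp = rng.normal(115, 12, size=50)  # more than 15 years

f, p = stats.f_oneway(low_exp, mid_exp, high_exp)
print(f"F = {f:.2f}, p = {p:.4f}")

# A significant F would then be followed by pairwise comparisons to locate
# which experience bands differ on coping.
```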
The comparison of primary and high school teachers did not show a significant difference in terms of stress-coping scores except on the dimension of logics, in which the high school teachers scored higher than the primary school teachers [ Table 4 ].
Empirical research has generated some hard data to suggest the sensitivity of marital status, age, experience and level of teaching as some of the significant demographic variables in determining coping with occupational stress among female teachers. These findings have gained added significance in view of the increasing influx of women into the workforce and the consequent transition of the traditional roles of women in society.
Comparison of scores on the basis of marital status revealed significantly higher scores of married teachers on five dimensions of coping i.e. logics, social support, task strategies, time management, home and work relations as well as total score, indicating better coping ability of married teachers [ Table 1 ]. Stress-coping is closely related to the overall life satisfaction of the individual (Baum and Singer, 1982). The status of marriage brings considerable satisfaction to both men and women but delivers special bonus to women in Indian society. Married women are not only happier than single women but they are also safer (Pant Amrita et al ., 2002). On the other hand studies reveal that an overall satisfaction with life and workplace is much lower among unmarried women (Pant Amrita et al ., 2002; Wadud et al ., 1998). Better coping of their job stress by married women can be explained with ‘Spillover Model’ (Crouter A., 1984), which suggests that the emotional states experienced in one sphere get transferred to the other areas of life.
Age was found to positively affect the stress-coping scores. Women in the age range of 40-60 years scored significantly higher than the women in the younger age range on all the dimensions of coping i.e. logics, involvement, social support, task strategies, time management, home and work relations as well as total score [ Table 2 ].
Studies in the past also indicated that as the workers grow older they tend to cope better with their jobs (Glenn, Taylor and Weaver, 1977; Singh, 1980; Near et al ., 1978, Wadud et al ., 1998).
The findings on teaching experience indicated that the teachers with up to five years of experience scored much lesser on stress-coping than the teachers with more than five years of experience on all the dimensions of coping i.e. logics, involvement, social support, task strategies, time management, home and work relations as well as total score indicating that with increased experience the women are in a better position to cope with the job stress [ Table 3 ] (Wadud et al ., 1998). Trendall (1989) in his study on primary, secondary and special school teachers found that more stress was experienced by the teachers with 5-10 years of experience, but senior teachers reported lesser stress. Prakash et al ., (2002) in their study of University teachers found no major differences between male and female teachers at varying teaching experience levels on measures of occupational role stressors and coping.
The comparison of primary and high school teachers did not show a significant difference in stress-coping scores except on the dimension of logics, on which the high school teachers scored higher than the primary school teachers [ Table 4 ].
Empirical research has thus generated some hard data suggesting that marital status, age, experience and level of teaching are significant demographic variables in determining how female teachers cope with occupational stress. These findings have gained added significance in view of the increasing influx of women into the workforce and the consequent transition of the traditional roles of women in society. | Background:
The study investigates the role of certain demographic variables in determining stress-coping behavior of female teachers.
Materials and Methods:
The sample consisted of 150 female teachers selected by the stratified sampling method from various schools of Bhopal. Stress-coping behavior was measured with the help of a subscale of ‘The Occupational Stress Indicator’ (Wendy Lord, 1993) consisting of 28 items encompassing six dimensions of coping strategies, i.e., Logics, Involvement, Social Support, Task Strategies, Time Management, and Home and Work Relations. The scores of the subjects were compared in terms of marital status, age and level of teaching with the help of the ‘ t ’ test, while the ‘F’ test was used for comparisons across experience levels.
Results:
Marital status, age, and experience were found to be significant determinants of stress-coping, whereas the scores did not differ significantly on the basis of level of teaching.
Conclusion:
Married teachers in the age range of 40-60 years, with higher experience, can cope better with job stress than their counterparts. | An individual cannot remain in a state of stress and strain, and tries to adopt some strategy to deal with the stress. Stress is also defined as a mismatch between demands and coping resources. Coping refers to the thoughts and acts people use to meet these internal and external demands.
Coping has been defined as the behavioral and cognitive efforts a person uses to manage the demands of a stressful situation (Chang & Strunk, 1999). Folkman, Lazarus, Gruen, and DeLongis (1986) define coping as “the person’s cognitive and behavioral efforts to manage the internal and external demands in the person-environment transaction”. In times of stress, an individual normally engages in certain coping strategies to handle the stressful situations and their associated emotions. The more an individual adopts adaptive coping strategies, the less his/her stress and the better his/her mental health. Several methods of coping have been described: feeling in control as a way of coping (Rubin, Peplau, & Salovey, 1993), optimism and pessimism as coping styles (Rubin, Peplau, & Salovey, 1993), approach and avoidant coping (Chang & Strunk, 1999), and appraisal and coping (Rubin, Peplau, & Salovey, 1993). Endler & Parker (1990) gave three different coping styles, i.e., task-oriented (problem-focused), emotion-oriented and avoidance-oriented coping.
Coping resources enable the individual to handle stressors more effectively, reduce the intensity of symptoms and recover faster from exposure. These are adaptive capacities that provide immunity against damage from stress, acting as psychological prophylaxis that can reduce the likelihood of stress (Baum & Singer, 1982). Some studies have demonstrated a relationship between coping and mental health. Ebata and Moos (1994), Simoni and Peterson (1997) and Srivastava (1991) found that positive coping (e.g., problem-solving action, logical analysis, information seeking) was positively related to wellbeing. In contrast, avoidance coping (e.g., denial or suppression of feelings) was associated with maladjustment to life stress. Kucukalic et al. (2003) emphasized that coping is a dynamic process involving a reciprocal relationship between the individual and his environment; they found that subjects who had faced torture used more maladaptive coping than those who had not.
Evidence from the research efforts on science teacher stress suggests strategies such as meditation, relaxation and engagement in leisure-time activities for palliating stress (Betkouski, 1981; Penny, 1982). Trendall (1989), in his study on primary, secondary and special school teachers, examined the relationship of age, sex, experience, qualification and level of responsibility with stress, strain and coping. Significant differences were found for sex and level of qualification. Teachers with 5-10 years of experience reported more stress, but senior teachers reported less stress.
In this study, coping was referred to as the cognitive and behavioral efforts used by an individual to handle difficulties and stress at work.
The aim of the present study was to investigate the influence of certain demographic factors in determining the stress-coping of female teachers. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):36-38 | oa_package/c0/29/PMC3016697.tar.gz |
|||
PMC3016698 | 21234161 | MATERIALS AND METHODS
Sample
The present study was a cross-sectional study for which a sample of 100 patients from the inpatient and outpatient services of the Ranchi Institute of Neuro Psychiatry and Allied Sciences was taken using purposive sampling. Of the 100 patients, 30 were suffering from Schizophrenia, 30 from Bipolar Affective Disorder, manic type, 10 from Bipolar Affective Disorder, depressive type, 20 were substance dependents and 10 were Obsessive Compulsive Disorder patients. Subjects were in the age range of 25-35 years and were educated at least up to the primary level. Both male and female subjects were taken for the study. Patients with any other neurological disorder or major physical illness were excluded. All subjects were cooperative and gave verbal consent for the study. Sample characteristics are given in Table 1 .
Tools
Socio-demographic and clinical data sheet
Socio-demographic and clinical details were collected on a socio-demographic and clinical data sheet especially designed for the study. It includes various socio-demographic variables (i.e., age, sex, marital status, family type, residence, education, religion, etc.) and clinical variables (i.e., diagnosis and total duration of illness).
Brief psychiatric rating scale (Overall et al., 1963)
This scale was administered to assess the severity of psychiatric symptoms.
Felt stigma scale (King et al., 2007)
This scale was developed with the help of the Stigma Scale published in the British Journal of Psychiatry in March 2007, which consists of a total of 28 items. Fifteen items appropriate to the socio-cultural aspects of the sample were selected for the study. The scoring was done on a three-point scale: strongly agree, agree and disagree. Eleven items were positively worded and four were negatively worded, so the scores were reversed for the negatively worded items. The minimum obtainable score is 15 and the maximum is 45; the higher the score, the higher the felt stigma. For computing levels of felt stigma, the scores were grouped into three levels: 15-25 as low, 26-35 as medium and 36-45 as high felt stigma.
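For illustration, the scoring rules described above can be expressed as a short program. The sketch below is in Python; the positions of the four negatively worded items are hypothetical, since the individual items are not listed here, and the 1-3 response coding is an assumption.

```python
# Illustrative scoring of the 15-item Felt Stigma Scale described above.
# Assumed response coding (not stated in the text): disagree = 1,
# agree = 2, strongly agree = 3.

NEGATIVE_ITEMS = {3, 7, 11, 14}  # hypothetical positions of the 4 reverse-scored items

def felt_stigma_score(responses):
    """responses: list of 15 integers in {1, 2, 3}; returns (total, level)."""
    if len(responses) != 15 or any(r not in (1, 2, 3) for r in responses):
        raise ValueError("expected 15 responses coded 1-3")
    # Reverse-score the negatively worded items (1 <-> 3), then sum.
    total = sum(4 - r if i + 1 in NEGATIVE_ITEMS else r
                for i, r in enumerate(responses))
    # Total ranges from 15 to 45; higher scores mean higher felt stigma.
    if total <= 25:
        level = "low"
    elif total <= 35:
        level = "medium"
    else:
        level = "high"
    return total, level

print(felt_stigma_score([3] * 15))  # -> (37, 'high') with the item positions above
```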
Insight
The system adopted by Kaplan and Sadock in their Comprehensive Textbook of Psychiatry (2000) was used to grade the patient’s insight level.
Procedure
Socio-demographic and clinical information was collected using the socio-demographic and clinical data sheet, with information gathered from reliable sources. Participants were selected from the inpatient and outpatient services of the Ranchi Institute of Neuro Psychiatry and Allied Sciences. Participants who fulfilled the inclusion/exclusion criteria were screened for severity using the Brief Psychiatric Rating Scale, and those in the range of mild to moderate severity participated in the study. Kaplan and Sadock’s (2000) system was used to grade each patient’s insight level: insight was considered absent when graded below Grade III, and patients graded at Grade III or above were considered as having insight. To assess perceived stigma, the Felt Stigma Scale (King et al., 2007) was administered.
Statistical analysis
The Statistical Package for Social Sciences (SPSS) Version 10.0 was used for statistical analysis. Data were analyzed using the t-test for continuous variables and the Chi-square test for categorical variables.
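The analyses named above can be reproduced in any standard statistics package; as a minimal sketch, equivalent tests in Python with SciPy are shown below on illustrative numbers, not the study data.

```python
# t-test for a continuous variable and chi-square for a categorical one,
# mirroring the analysis plan described above (the study itself used SPSS 10.0).
from scipy import stats

# Continuous variable (e.g., age in years) in the two insight groups;
# values are illustrative.
age_with_insight = [28, 31, 26, 34, 29, 33]
age_without_insight = [25, 27, 30, 26, 28, 27]
t, p = stats.ttest_ind(age_with_insight, age_without_insight)
print(f"t = {t:.2f}, p = {p:.3f}")

# Categorical variable: felt-stigma level (low/medium/high) by group;
# counts are illustrative.
contingency = [[4, 12, 14],   # with insight
               [10, 13, 7]]   # without insight
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```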
| RESULTS AND DISCUSSION
The study aimed to compare the felt stigma and its relationship with insight among patients attending inpatient and outpatient services of RINPAS. For this purpose we tried to match both the groups (i.e. with insight and without insight) in various socio-demographic and clinical variables. Both the groups were matched for sex, education, marital status, occupation, religion, domicile, and socioeconomic status but differed significantly for age. In the clinical variables, both the groups differed significantly for the diagnostic group. Our study revealed that both the groups (i.e. with insight and without insight) have significantly different levels of felt stigma [ Table 2 ].
Individuals diagnosed with mental illness can only occupy post-diagnosis identities that are known and available to them. Knowledge of a range of post-diagnosis identities depends on exposure to heterogeneity of experience, perhaps through dialogues with mental health professionals, contacts with other diagnosed individuals, and more diverse portrayals of mentally ill people in education and the media. Even with that knowledge, however, mobility across post-diagnosis identities, and other social identities, is not equally available. Some individuals may function in social environments that constrain choices for group identification, forcing them into situations of social isolation or binding them to groups that may or may not meet their needs. Therefore, awareness of identity options does not guarantee access to them. Furthermore, variations in individual characteristics like personality, creativity, self-confidence and life opportunities may alter the capacity to enact various post-diagnosis identities, or juxtapose them with other social identities. Similarly, periods of illness and recovery that may produce fluctuations in cognitive function, social skills, and expression of paranoia, depression, and other symptoms can alter capacity for group identifications and withstanding stigma. Access to specific post-diagnosis identities is likely determined by individual, social, and illness-related factors that can change, contributing to unfixed relationships between insight, treatment compliance, and psychosocial outcomes. This conceptualization encourages the replacement of dichotomies of good insight versus poor insight with consideration of internal and external resources that might promote the enactment of various post-diagnosis identities. If movement among post-diagnosis identities is expected and perhaps desirable, then it becomes important to ensure that individuals are equipped with the internal and external resources to shift identities as circumstances demand. | CONCLUSION
Findings indicate that though a certain amount of stigma is present in patients without insight, as is expected, the level of stigma increases as patients develop insight into their illness. Future empirical work may further clarify the connections between awareness of illness and both the individual and social processes that influence psychosocial outcomes for people diagnosed with mental illnesses. At the same time, the study reveals how much more is still unknown. Identifying cross-sectional connections between insight, self-stigma, hope, self-esteem, and social functioning cannot tell us all that we want to know about the longitudinal process of living with diagnosis. Our patients grapple with constructing a personal narrative that includes the experiences of mental illness, integrating a post-diagnosis identity with other social identities, and negotiating these identities in a social context that may not accommodate one or more of them.
The literature on insight has paid insufficient attention to the social experiences that are associated with receiving and endorsing a diagnosis of mental illness. The psychological and behavioral commitments associated with insight extend beyond agreeing with a diagnosis and accepting treatment to include taking on the identity of an individual diagnosed with mental illness. This study sought to examine the relationship between insight and stigma in psychiatric patients.
Materials and Methods:
Cross-sectional assessment of insight and stigma was done using the system adopted by Kaplan and Sadock in their Comprehensive Textbook of Psychiatry and the Felt Stigma Scale in 100 psychiatric patients (40 patients suffering from Bipolar affective disorder, 30 Schizophrenics, 20 Substance dependents and 10 with Obsessive Compulsive disorder).
Results:
It was found that the level of stigma felt by patients with insight was significantly higher than that felt by patients without insight.
Conclusion:
Though there is a certain extent of stigma present in patients without insight, as is expected, the level of stigma increases as the patients develop insight. | In psychiatry, the term insight is used to refer to the capacity to recognize that one has an illness that requires treatment (Ghaemi, 1997). Research suggests that individuals diagnosed with psychotic illnesses are more likely than any other patient group to be assessed as having poor insight (Amador et al., 1994; Lysaker et al., 2002). Mental health professionals view insight as integral to achieving treatment compliance and promoting positive social and health outcomes for diagnosed individuals (McEvoy, 1998; McGorry & McConville, 1999). Yet research shows that interventions to promote insight have not led to improved receptivity to treatment or adherence to treatment programs (Beck-Sander, 1998; O’Donnell et al., 2003). In fact, recent work suggests that insight may be a clinical phenomenon that is independent of beliefs about the usefulness of medical treatment (Linden & Godemann, 2007). In addition, the search for positive outcomes from insight has revealed negative outcomes, particularly in the areas of quality of life and self-esteem (Kravetz et al., 2000; O’Mahony, 1982; Schwartz, 1998; Warner et al., 1989). The concept of insight is problematic because it merges several aspects of the mental illness experience that may not belong together. An examination of the theoretical and empirical literatures in the area reveals a mélange of ideas about awareness of illness, acceptance of illness, willingness to take medication or other treatment, and endorsement of other expectations that are applied to people with mental disorders (Kravetz et al., 2000). There is one consistency: judgments of insight are always based on the extent to which patients endorse a biomedical explanation for illness. Although insight usually describes adherence to a particular belief system about mental illness, some assume that lack of insight reflects the absence of complex, reflective thought (White et al., 2000). Poor insight may be attributed to neuropsychological deficit, unremitting psychopathology, or unsophisticated ego defense mechanisms (Ghaemi, 1997). In contrast, good insight is presumed to be the outcome of an appropriate developmental or restorative process that transforms a previously unaware or highly defensive patient into one who is aware and compliant. The high insight/high functioning and low insight/dysfunctional distinctions seem clear in theory, but reality is not as easily categorized. The patients that we see with high insight are not always functioning well, and the patients that we see with low insight are not always functioning poorly. Both research literature and clinical experience suggest that a patient’s acceptance of the medical explanation for the experiences of mental illness does not tell us everything we need to know about how they are coping with diagnosis.
Stigmatization involves a separation of individuals labeled as different from “us” who are believed to possess negative traits, resulting in negative emotional reactions, discrimination and status loss for the stigmatized person (Link et al., 2004). Stigmatization of individuals diagnosed as having serious mental illness has been observed across the world, and the family members who help care for them also feel stigmatized as a result of their association with the loved one with mental illness (Phelan et al., 1998; Phillips et al., 2002; Struening et al., 2001; Thara and Srinivasan, 2000). Studies on psychiatric stigma have often focused on public attitudes. Because these collective attitudes vary in their impact on individuals, and stigma is ultimately an inner subjective experience, they provide at best an approximate guide to how stigma causes difficulties to individuals with mental illness. In contrast, understanding patients’ subjective experiences of stigma attunes us to what is at stake in their local lived world, i.e., the everyday non-trivial interpersonal transactions involving family members, partners, friends and colleagues (Kleinman and Kleinman, 1977).
The insight literature has paid insufficient attention to the social experiences that are associated with receiving and endorsing a diagnosis of mental illness. Insight involves taking on a new identity that changes the way individuals see themselves, and the ways that others see them. The insight concept is changed by recognition of its ties to identity processes that extend beyond the biomedical explanation for illness. Insight certainly reflects the extent to which individuals agree with their doctors, but it is also an indication of the extent to which individuals are willing to identify themselves as part of a group of people that are similarly affected. Consequently, the psychological and behavioral commitments associated with insight extend beyond agreeing with a diagnosis and accepting treatment to include taking on the identity of an individual diagnosed with mental illness. The expectations that individuals have for post-diagnosis identity may be extremely constricted or highly elaborated, based on the spectrum of patient identity representations that are known to them. The expectations they have for group identification with the community of mentally ill people are likely to be influenced by previous knowledge of mental illness and interactions they have had with family and friends, healthcare professionals, other patients, and society as a whole. Therefore, the identity shifts precipitated by diagnosis are affected by information and experiences embedded in the social context.
This conceptualization suggests that an intersection of individual and social processes encourages or discourages expressing beliefs that correspond to good insight. These ideas clearly require empirical validation, and recent work by Lysaker et al . (2006) is intriguing in this regard. They interviewed 75 patients with schizophrenia spectrum disorders to explore how self-stigma might explain the paradoxical links between greater insight, better functional outcomes, and poorer subjective wellbeing. Their cluster analysis of data from measures assessing insight and internalized stigma identified three groups: low insight/mild stigma, high insight/minimal stigma, and high insight/moderate stigma. Their attempt to compare the groups on measures of quality of life, self-esteem, and hope revealed that the high insight/low stigma group had significantly better interpersonal functioning. In contrast, increases in vulnerability to self-stigma demonstrated in the high insight/moderate stigma group corresponded to reports of less self-esteem and less hope. The analyses in the study did not reveal anything further about individuals demonstrating other configurations of insight and self-stigma. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):39-42 | oa_package/47/c3/PMC3016698.tar.gz |
||
PMC3016699 | 21234162 | MATERIALS AND METHODS
Study design
It is a cross-sectional one-time observational study using simple screening instruments for detecting early symptoms of depression in adolescents.
Adolescents studying in a public school constituted the study sample. All 125 students studying in the 9th standard of the school were evaluated, so as to eliminate any selection bias. Questionnaires were given in the class, and students were instructed how to complete them in English or Hindi.
Students were instructed not to write their names, to maintain confidentiality. Written consent was taken from everyone, and the study project was explained to them.
Inclusion criteria
Adolescents studying in the 9th standard of the school; all were overtly healthy.
Exclusion criteria
All students suffering from any kind of chronic disease requiring prescribed medication; all students who had taken any such screening test before; and any student with a past history of diagnosed mental illness.
The following two instruments were administered:
(1) GHQ-12 (General Health Questionnaire-12) and (2) BDI (Beck Depression Inventory).
The General Health Questionnaire (GHQ) is a subjective measure of psychological wellbeing and stress. It has been used in a variety of studies to represent the stress response.
We have used Likert method of scoring in our study. Score of 14 and above is taken as evidence of distress (Goldberg and Williams 1991).
The Beck Depression Inventory (BDI) is a series of questions developed to measure the intensity, severity, and depth of depression in patients with psychiatric diagnoses. The sum of all BDI item scores indicates the severity of depression. A score of 12 and above is taken as depression. The selected cut-off point has 100% sensitivity, 99% specificity, a positive predictive value of 0.72, a negative predictive value of 1 and an overall diagnostic value of 98% (Laasa et al., 2000).
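Taken together, the two screening rules reduce to simple cut-offs on the summed item scores. The following is a minimal sketch in Python, assuming the usual 0-3 coding per item for both instruments; the response vectors shown are illustrative only.

```python
# Screening rules used above: GHQ-12 with Likert scoring (distress if
# total >= 14) and BDI (depression if total >= 12). A 0-3 coding per
# item is assumed here.

GHQ_CUTOFF = 14  # Goldberg and Williams (1991)
BDI_CUTOFF = 12  # cut-off adopted in this study

def screen(ghq_items, bdi_items):
    """ghq_items: 12 Likert scores (0-3); bdi_items: 21 item scores (0-3)."""
    return {
        "distress": sum(ghq_items) >= GHQ_CUTOFF,
        "depressed": sum(bdi_items) >= BDI_CUTOFF,
    }

# GHQ total 24 (distress present); BDI total 11 (below the depression cut-off).
print(screen([2] * 12, [0, 1] * 10 + [1]))
```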
Socio-demographic data (e.g., academic performance, marital harmony of parents, bullying in school, etc.) were also collected in a separate semi-structured proforma.
Statistical analysis was done with Fisher’s Exact Test using SPSS 17.
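As a sketch of this analysis, the association of one dichotomous factor with caseness can be tested on a 2x2 table with SciPy; the counts below are made up (though they sum to the 125 students screened) and are not the study data.

```python
# Fisher's Exact Test on a 2x2 table, equivalent to the SPSS analysis
# described above; counts are illustrative only.
from scipy.stats import fisher_exact

#            depressed  not depressed
table = [[12, 18],   # parental fights: yes
         [11, 84]]   # parental fights: no
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")  # compare p against 0.05
```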
| RESULTS
On the GHQ-12, out of 125 adolescents, 106 did not have any evidence of distress (score < 14) and 19 (15.2%) were found to have evidence of distress (score ≥ 14). On the BDI, out of 125, 102 were not depressed (score < 12) and 23 (18.4%) were depressed (score ≥ 12).
In all, 35 students were detected to have positive scores on either the GHQ-12 or the BDI; seven students had positive scores on both the GHQ-12 and the BDI [Tables 1 and 2 ].
The present study found that 15.2% of the adolescents had evidence of distress and 18.4% were found to be depressed. We tried to identify the factors responsible and their association with the prevalent stress. Certain factors, like parental fights, beating at home and inability to cope with studies, were significantly ( P <0.05) associated with higher GHQ-12 scores, indicating evidence of distress.
Economic difficulty, physical punishment at school, teasing at school and parental fights were significantly ( P <0.05) associated with higher BDI scores, indicating depression.
Factors like bullying in school and parental expectations may also add to the stress of an adolescent, though they did not reach a statistically significant level in the present study.
The generalizability of the current results is limited, since the study was timed when the students had just entered the 9th standard and were in a jovial mood with a comparatively lighter study load; this problem is unavoidable unless multiple studies are done at different times of the year and averaged out.
The cut-off score for the BDI ranges from 10 to 12 across different studies; we took the cut-off score as 12, thereby increasing the specificity to 99%.
Students of other classes and other schools should also be included for greater generalizability. More extensive studies are required, with a greater diversity of students and schools, conducted at different times of the year.
In spite of the limitations, this study points towards the issue of prevalence of depression in adolescence and the purpose of the study is well served to highlight the common but ignored problem.
We recommend that teachers and parents be made aware of this problem with the help of school counselors so that the depressed adolescent can be identified and helped rather than suffer silently. | Background:
Three to nine per cent of teenagers meet the criteria for depression at any one time, and at the end of adolescence, as many as 20% of teenagers report a lifetime prevalence of depression. Usual care by primary care physicians fails to recognize 30-50% of depressed patients.
Materials and Methods:
Cross-sectional one-time observational study using simple screening instruments for detecting early symptoms of depression in adolescents. Two psychological instruments were used: the GHQ-12 and the BDI. Socio-demographic data (e.g., academic performance, marital harmony of parents, bullying in school, etc.) were also collected in a separate semi-structured proforma. Statistical analysis was done with Fisher’s Exact Test using SPSS 17.
Results:
15.2% of school-going adolescents were found to have evidence of distress (GHQ-12 score ≥ 14); 18.4% were depressed (BDI score ≥ 12); 5.6% of students had positive scores on both instruments. Certain factors like parental fights, beating at home and inability to cope with studies were found to be significantly ( P < 0.05) associated with higher GHQ-12 scores, indicating evidence of distress. Economic difficulty, physical punishment at school, teasing at school and parental fights were significantly ( P < 0.05) associated with higher BDI scores, indicating depression.
Conclusion:
The study highlights the common but ignored problem of depression in adolescence. We recommend that teachers and parents be made aware of this problem with the help of school counselors so that the depressed adolescent can be identified and helped rather than suffer silently. | Just 40 years ago, many physicians doubted the existence of significant depressive disorders in children. However, a growing body of evidence has confirmed that children and adolescents not only experience the whole spectrum of mood disorders but also suffer from the significant morbidity and mortality associated with them.
Despite the high prevalence and substantial impact of depression, detection and treatment in the primary care setting have been suboptimal. Studies have shown that usual care by primary care physicians fails to recognize 30-50% of depressed patients (Simon and Vonkorff, 1995). Because patients in whom depression goes unrecognized cannot be appropriately treated, systematic screening has been advocated as a means of improving detection, treatment, and outcomes of depression.
While improved pediatric diagnosis alone is unlikely to significantly change patient outcomes, recognizing teenagers with depression is the first step to improved depression management. Depression affects 2% of pre-pubertal children and 5-8% of adolescents. The clinical spectrum of the disease can range from simple sadness to a major depressive or bipolar disorder (Son and Kirchner, 2000). Studies have found that 3-9% of teenagers meet criteria for depression at any one time, and at the end of adolescence, as many as 20% of teenagers report a lifetime prevalence of depression (Zuckerbrot and Jensen, 2006).
Childhood depression, like the depression of adults, can encompass a spectrum of symptoms ranging from normal responses of sadness and disappointment in stressful life events to severe impairment caused by clinical depression that may or may not include evidence of mania (Wolraich et al . 1996, Kovacs et al . 1994, Weller et al . 1996).
Adolescent depression may affect the teen’s socialization, family relations, and performance at school, often with potentially serious long-term consequences. Adolescents with depression are at risk for increased hospitalizations, recurrent depressions, psychosocial impairment, alcohol abuse, and antisocial behaviors as they grow up. The most devastating outcome of adolescent depression is, of course, suicide, the third leading cause of death among older adolescents (Centers for Disease Control and Prevention, WISQARS).
Correlational and longitudinal studies have shown that depression is associated with higher rates of smoking, alcohol abuse, unhealthy eating, and infrequent exercise (Haarasilta et al., 2004; Franko et al., 2005).
No perfect depression screening/assessment tool exists, but a number of adolescent depression assessment instruments do possess adequate psychometric properties to commend their use in depression detection and assessment. Optimal diagnostic procedures should combine the use of depression-specific screening tools as diagnostic aids buttressed by follow-up clinical interviews in which one obtains information from other informants (e.g., parents) and reconciles discrepant information to arrive at an accurate diagnosis and impairment assessment before treatment (Laasa et al . 2000). | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):43-46 | oa_package/65/22/PMC3016699.tar.gz |
|||
PMC3016700 | 21234163 | MATERIALS AND METHODS
This was a community-based cross-sectional study carried out from January 2005 to December 2005. The study was conducted in the rural field area of the nongovernmental organization Nav Bharat Jagriti Kendra, which covers a population of 1,332,739 spread over 10 blocks in the state of Jharkhand, India. A favorable sex ratio (male:female, 1000:941) and literacy rates of 54.1% in males and 39.45% in females were the striking features of this area, where mostly tribal people live. Two districts, Ranchi and Hazaribagh, and 5 blocks from each of these districts were chosen for the study. The sample size was estimated to be 433,657 persons. The study was conducted by making house-to-house visits and interviewing all the individuals in the selected families using a pre-tested questionnaire. Mental disability was assessed with the Indian Disability Evaluation and Assessment Scale (IDEAS), a scale for measuring and quantifying disability in mental disorders, developed by the Rehabilitation Specialty Section of the Indian Psychiatric Society (Ministry of Social Justice and Empowerment, Govt. of India, 2002). Disability in children below the age of 5 years was assessed with a questionnaire designed on the lines of the Action Aid India instrument for the assessment of mental disability in a child: children were examined, and developmental delays in responding to the name or voice, smiling and communication, as well as learning difficulties, were noted down (Thomas et al., 2005). The data collected were tabulated using proportions, and findings were described in terms of percentages. After assessing the status of mental illness, the community-based rehabilitation program described below was planned and implemented.
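Since the findings were tabulated as simple proportions, the computation is straightforward; the sketch below uses the diagnostic counts reported later in the Results, which sum to the 3026 disabled individuals identified.

```python
# Tabulation of findings as proportions, as described above.
counts = {
    "psychosis": 1432,
    "depression and anxiety": 269,
    "mental retardation": 503,
    "epilepsy": 358,
    "substance abuse or dependence": 464,
}

total = sum(counts.values())  # 3026 disabled individuals
for diagnosis, n in counts.items():
    print(f"{diagnosis}: {n} ({100 * n / total:.1f}%)")
print(f"total disabled individuals identified: {total}")
```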
The mental health program
The primary objective was to operate a community-based mental health program in the defined catchment areas. The other program components included a capacity-building program to detect and refer mental disorders, linkages with government mental hospitals, awareness programs, service delivery, and rehabilitation programs.
Capacity-building program
The community-based rehabilitation workers (CBRWs) were lay volunteers identified from the community with the help of Gram Sabha . The training consisted of 10 sessions each for the 10 groups of CBRWs. Other groups like Person with disability (PWD) self-help group (18), panchayat -level PWD communities (20), block-level PWD communities (2) and district-level PWD community (1) were formed and given training regarding mental illness and mental disabilities. They helped in identifying the case, implementation of simple intervention strategies, working closely with families of the mentally ill and making appropriate referrals. Manual and audiovisual training materials were used for the sessions.
Awareness building
The awareness-building program included community sensitization camps, street plays, wall writing, and the distribution of pamphlets, posters, letters and booklets containing information related to, and meeting the needs of, disabled people in the catchment area of the study.
Sensitization and workshops
Several sensitization programs and workshops were conducted during the Mental Health Program for key players, i.e., ANMs (Auxiliary Nurse Midwives), the media, the block administration including BDOs (Block Development Officers), MOs (Medical Officers), CDPOs (Child Development Project Officers), other NGOs, school teachers, physicians and other medical officers, parents, and village leaders.
Mental health services
The identified patients and disabled persons were linked to the Ranchi Institute of Neuropsychiatry and Allied Sciences (RINPAS), Ranchi. The clinic staff visited the patients with the help of CBRWs, further checkups were done by psychiatrists and psychiatric social workers, and the progress was reviewed monthly. A similar procedure was followed in camps held in villages that were not accessible to the clinic staff. Psycho-social interventions included support to the patient, educating families on mental illness, management of behavioral problems, ensuring drug compliance, training patients in self-care and daily living, job placements and helping them initiate small businesses as a measure of rehabilitation. The emphasis was on utilizing local resources and mobilizing local support. Over a period of 2 years (2003-2005), 3026 patients suffering from mental illness were registered and offered treatment; by the end of the program, 2112 patients were on medication, 585 were stable and 329 were self-employed (rehabilitated). When we made our exit from the project area in 2005, we were quite satisfied with the pace of the activities we had initiated. Our major successes were as follows: owing to the mental health facilities in the region, there are now 3 mental hospitals run by the government and a private body, and RINPAS has also held camps in the remote areas, giving free treatment and medication, which will help the patients to continue medication and come for follow-up. | RESULTS
The prevalence of disability was the highest in the age groups 30-34 years and 20-24 years, significantly higher than in the other age groups. Of the disabled individuals identified, 1432 had psychosis, 269 had depression and anxiety, 503 had mental retardation, 358 had epilepsy and 464 had substance abuse or dependence. The total number of disabled individuals was 3026, among whom 1855 were males and 823 were females. The difference in the prevalence of disability between males (67.9%) and females (32.1%) was not statistically significant. The overall prevalence of mental disability was 1% to 2% [ Table 1 ].
The prevalence of disability was the lowest in the high socioeconomic group, except in the case of mental retardation, which was higher in this group. More males than females had psychosis and substance-dependence-related problems, whereas more females than males had problems of depression and anxiety [Tables 2 and 3 ]. | DISCUSSION
Well-documented studies to determine the prevalence and pattern of mental disorders are few. There are no community-based studies using IDEAS for the assessment of disability, but there are some hospital-based studies among patients with mental illness and mental disability using IDEAS. This instrument has been used for mental illnesses that include schizophrenia, bipolar affective disorders, anxiety disorders, depression, obsessive-compulsive disorders, dementia and behavioral disorders due to the intake of alcohol (Chaudhury et al., 2006; Mohan et al., 2005). The field workers involved in data collection could not detect mild degrees of disability because of their limited knowledge and lack of training. The present study showed a higher prevalence of mental disability in general (Chaudhury et al., 2006). The prevalence was higher among the productive age group, which has been attributed to socioeconomic conditions, marital and familial problems and substance abuse. The higher prevalence of mental disabilities in males is probably due to nonrespondent females; moreover, family members often hide female patients with mental illness due to stigma. The prevalence of mental disability was the lowest among persons with high socioeconomic status. In this area, the disabled were educated up to the primary level. About a third of the 7 to 8 million Indians who suffer from psychosis will be severely disabled and will require intense rehabilitation inputs (Census of India, 2001). The moderately disabled also require intervention, largely in relation to work and employment. After assessing the status of mental illness, the community-based rehabilitation program was planned and implemented.
Jharkhand is rich in local resources like agriculture, forest and minerals, and so patients who improved with treatment and engaged in fieldwork or local business tended to drop out of follow-up. But dropping out of psychiatric care is not an uncommon phenomenon, especially in the case of chronic mental illness. Several studies, including our own, have addressed this issue; some have reported more dropouts among patients whose condition improved and who were satisfied with the level of care (Rossi et al., 2002), while the contradictory finding of persons dropping out because of lack of improvement was seen in other studies (Rossi et al., 2002). Our own follow-up study revealed that those patients who had remitted had dropped out (Thara et al., 1990). This program has succeeded to an extent in attaining the objectives of the National Mental Health Program: minimum mental health care has been made accessible, available and affordable to the underprivileged sections of the population. Community health workers can play an important role in disseminating correct information regarding these disorders to the community and in reducing stigma. There are usually many myths and misconceptions associated with severe mental disorders in rural areas, and these are very resistant to change. However, the cure of one such patient in a village is enough to change people’s attitudes.
Limitations
A limitation of our study was that we could not interview nonrespondents because of their noncooperation or non-availability during our field visits; hence the entire population could not be covered. | CONCLUSION
Common mental disorders form a large proportion of the total burden of mental illnesses and must be addressed in all mental health programs. Collaboration between the Departments of Psychiatry and Psychiatric Social Work and NGOs is useful in developing such programs at the primary care level. In our initiative, postgraduate students of both departments got an opportunity to view the entire spectrum of mental illnesses and study the social, economic and cultural factors that were intertwined with them. | Background:
In the present era, mental disability is a major public health problem in the society. Many of the mental disabilities are correctable if detected early.
Objectives:
To assess the prevalence and pattern of mental disability.
Materials and Methods:
Community-based cross-sectional study. Individuals in the age range of 0-60 years were randomly selected from 10 blocks of 2 districts, viz., Ranchi and Hazaribagh; thirty villages from each block were taken for the study. The study was conducted by making house-to-house visits, and interviewing and examining all the individuals in the selected families using a pre-tested questionnaire. Statistical analysis: the data were analyzed using proportions.
Results and Conclusion:
The prevalence of mental disability was found higher among males (67.9%) than among females (32.1%). The prevalence rate was higher among the productive groups and among individuals with low socioeconomic status. There is scope of community-based rehabilitation of the mentally disabled. | Mental disorders are prevalent in people of all regions, countries and societies. They affect men and women at all stages of life. Contrary to popular belief, the poor are more likely to suffer from mental and behavioral disorders (The World Health Report, 2001) and are more likely to suffer tragic outcomes as a result of their illness. The National Mental Health Program was developed in India to address the problem of mental illnesses, especially in rural areas. However, it has come under some criticism as it has laid emphasis on identifying and treating severe mental disorders such as psychosis, while not addressing common mental disorders [CMDs], which are equally disabling. CMDs, which are neurotic disorders presenting with anxiety and depressive symptoms, are widespread and are known to cause significant disability worldwide. In India, prevalence rates of CMD range from 2% to 57% (Patel, 1999). Majority of patients with CMDs present at primary care centers but end up receiving symptomatic treatments like painkillers and vitamins because their disorders are not recognized by primary care physicians as being mental illnesses. CMDs in such patients lead to chronic disability and progress in severity, making ultimate treatment more difficult (Issac et al ., 2005). World Health Organization estimates that 10% of the world’s population has mental disabilities, and 1% suffers from severe incapacitating mental disorders. The disability-adjusted life year (DALY) loss due to neuropsychiatric disorders is much higher than that due to diarrhea, malaria, worm infestations and tuberculosis if taken individually. According to estimates, DALYs lost due to mental disorders are expected to represent 15% of the global burden of diseases by 2020.
During the last two decades, many epidemiological studies conducted in India show that the prevalence of major psychiatric disorders is about the same all over the world. The prevalences reported from these studies range from 18 to 207 per 1000 of the population, with a median of 65.4 per 1000; and at any given time, about 2% to 3% of the population suffer from seriously incapacitating mental disorders or epilepsy. A meta-analysis revealed that the prevalence of psychiatric disorders was around 5.8% (Reddy et al ., 1998). Most of these patients live in rural areas, remote from any modern mental health facilities. A large number of adult patients (10.4%-53%) coming to the general OPD are diagnosed as being mentally ill. However, these patients are usually missed because either the medical officer or the general practitioner at the primary health care unit does not ask detailed mental health history. Due to the under-diagnosis of these patients, unnecessary investigations and treatments are offered, which put a heavy financial burden on the patients.
The Mental Health Act 1987 provides safeguards against stigmatization of patients suffering from mental illness. Community care of the chronic mentally ill has always been prevalent in India, largely due to family involvement and the unavailability of institutions. In the 80s, a few mental health clinics became operational in some parts of the country. The Schizophrenia Research Foundation (SCARF), an NGO in Chennai, had established a community clinic in 1989 in Thiruporur, which was functional till 1999. Community mental health rehabilitation programs are carried out in a rural area of Jharkhand by the Nav Bharat Jagriti Kendra (NBJK), a nongovernmental, nonprofit organization working for people with physical and mental disabilities. The community mental health project is funded by Action Aid, India, and is carried out in the Ranchi and Hazaribagh districts of Jharkhand. Covering 30 villages from 10 blocks, this area has a total population of 433,657 persons, most of them below the poverty line. One primary health center (PHC) and a few sub-centers cater to the health needs of the population. In these rural communities, faith healing and traditional medicines for mental illnesses are quite popular, and these traditional healers are often the first point of contact. The present study was conducted to determine the prevalence rate of mental disability and to develop community-based rehabilitation programs for the mentally ill. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):47-50 | oa_package/2a/8c/PMC3016700.tar.gz
||
PMC3016701 | 21234164 | MATERIALS AND METHODS
Sample
The sample comprised 55 elderly persons (35 men and 20 women) in the age group of 60-80 years, with a mean age of 67 years. The subjects were selected from older adults residing in housing societies in a Delhi-based region. These elderly persons were contacted personally, and the questionnaires were administered to them.
Measures
The revised UCLA (University of California, Los Angeles) loneliness scale (Russell et al ., 1980)
The UCLA Loneliness Scale includes 10 negatively worded and 10 positively worded items that have the highest correlations with a set of questions explicitly related to loneliness. The revised version of the scale has high discriminative validity. The revised loneliness scale also has a high internal consistency, with a coefficient alpha of 0.94.
Beck depression inventory (Beck et al ., 1961)
The Beck Depression Inventory (BDI) is a 21-item self-report scale measuring supposed manifestations of depression. The internal consistency for the BDI ranges from 0.73 to 0.92, with a mean of 0.86. The BDI demonstrates high internal consistency, with alpha coefficients of 0.86 and 0.81 for psychiatric and nonpsychiatric populations, respectively. The scale has a split-half reliability coefficient of 0.93.
Sociability subscale of Eysenck personality profiler (Eysenck & Eysenck, 1975)
Eysenck Personality Profiler (EPP V6) is a multidimensional modular personality inventory for 3 dimensions: extroversion, emotionality (neuroticism) and adventurousness (psychoticism). Each dimension has 7 subscales.
The sociability subscale of extroversion used in this study consists of 20 questions. The response category is either ‘yes’ or ‘no.’ There are 10 positive items and 10 negative items. The factorial validity of the EPP V6 holds across different cultures and age groups, with a high equivalent factor structure among these different samples.
Procedure
Initially the participants were personally contacted and rapport was established with them. The participants completed the questionnaires given to them. Standard instructions were written on top of each questionnaire, and the participants were asked to rate themselves on the options they felt were relevant to them. It was made clear to the participants that there were no right or wrong answers. If they had any difficulty, they were encouraged to ask questions. After finishing the entire set of questions, they were asked to return the questionnaires. The test administration took about 45 minutes.
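The analyses reported below reduce to independent-samples t-tests for the gender comparisons and Pearson correlations among the three measures. A minimal sketch in Python follows; the scores are illustrative, not the study data.

```python
# Gender comparison (t-test) and Pearson correlations among the three
# measures, mirroring the analyses reported in the Results.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "sex":         ["M", "M", "F", "F", "M", "F"],
    "loneliness":  [42, 35, 51, 47, 38, 44],  # UCLA totals (illustrative)
    "depression":  [11, 8, 19, 15, 9, 13],    # BDI totals (illustrative)
    "sociability": [14, 16, 9, 11, 15, 10],   # EPP subscale totals (illustrative)
})

# Gender difference on sociability.
men, women = df[df.sex == "M"], df[df.sex == "F"]
t, p = stats.ttest_ind(men.sociability, women.sociability)
print(f"sociability (men vs women): t = {t:.2f}, p = {p:.3f}")

# Loneliness-depression association.
r, p = stats.pearsonr(df.loneliness, df.depression)
print(f"loneliness vs depression: r = {r:.2f}, p = {p:.3f}")
```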
| RESULTS
Table 1 reveals that there are no significant gender differences between elderly men and women with respect to loneliness and depression. Elderly men, however, were found to be more sociable than elderly women.
Table 2 shows a significant positive correlation between depression and loneliness (significant at the 0.01 level), i.e., the level of depression increases with an increase in loneliness among elderly men and women. A negative, though nonsignificant, relationship was found between sociability and loneliness. No significant relationship was found between sociability and depression.
Table 3 reveals that in the male elderly persons, a significant positive correlation was found between depression and loneliness. Sociability and loneliness were negatively correlated, though not significantly.
Female elderly persons manifested a significant positive correlation between depression and loneliness, as can be seen in Table 4 . | DISCUSSION
The health and well-being of older adults are affected by the level of social activity and by mood states. Researchers have reported the negative effects of loneliness on health in old age (Heikkinen et al., 1995). Loneliness, coupled with other physical and mental problems, gives rise to feelings of depression in elderly persons. Gender differences have been reported in the prevalence of health problems in elderly persons (Arber & Ginn, 1991). Results in Table 1 reveal that there are no significant gender differences in elderly persons with respect to loneliness and depression, i.e., both male and female elderly persons experience feelings of loneliness and depression equally. On the dimension of sociability, men were found to be more sociable than their female counterparts. This may be because all the elderly men belonged to the working group, i.e., they were employed in government jobs before retirement, and were less hesitant in socializing than their female counterparts, who were housewives spending their lives at home and finding pleasure in daily chores. Having both intellectual and social resources allows elderly men to continue to seek out new relationships. The lack of significant gender differences on loneliness reflects the fact that both groups contained elderly married couples with both partners alive, so the chances of feeling lonely were low. Moreover, most of the couples were staying with their children and grandchildren, which did not allow them to stay lonely for long. The lack of significant gender differences on depression is contrary to the often-held belief and research reports that elderly women are more prone to depression than elderly men (Kessler et al., 1993). This finding may be attributed to the fact that all the women were nonworking before they attained 60 years of age; hence for them, the transition into old age was less associated with a change in life style, a break in ties with others or a sudden loss of power and status. The transition was very gradual, which prevented any abrupt change in mood states.
A positive correlation between loneliness and depression [Tables 2 – 4 ] is in accordance with the results reported in the literature for both male and female elderly persons (Green et al., 1992). The lack of a significant relationship between loneliness and sociability [ Table 2 ] reveals that despite being sociable, the participants experienced increased feelings of loneliness. A possible explanation is that feeling lonely depends not only on the number of connections one has with others but also on whether or not one is satisfied with one’s life style. An expressed dissatisfaction with available relationships is a more powerful indicator of loneliness (Revenson, 1982).
The lack of a significant relationship between depression and sociability [ Table 2 ] confirms the fact that depression is multicausal, i.e., it arises due to a host of factors like declining health, significant loss due to the death of a spouse, and lack of social support. Also, most of the elderly persons had moderate connections with their friends and family members, and they participated in daily activities.
On the basis of the obtained findings, the following conclusions can be made: (1) a significant positive correlation exists between loneliness and depression; (2) no significant relationship was found between loneliness and sociability, or between depression and sociability; (3) men were found to be more sociable than women; and (4) a significant correlation was found between loneliness and depression in both men and women.
There were certain limitations in the study. The sample size was restricted to a few elderly persons; hence, in future, a similar study needs to be conducted on a larger section of the elderly population. For determining gender differences, the male and female constituents of the sample should be equivalent in all respects. Moreover, no formal diagnosis of depression was made in the sample used in the study; a self-report inventory was used for determining the level of depressive symptoms in the elderly persons. Keeping in view the above limitations, longitudinal studies on a larger group of elderly men and women are needed in future. | Background:
The elderly population is large and growing, owing to advancement of health care and education. These people face numerous physical, psychological and social role changes that challenge their sense of self and capacity to live happily. Many people experience loneliness and depression in old age, either as a result of living alone or due to a lack of close family ties and reduced connections with their culture of origin, which results in an inability to participate actively in community activities. With advancing age, it is inevitable that people lose connection with their friendship networks and find it more difficult to initiate new friendships and to belong to new networks. The present study was conducted to investigate the relationships among depression, loneliness and sociability in elderly people.
Materials and Methods:
This study was carried out on 55 elderly people (both men and women). The tools used were Beck Depression Inventory, UCLA Loneliness Scale and Sociability Scale by Eysenck.
Results:
Results revealed a significant relationship between depression and loneliness.
Conclusion:
Most of the elderly people were found to be average in the dimension of sociability and preferred remaining engaged in social interactions. The implications of the study are discussed in the article. | Aging is a series of processes that begin with life and continue throughout the life cycle. It represents the closing period in the lifespan, a time when the individual looks back on life, lives on past accomplishments and begins to finish off his life course. Adjusting to the changes that accompany old age requires that an individual is flexible and develops new coping skills to adapt to the changes that are common to this time in their lives (Warnick, 1995).
The definition of ‘health’ with regard to old age is a subject of debate. There is consensus that health in old age cannot meaningfully be defined as the absence of disease, because the prevalence of diagnosable disorders in elderly populations is high. Instead, health is considered to be multifaceted: the diagnosis of disease should be complemented by assessment of discomfort associated with symptoms (e.g., pain), life threat, treatment consequences (e.g., side effects of medication), functional capacity and subjective health evaluations (Borchelt et al., 1999). Furthermore, Rowe & Kahn (1987) suggested that the health of subgroups of older adults be defined in terms of their status relative to age and cohort norms.
There is a growing body of evidence suggesting that psychological and sociological factors have a significant influence on how well individuals age. Aging research has demonstrated positive correlations between the ability to age successfully and factors such as religious beliefs, social relationships, perceived health, self-efficacy, socioeconomic status and coping skills.
Depression or the occurrence of depressive symptomatology is a prominent condition amongst older people, with a significant impact on the well-being and quality of life. Many studies have demonstrated that the prevalence of depressive symptoms increases with age (Kennedy, 1996). Depressive symptoms not only have an important place as indicators of psychological well-being but are also recognized as significant predictors of functional health and longevity. Longitudinal studies demonstrate that increased depressive symptoms are significantly associated with increased difficulties with activities of daily living (Penninx et al ., 1998). Community-based data indicate that older persons with major depressive disorders are at increased risk of mortality (Bruce, 1994). There are also studies that suggest that depressive disorders may be associated with a reduction in cognitive functions (Speck et al ., 1995).
Though the belief persists that depression is synonymous with aging and is in fact inevitable, recent research dispels this faulty notion. Depression has a causal link to numerous social, physical and psychological problems. These difficulties often emerge in older adulthood, increasing the likelihood of depression; yet depression is not a normal consequence of these problems. Studies have found that age is not always significantly related to the level of depression, and that the oldest old may even have better coping skills to deal with depression, making depressive symptoms more common but not as severe as in younger populations.
When the onset of depression first occurs in earlier life, it is more likely that there are genetic, personality and life experience factors that have contributed to the depression. Depression that first develops in later life is more likely to bear some relationship to physical health problems. An older person in good physical health has a relatively low risk of depression; poor physical health is indeed the major cause of depression in late life. There are many reasons for this, which include the psychological effects of living with illness and disability; the effects of chronic pain; the biological effects of some conditions and medications that can cause depression through direct effects on the brain; and the social restrictions that some illnesses place upon older people's lifestyle, resulting in isolation and loneliness.
There are strong indications that depression substantially increases the risk of death in adults, mostly by unnatural causes and cardiovascular disease (Wulsin et al ., 1999). Some population-based studies did find that this independent relationship does exist in later life, while others did not.
Loneliness is a subjective, negative feeling related to the person's own experience of deficient social relations. The determinants of loneliness are most often defined on the basis of 2 causal models. The first model examines external factors, i.e., deficiencies in the social network, as the root of loneliness; the second explanatory model refers to internal factors, such as personality and psychological factors.
Loneliness may lead to serious health-related consequences. It is one of the 3 main factors leading to depression (Green et al ., 1992), and an important cause of suicide and suicide attempts. A study carried out by Hansson et al . (1987) revealed that loneliness was related to poor psychological adjustment, dissatisfaction with family and social relationships.
As people grow old, the likelihood of experiencing age-related losses increases. Such losses may impede the maintenance or acquisition of desired relationships, resulting in a higher incidence of loneliness. Many people experience loneliness either as a result of living alone, a lack of close family ties, reduced connections with their culture of origin or an inability to actively participate in the local community activities. When this occurs in combination with physical disablement, demoralization and depression are common accompaniments. The negative effect of loneliness on health in old age has been reported by researchers (Heikkinen et al ., 1995). The death of spouse and friends and social disengagement after leaving work or a familiar neighborhood are some of the ubiquitous life-changing events contributing to loneliness in older people. Those in the oldest age cohort are most likely to report the highest rates of loneliness, reflecting their increased probability of such losses.
A study by Max et al . (2005) revealed that the presence of perceived loneliness contributed strongly to the effect of depression on mortality. Thus, in the oldest old, depression is associated with mortality only when feelings of loneliness are present. Depression is a problem that often accompanies loneliness. In many cases, depressive symptoms such as withdrawal, anxiety, lack of motivation and sadness mimic and mask the symptoms of loneliness.
Sociability and old age
Sociability plays an important role in protecting people from the experience of psychological distress and in enhancing well-being. George (1996) summarized some of the empirically well-supported effects of social factors on depressive symptoms in later life, and reported that increasing age, minority racial or ethnic status, lower socioeconomic status and reduced quantity or quality of social relations are all associated with increased depressive symptom levels. Social isolation is a major risk factor for functional difficulties in older persons. Loss of important relationships can lead to feelings of emptiness and depression. “Persons involved with a positive relationship tend to be less affected by everyday problems and to have a greater sense of control and independence. Those without relationships often become isolated, ignored, and depressed. Those caught in poor relationships tend to develop and maintain negative perceptions of self, find life less satisfying and often lack the motivation to change” (Hanson & Carpenter, 1994).
Having few social contacts or living alone does not necessarily result in a state of loneliness (Mullins, Johnson, & Anderson, 1987). In fact, for elderly people the time spent with family may be less enjoyable than a visit to a neighbor or someone of their own age group. This can be attributed to the fact that relationships with family tend to be obligatory, whereas those with friends are a matter of choice. This further emphasizes the need for a perceived internal locus of control over social interaction as a means of alleviating loneliness.
Posner (1995) points out that older people tend to make friendships predominantly with those within the same age cohort. Thus with advancing age, it is inevitable that people lose their friendship networks and that they find it more difficult to initiate new friendships and to belong to new networks. However, those with more physical, material and intellectual resources also have more social “capital,” which allows them to continue to seek out new relationships and forms of social involvement.
The number of older people is increasing throughout the world. As individuals grow older, they are faced with numerous physical, psychological and social role changes that challenge their sense of self and capacity to live happily. Depression and loneliness are considered to be the major problems leading to impaired quality of life among elderly persons. At the same time, old age can also be an opportunity for making new friends, developing new interests, discovering fresh ways of service, spending more time in fellowship with God. It can be happy and winsome or empty and sad — depending largely on the faith and grace of the person involved. Therefore, the present study was undertaken with the main purpose of studying the relationships among depression, loneliness and sociability in a group of elderly people and also to determine gender differences with respect to the above relationships of variables.
Objectives of the study
To examine the relationships among loneliness, depression and sociability in elderly persons
To study gender differences with respect to sociability, loneliness and depression among elderly persons
Hypotheses
There will be a positive relationship between loneliness and depression in old age. There will be a negative relationship between sociability and loneliness in old age. There will be a negative relationship between sociability and depression in elderly persons. There will be gender differences with respect to the variables sociability, loneliness and depression in elderly persons. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):51-55 | oa_package/42/91/PMC3016701.tar.gz |
|||
PMC3016702 | 21234165 | MATERIALS AND METHODS
Sample
The present study was carried out on a sample of 35 mentally retarded children (mean age, 14.17 years; SD, 5.5) chosen at random from the Central Institute of Psychiatry, Kanke, Ranchi (Jharkhand). The sample included 19 males and 16 females. Children with comorbid epilepsy, sensory deficit (like impairment of vision, hearing), other psychiatric disorders and physical problems were excluded. Characteristics of the study population are given in Table 1 .
Tools
Specially designed socio-demographic data sheet
A format was developed to record the background information about the subject, like name, age, sex, level of retardation/epilepsy, etc.
Vineland social maturity scale (Nagpur adaptation)
The scale was originally developed by E. A. Doll in 1935 and adapted by Dr. A. J. Malin in 1965. It measures the differential social capacity of an individual. It provides an estimate of social age (SA) and social quotient (SQ) and shows a high correlation (0.80) with intelligence. It is designed to measure social maturation in 8 social areas. The scale consists of 89 items grouped into year levels (13 age groups). It is applicable to the age group below 15 years, i.e., from birth to 15 years.
Stanford Binet intelligence scale (Hindi adaptation)
It was originally developed by Alfred Binet, with the help of Simon, in 1905 in France. In India, its Hindi version was developed by S. K. Kulshrestha. Its 1960 revision covers mental age scores from 2 years to 22 years and 11 months. The single Binet L-M form is available, with norms on data as recent as 1972. This form measures abilities in 7 categories: language, reasoning, memory, social intelligence, conceptual thinking, numerical reasoning and visual-motor skills. Test items are in the form of words, objects and pictures, and the testees' responses take the form of drawing, calculating, writing and speaking. In this revision, intelligence is expressed as a standard score, the IQ.
Procedure
Mentally retarded children were identified on the basis of the International Classification of Diseases, 10th revision (Diagnostic Criteria for Research). Informed consent was taken from the informants before eliciting relevant information, and the nature and purpose of the study were explained. All subjects selected for the present study were interviewed and then assessed for IQ with the help of the Stanford Binet Intelligence Scale. Thereafter, the Vineland Social Maturity Scale was administered to determine the level of social development of each subject.
Analysis of data
Data has been analyzed using means, standard deviations, Kruskal-Wallis (nonparametric) one-way ANOVA test, chi-square test and Pearson correlation on social quotient. | RESULTS AND DISCUSSION
One-way analysis of variance was carried out to find out if there was any significant difference in social development in relation to various levels of mental retardation [ Table 2 and Figure 1 ].
The Kruskal-Wallis one-way ANOVA was significant at the .01 level (χ² = 14.9; df = 3). This indicates that there were statistically significant differences in the social development of children in relation to the various levels of mental retardation, with the degrees of social development (in terms of SQs) for mild, moderate, severe and profound retardation being 59.4, 42.1, 30.8 and 19.0, respectively, and the standard deviations being 20.3, 14.4, 8.6 and 9.5, respectively. This suggests that there are significant differences in the social development of each category of retardation. It is observed that with increasing severity of mental retardation, the level of social development decreases. The findings strongly suggest that among children with mental retardation too, cognitive and social skills are interrelated: intellectual development and social development go together in the same direction. Similar observations were reported by Pati et al. (1996).
The computed correlation of social quotient with age group was -0.17, which is not statistically significant; this may indicate stability of the social quotient with increasing age [Table 3].
The computed Pearson correlation coefficient between IQ and SQ was .785, which is significant. This may indicate a relationship between IQ and SQ, i.e., between intellectual capacity and social development [Table 3].
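For readers who wish to reproduce this type of analysis, the following sketch shows how a Kruskal-Wallis test across retardation levels and a Pearson correlation between IQ and SQ can be computed in Python with scipy. The SQ and IQ values used are invented placeholders for illustration; they are not the study's data.

# A minimal sketch of the two analyses reported above, using scipy.
# All numbers below are invented placeholders, NOT the study's raw data.
from scipy import stats

# Social quotients grouped by level of retardation (hypothetical values)
sq_mild = [62, 55, 70, 48, 61]
sq_moderate = [45, 38, 50, 41]
sq_severe = [32, 28, 35]
sq_profound = [18, 22, 15]

# Kruskal-Wallis (nonparametric) one-way ANOVA, as named in 'Analysis of data';
# with 4 groups its statistic is referred to a chi-square distribution, df = 3.
h_stat, p_value = stats.kruskal(sq_mild, sq_moderate, sq_severe, sq_profound)
print(f"Kruskal-Wallis H = {h_stat:.2f}, P = {p_value:.4f}")

# Pearson correlation between paired IQ and SQ scores (hypothetical values)
iq = [55, 40, 30, 20, 60, 45, 35, 25]
sq = [59, 42, 31, 19, 64, 44, 33, 22]
r, p_corr = stats.pearsonr(iq, sq)
print(f"Pearson r = {r:.3f}, P = {p_corr:.4f}")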
This study shows that as the level of mental retardation increases, social development decreases correspondingly, and that age had no impact on the social development of mentally retarded children. Most people think that social development is simply enhanced as the child grows: today he is a child; tomorrow he will be socially developed. This study will make the parents of mentally retarded children aware of the need for counseling. The greater the degree of mental retardation, the more intense the need for special education and training; similarly, the lower the IQ, the greater the requirement for special training. Many parents feel it is futile to spend money on the social development of children with severe and profound mental retardation, reasoning that the expense will have no utility and asking why such money should not be used for normal children instead. This study will be helpful in making people aware of the necessity of training, managing and rehabilitating children with mental retardation. Right from the very beginning, there is an effective role for parents, teachers and other professionals in the enhancement of the social skills of mentally retarded children. This study opens the path for research to determine whether special education and training have any impact on the social development of mentally retarded children of various age groups.
Because of time constraints and excessive workloads, trained psychologists are often unable to assess IQ, as the number of clinical psychologists throughout India is only about 600-700 (Nathawat et al., 2001). Therefore, other methods are required for IQ assessment. Among the various techniques, the social development scale is an important method for estimating IQ. The social development scale is relatively easy to administer and has practical application in the assessment of IQ. It has also been found useful in the management of disabled persons. SQ and IQ are highly correlated (.80) on the Stanford Binet Intelligence Scale, and the same was found in the index study as well. The magnitude of MR can be gauged from the SQ where IQ testing is not possible. Many clinicians use the social development scale in their clinics for children and adolescents, as it is a valuable device for interviewing and counseling both parents and children. | RESULTS AND DISCUSSION
| CONCLUSION
It can be concluded that the social quotient increases as level of mental retardation decreases from profound to mild. The social quotient across different age ranges does not differ significantly. Clinical psychologists who are working with underprivileged children/individuals may use the Vineland Social Maturity Scale as a rapid screening test for determining IQ and capacity for social adjustment. | Background:
Social development of children with mental retardation has implications for prognosis. The present study evaluated whether the social maturity scale alone can reflect the social maturity, intellectual level and consequent adjustment in family and society of children with mental retardation.
Materials and Methods:
Thirty-five mentally retarded children were administered the Vineland Social Maturity Scale and the Stanford Binet Intelligence Scale.
Results:
It was found that there was a significant relationship between the measures of the social maturity scale and the IQ of the subjects. Further, it was found that with increasing severity of retardation, social development decreases, and that age does not have any effect on social development.
Conclusion:
Social quotient increases from profound to mild level of retardation. | Mental retardation (MR) is one of the most distressing handicaps in any society. Development of an individual with mental retardation depends on the type and extent of the underlying disorder, the associated disabilities, environmental factors, psychological factors, cognitive abilities and comorbid psychopathological conditions (Ludwik, et al ., 2001). Social development means acquisition of the ability to behave in accordance with social expectations (Pati et al ., 1996). Becoming socialized involves 3 processes: i) learning to behave in socially approved ways, ii) playing approved social roles and iii) development of social attitudes (Hurlock, 1967). For people with mental retardation, their eventual level of social development has implication for the degree of support needed in their literacy arrangement and their integration in the community with increasing emphasis on mainstreaming the attainment of skills in personal, domestic and community functioning. It also contributes considerably to quality of life. Thus investigation of factors that may facilitate or inhibit social development assumes particular importance.
Mentally retarded children, due to low intellectual growth, function with a limited capacity in comparison to normal children. Hence the social functioning of these children is found to be affected, and this is closely related to degree of impairment. In addition to brain pathology, there are other factors related to the malfunctioning of these children in a normal social setup. A particular environmental setup in which a child grows up is likely to play an important part in improving or deteriorating the child’s functioning in a social milieu. Shastri and Mishra (1971) assessed 56 school-going children (aged 6-13 years) with mental retardation with the help of Social Maturity Scale and found that the mentally retarded children function more in the lower level of social interaction. As the degree of impairment in terms of intelligence goes down, it is observed that the child approaches an average or satisfactory level of social functioning. They also found that the level of social development varies with the intellectual level among persons with mental retardation, or a wide range of family and environmental variables may also influence social development. Pati et al ., (1996) designed a study to identify the effects of severity of retardation, age, type of services attended and location of services in rural/urban area on the social development of children with mental retardation using a sample of 113 subjects diagnosed as children with mental retardation. The analysis of results suggested that with increasing severity of retardation, social development also decreases. Further it was found that age, type of services and location of center do not have any effect on social development. Mayers et al ., (1979) found that a positive relationship exists between measures of adaptive behavior and IQ or mental age. Cornbell et al ., (1969) assessed relationship between social and cognitive functioning. For people with Down’s syndrome, the level of social functioning was found to exceed the level of cognitive functioning. Matson et al ., (1999) designed a study to identify the effects of seizure disorders/epilepsy on psychopathology, social functioning, adaptive functioning and maladaptive behaviors using a sample of 353 people diagnosed with a seizure disorder and either severe or profound intellectual disability. People with a diagnosis of seizure disorder were found to have significantly less social and adaptive skills when compared to developmentally disabled controls with no seizure disorder diagnosis. In the light of the above investigation, the present study was designed with the following aims: (1) to find out the effects of severity of mental retardation on social development, along with possible correlation with social quotient (SQ) and IQ, which will eventually help in formulating appropriate training management and rehabilitation of mentally retarded children; and (2) to find out the relationship between age and social development. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):56-59 | oa_package/2f/3f/PMC3016702.tar.gz |
||
PMC3016703 | 21234166 | DISCUSSION
Clark & Wells (1995) and Clark (2001) have developed a cognitive model for the management of social phobia [Figure 1]. The aim of the model was to answer the question of why the fears of someone with social phobia are maintained despite frequent exposure to social or public situations and nonoccurrence of the feared catastrophes. The model suggests that when patients enter a social situation, certain rules (e.g., “I must always appear witty and intelligent.”), assumptions (e.g., “If a woman really gets to know me, she will think I am worthless.”) or unconditional beliefs (e.g., “I am weird and boring.”) are activated. When individuals believe that they are in danger of negative evaluation, an attentional shift occurs towards detailed self-observation and monitoring of sensations and images. Socially anxious individuals thus use internal information to infer how others are evaluating them. [In Figure 1, this is ‘processing of self as a social object.’] The internal information is associated with feelings of anxiousness, and vivid or distorted images are imagined from an observer’s perspective. These images are mostly visual, but they might also include bodily sensations and auditory or olfactory perspectives. This is not, of course, what an observer actually ‘sees.’ Recurrent images can be elicited by asking patients to recall a social situation associated with extreme anxiety. The images are usually linked to early memories. The therapist asks the patient whether he or she remembers first having the experience encapsulated in the recurrent image and to recall the sensory features and meaning that this had. For example, someone who had an image of being fat remembered being teased during adolescence, which resulted at the time in feelings of humiliation and rejection.

A second factor that maintains symptoms of social phobia is safety behaviors. These are actions taken in feared situations in order to prevent feared catastrophes. Safety behaviors in social phobia include using alcohol; avoiding eye contact; gripping a glass too tightly; excessive rehearsing of a presentation; reluctance to reveal personal information; and asking many questions. Safety behaviors are often problematic: they prevent disconfirmation of the feared catastrophe; they can heighten self-focused attention and monitoring to determine if the behavior is ‘working’; they increase the feared symptoms (e.g., keeping the arms close to the body to stop others seeing one sweat will increase sweating); they have an effect on others (e.g., the individual may appear cold and unfriendly, so that a feared catastrophe becomes a self-fulfilling prophecy); and they can draw attention to feared symptoms (e.g., speaking quietly and slowly will lead others to focus on the individual even more).

It is hypothesized that a third factor that maintains symptoms of social phobia is anticipatory and post-event processing. Such processing focuses on the feelings and constructed images of the self in the event and leads to selective retrieval of past failures.
The results of the present case study are in agreement with those of earlier studies that indicate the significance of CBT in the treatment of patients suffering from social phobia (Ponniah & Hollon, 2008). Psycho-education proved to be very useful in understanding the dynamics of the patient problems, as well as to enable the patient to proceed in positive direction with the help of emotional support. There was decrease in anxiety and distress. Anxiety improved with the practice of Jacobson’s progressive muscular relaxation technique. Similar results have been reported by David (2004). Exposure techniques involve repeatedly facing previously avoided situations in a graded manner until habituation occurs. Cognitive restructuring helped in modifying negative automatic thoughts, which in turn helped in improving the patient’s self-esteem and changing the patient’s perception and way of thinking about the world and himself as well. Cognitive treatment is useful in restructuring and modifying the patient’s negative cognitive beliefs towards himself and others. The emphasis is on shifting the focus of attention, dropping safety behaviors, processing the situation and evaluating what was predicted against what actually happened. | CONCLUSION
The case report highlights the fact that a combination of cognitive, emotional and behavioral approaches is effective and is the initial choice of treatment for social phobia. | Cognitive behavior therapy is probably the most well-known and most practiced form of modern psychotherapy and has been integrated into a highly structured package for the treatment of patients suffering from social phobia. The present case study is an attempt to provide a therapeutic intervention program to a 27-year-old, unmarried Christian man suffering from social phobia. The patient was treated using cognitive behavioral techniques. After 17 sessions of the therapeutic intervention program, significant improvement was found. He was under follow-up for a period of 6 months and recovered to the premorbid level of functioning. | Social phobia consists of a marked and persistent fear of encountering other people, usually in small groups, or of doing certain acts in public, like eating in public, using public toilets, public speaking or encounters with persons of the opposite sex. Affected individuals fear that they will be evaluated negatively or that they will act in a manner resulting in their humiliation or embarrassment; whenever they are expected to go into the phobic situations, they develop severe anticipatory anxiety. They utilize various excuses to avoid phobic situations. This avoidance usually affects their lives quite adversely. Many of these patients exhibit psychological symptoms of poor self-confidence, show anxiety over trifles and may be very conscious of some physical or psychological defect in themselves; as a result, they may develop secondary depression. Exposure to social situations can produce physical symptoms such as sweating, blushing, muscle tension, pounding heart, dry mouth, nausea, urgency of micturition, shaky voice or trembling. Social phobia is the third most common mental disorder in adults worldwide, with a lifetime prevalence of at least 5% (depending on the threshold for distress and impairment). There is an equal gender ratio in treatment settings; but in catchment area surveys, there is a female preponderance of 3:2. Affected individuals are more likely to be unmarried and have a low socioeconomic status. Although common, social phobia is often not diagnosed or effectively treated. There have, however, been a number of developments in our understanding and treatment of social phobia over the past decade. Cognitive and behavioral interventions for social phobia appear to be more effective than wait-list controls and supportive therapy. Cognitive behavioral treatment involving cognitive restructuring plus exposure appears to be an effective treatment and exhibits a larger effect than exposure, social skills training or cognitive restructuring alone. The sessions of CBT for social phobia are devoted to training clients in the basic tenets of cognitive therapy, especially the link between faulty assumptions or irrational thinking about social situations and the anxiety experienced in those situations (Albano & DiBartolo, 2007; David, 2003; Leichsenring et al., 2009).
CASE REPORT
The patient was a 27-year-old man suffering from social phobia, the youngest in his family, unmarried, a graduate, of average socioeconomic status and hailing from Jharkhand state, India. The patient came to the RINPAS OPD with complaints of fearfulness in crowds, sweating, low confidence, negative thoughts, decreased interaction and an inferiority complex. The duration of illness was 5 to 6 years. The patient had difficulty in carrying out his daily routine; consequently, he came for treatment. It was revealed from his history that he was more fearful than other persons of his age; from childhood, his mother was overprotective of him. His father was dominating and did not listen to anyone in the family; the patient was very scared of his father. Due to fearfulness, he remained dependent on others for the completion of simple tasks. Gradually he started avoiding gatherings and crowds and did not go out of the home. He found it difficult to interact with unknown people and even to open up with people with whom he was familiar. He was unable to talk with them in a crowd. He thought that he did not have a good pattern of behavior and could not behave like other people. Although he put in efforts to behave normally, he sweated a lot during public interaction. Whenever he went out to social gatherings, he thought people were avoiding him, and he felt inferior or disapproved of. His self-esteem decreased gradually, as he could not take the initiative in any activity. Negative ideas also developed in his mind — that he would never flourish, that he would not be successful and that he would not be able to behave like other people in society. Behavioral analysis was done with regard to the antecedents, frequency, duration and intensity of the target behavior and the motivation of the patient. Assessment regarding the family interaction system, the available and perceived support systems, and the behavior of other significant persons towards the patient was done systematically.
THERAPEUTIC PROCESS
Assessment of the problem
At first, rapport was established with the patient, and then a clinical interview was conducted; the patient had been suffering for the last 5 to 6 years. Due to fear of social gatherings, the patient was unable to interact with unknown people. He had lost his confidence and was unable to perform his work efficiently. Whenever he went to new places, he started sweating. He also suffered from an inferiority complex and had lost his interest in work. He was unable to maintain his daily routine, as he was lethargic. Most of the time, he was worried about his problems and was unable to overcome this feeling of worry. This severely disturbed his social functioning, and he developed depressive features and poor self-esteem.
Tools
The following tools were used for assessing the patient.
Beck depression inventory (Beck & Steer, 1990)
To assess the severity of depression, the Beck Depression Inventory (BDI) was administered. Assessment revealed a mild level of depression in the patient. Lack of satisfaction, a sense of failure, indecisiveness, sleep disturbance, hopelessness and guilt feelings were the main features.
Social phobia inventory (Liebowitz, 2002)
The SPIN is a self-rated questionnaire that measures the 3 commonly seen symptom clusters of social anxiety disorder: fear; phobic avoidance; and autonomic symptoms such as blushing, sweating and trembling.
Objectives of the intervention
After assessment of the problem, the intervention package focused on the following:
To motivate the patient for therapy.
To prepare him to deal with and face phobic situations he avoided due to anxiety.
To reduce his anxiety.
To reduce his inferiority complex and increase his self-esteem.
To modify his negative thoughts.
The therapeutic package consisted of the following intervention techniques:
Psycho-education
Jacobson's progressive muscular relaxation technique
Systematic desensitization
Exposure and response prevention technique
Cognitive restructuring
Seventeen sessions of cognitive behavior therapy were conducted over a period of 15 weeks to achieve the goal. Each session lasted for 1 to 1½ hours. First of all, the patient was explained the basic nature and purpose of cognitive therapy, and the significance of this collaborative approach was discussed in detail. Then a therapeutic intervention strategy was planned, and the following behavioral and cognitive techniques were implemented.

The therapeutic intervention program started with psycho-education. Although the patient had partial insight into his problem, as he recognized a few of his symptoms as part of the illness, proper counseling was done to enhance his insight adequately. The patient was educated about the nature, symptomatology, causative factors, course and maintaining factors of the illness. In the next session, the role of medication and psychotherapy in the process of treatment and recovery was explained to him. In the following session, to reduce anxiety, the patient was trained in Jacobson's progressive muscular relaxation technique. It started with a breathing exercise, after which the patient underwent the progressive muscular relaxation process. Training was given over several sessions, and he was persuaded to practice the process at home. After following this process, his anxiety gradually reduced; the patient himself confirmed that there were then fewer situations he was unable to deal with.

After discussion with the patient in the next session, systematic desensitization was carried out, which involves gradual exposure to the phobic stimulus along a hierarchy of increasing intensity, and was continued until the patient was habituated to the situation and the avoidance response was extinguished. The relaxation technique was also used before situational exposure. The procedure of exposure and response prevention was adopted to evoke anxiety in the patient, who was advised to attend social gatherings and present a speech in front of a few people. His friends were requested to monitor the exposure and response prevention and keep an eye on noticeable changes in his behavior. It was noticed that systematic desensitization and exposure with response prevention helped reduce his anxiety level. In subsequent sessions, in order to modify his negative thoughts and faulty cognition, cognitive restructuring was done, in which attempts were made to restructure the negative and wrong beliefs he had developed from his childhood. He was taught how to challenge the negative thoughts himself in a rational, objective and analytical manner.
Marked improvement was noticed after 17 sessions of therapeutic intervention program. The level of anxiety and guilt feeling decreased. His self-esteem increased and he was able to attend social gatherings; also, his negative thoughts about himself were modified. This helped in the recovery of his illness. The patient expanded the activities of the institution where he worked as part of an NGO to other cities and was able to attend various social gatherings. At the end of 6 months, follow-up was done. There was significant improvement, which ultimately led the patient to maintain a normal daily routine. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):60-63 | oa_package/b2/5c/PMC3016703.tar.gz |
||||
PMC3016704 | 21234167 | CONCLUSION
Misinterpretation of P values is extremely common. One of the reasons may be that those who teach research methods do not themselves appreciate the problem. The P value is the probability of obtaining a value of the test statistic as large as or larger than the one computed from the data when in reality there is no difference between the different treatments. It is not, as is commonly assumed, the probability of being wrong when asserting that a difference exists.
Lastly, we must remember we do not establish proof by hypothesis testing, and uncertainty will always remain in empirical research; at the most, we can only quantify our uncertainty. | Few clinicians grasp the true concept of probability expressed in the ‘ P value.’ For most, a statistically significant P value is the end of the search for truth. In fact, the opposite is the case. The present paper attempts to put the P value in proper perspective by explaining different types of probabilities, their role in clinical decision making, medical research and hypothesis testing. | The clinician who wishes to remain abreast with the results of medical research needs to develop a statistical sense. He reads a number of journal articles; and constantly, he must ask questions such as, “Am I convinced that lack of mental activity predisposes to Alzheimer’s? Or “Do I believe that a particular drug cures more patients than the drug I use currently?”
The results of most studies are quantitative; and in earlier times, the reader made up his mind whether or not to accept the results of a particular study by merely looking at the figures. For instance, if 25 out of 30 patients were cured with a new drug compared with 15 out of the 30 on placebo, the superiority of the new drug was readily accepted.
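Whether a difference such as 25/30 versus 15/30 could plausibly have arisen by chance is precisely the question a significance test addresses. As a hedged illustration (the figures are taken from the example above, but the test itself is not part of the original text), the comparison can be run in Python as follows:

# Illustrative only: testing the 25/30 vs. 15/30 cure rates from the text
# with a chi-square test on the 2x2 contingency table.
from scipy.stats import chi2_contingency

#         cured  not cured
table = [[25, 5],    # new drug
         [15, 15]]   # placebo

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.4f}")
# By the convention discussed below, P < .05 would be labelled
# 'statistically significant'.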
In recent years, the presentation of medical research has undergone much transformation. Nowadays, no respectable journal will accept a paper if the results have not been subjected to statistical significance tests. The use of statistics has accelerated with the ready availability of statistical software. It has now become fashionable to organize workshops on research methodology and biostatistics. No doubt, this development was long overdue and one concedes that the methodologies of most medical papers have considerably improved in recent years. But at the same time, a new problem has arisen. The reading of medical journals today presupposes considerable statistical knowledge; however, those doctors who are not familiar with statistical theory tend to interpret the results of significance tests uncritically or even incorrectly.
It is often overlooked that the results of a statistical test depend not only on the observed data but also on the choice of statistical model. The statistician doing analysis of the data has a choice between several tests which are based on different models and assumptions. Unfortunately, many research workers who know little about statistics leave the statistical analysis to statisticians who know little about medicine; and the end result may well be a series of meaningless calculations.
Many readers of medical journals do not know the correct interpretation of ‘ P values,’ which are the results of significance tests. Usually, it is only stated whether the P value is below 5% ( P < .05) or above 5% ( P > .05). According to convention, the results of P < .05 are said to be statistically significant, and those with P > .05 are said to be statistically nonsignificant. These expressions are taken so seriously by most that it is almost considered ‘unscientific’ to believe in a nonsignificant result or not to believe in a ‘significant’ result. It is taken for granted that a ‘significant’ difference is a true difference and that a ‘nonsignificant’ difference is a chance finding and does not merit further exploration. Nothing can be further from the truth.
The present paper endeavors to explain the meaning of probability, its role in everyday clinical practice and the concepts behind hypothesis testing.
WHAT IS PROBABILITY?
Probability is a recurring theme in medical practice. No doctor who returns home from a busy day at the hospital is spared the nagging feeling that some of his diagnoses may turn out to be wrong, or some of his treatments may not lead to the expected cure. Encountering the unexpected is an occupational hazard in clinical practice. Doctors after some experience in their profession reconcile to the fact that diagnosis and prognosis always have varying degrees of uncertainty and at best can be stated as probable in a particular case.
Critical appraisal of medical journals also leads to the same gut feeling. One is bombarded with new research results, but experience dictates that well-established facts of today may be refuted in some other scientific publication in the following weeks or months. When a practicing clinician reads that some new treatment is superior to the conventional one, he will assess the evidence critically, and at best he will conclude that probably it is true.
Two types of probabilities
The statistical probability concept is so widely prevalent that almost everyone believes that probability is a frequency . It is not, of course, an ordinary frequency which can be estimated by simple observations, but it is the ideal or truth in the universe , which is reflected by the observed frequency. For example, when we want to determine the probability of obtaining an ace from a pack of cards (which, let us assume has been tampered with by a dishonest gambler), we proceed by drawing a card from the pack a large number of times, as we know in the long run, the observed frequency will approach the true probability or truth in the universe. Mathematicians often state that a probability is a long-run frequency, and a probability that is defined in this way is called a frequential probability . The exact magnitude of a frequential probability will remain elusive as we cannot make an infinite number of observations; but when we have made a decent number of observations (adequate sample size), we can calculate the confidence intervals, which are likely to include the true frequential probability. The width of the confidence interval depends on the number of observations (sample size).
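This long-run behaviour is easy to demonstrate by simulation. The sketch below assumes a hypothetical tampered deck containing 6 aces out of 52 cards; it estimates the frequential probability of drawing an ace (with replacement) and attaches a 95% confidence interval using the normal approximation to the binomial:

# Simulating repeated draws from a hypothetical tampered deck to show how
# an observed frequency approaches the underlying frequential probability.
import math
import random

random.seed(1)
TRUE_P = 6 / 52      # assumption: the tampered deck holds 6 aces in 52 cards
n_draws = 10_000     # number of draws with replacement

aces = sum(random.random() < TRUE_P for _ in range(n_draws))
p_hat = aces / n_draws

# 95% confidence interval via the normal approximation to the binomial
se = math.sqrt(p_hat * (1 - p_hat) / n_draws)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"observed frequency = {p_hat:.4f} (true value {TRUE_P:.4f})")
print(f"95% CI: ({low:.4f}, {high:.4f}); the interval narrows as n_draws grows")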
The frequential probability concept is so prevalent that we tend to overlook terms like chance, risk and odds, in which the term probability carries a different meaning. A few hypothetical examples will make this clear. Consider the statement, “The cure for Alzheimer’s disease will probably be discovered in the coming decade.” This statement does not indicate the basis of this expectation or belief, as in frequential probability, where a number of repeated observations provide the foundation for probability calculation. However, it may be based on the present state of research in Alzheimer’s. A probabilistic statement incorporates some amount of uncertainty, which may be quantified as follows: a politician may state that there is a fifty-fifty chance of winning the next election, a bookie may say that the odds of India winning the next one-day cricket game are four to one, and so on. At first glance, such probabilities may appear to be frequential ones, but a little reflection will reveal the contrary. We are concerned with unique events, i.e., the likely cure of a disease in the future, the next particular election, the next particular one-day game — and it makes no sense to apply the statistical idea that these types of probabilities are long-run frequencies. Further reflection will show that these statements about the probabilities of the election and the one-day game are no different from the one about the cure for Alzheimer’s, apart from the fact that in the former cases an attempt has been made to quantify the magnitude of belief in the occurrence of the event.
It follows from the above deliberations that we have 2 types of probability concepts. In the jargon of statistics, a probability is an ideal or truth in the universe which lies beneath an observed frequency — such probabilities may be called frequential probabilities. In literary language, a probability is a measure of our subjective belief in the occurrence of a particular event or the truth of a hypothesis. Such probabilities, which may be quantified so that they look like frequential ones, are called subjective probabilities. Bayesian statistical theory also takes subjective probabilities into account (Lindley, 1973; Winkler, 1972). The following examples will try to illustrate these (rather confusing) concepts.
A young man is brought to the psychiatry OPD with history of withdrawal. He also gives history of talking to himself and giggling without cause. There is also a positive family history of schizophrenia. The consulting psychiatrist who examines the patient concludes that there is a 90% probability that this patient suffers from schizophrenia.
We ask the psychiatrist what makes him make such a statement. He may not be able to say that he knows from experience that 90% of such patients suffer from schizophrenia. The statement therefore may not be based on observed frequency. Instead, the psychiatrist states his probability based on his knowledge of the natural history of disease and the available literature regarding signs and symptoms in schizophrenia and positive family history. From this knowledge, the psychiatrist concludes that his belief in the diagnosis of schizophrenia in that particular patient is as strong as his belief in picking a black ball from a box containing 10 white and 90 black balls. The probability in this case is certainly subjective probability .
Let us consider another example: A 26-year-old married female patient who suffered from severe abdominal pain is referred to a hospital. She is also having amenorrhea for the past 4 months. The pain is located in the left lower abdomen. The gynecologist who examines her concludes that there is a 30% probability that the patient is suffering from ectopic pregnancy.
As before, we ask the gynecologist to explain on what basis the diagnosis of ectopic pregnancy is suspected. In this case the gynecologist states that he has studied a large number of successive patients with this symptom complex of lower abdominal pain with amenorrhea, and that a subsequent laparotomy revealed an ectopic pregnancy in 30% of the cases.
If we accept that the study cited is large enough to make us assume that the possibility of the observed frequency of ectopic pregnancy did not differ from the true frequential probability, it is natural to conclude that the gynecologist’s probability claim is more ‘evidence based’ than that of the psychiatrist, but again this is debatable.
In order to grasp this in proper perspective, it is necessary to note that the gynecologist stated that the probability of ectopic pregnancy in that particular patient was 30%. Therefore, we are concerned with a unique event just as the politician’s next election or India’s next one-day match. So in this case also, the probability is a subjective probability which was based on an observed frequency . One might also argue that even this probability is not good enough. We might ask the gynecologist to base his belief on a group of patients who also had the same age, height, color of hair and social background; and in the end, the reference group would be so restrictive that even the experience from a very large study would not provide the necessary information. If we went even further and required that he must base his belief on patients who in all respects resembled this particular patient, the probabilistic problem would vanish as we will be dealing with a certainty rather than a probability.
The clinician’s belief in a particular diagnosis in an individual patient may be based on the recorded experience in a group of patients, but it is still a subjective probability. It reflects not only the observed frequency of the disease in a reference group but also the clinician’s theoretical knowledge which determines the choice of reference group (Wulff, Pedersen and Rosenberg, 1986). Recorded experience is never the sole basis of clinical decision making.
GAP BETWEEN THEORY AND PRACTICE
The two situations described above are relatively straightforward. The physician observed a patient with a particular set of signs and symptoms and assessed the subjective probability of the diagnosis in each case. Such probabilities have been termed diagnostic probabilities (Wulff, Pedersen and Rosenberg, 1986). In practice, however, clinicians make diagnoses in a more complex manner, which they themselves may be unable to analyze logically.
For instance, suppose the clinician suspects one of his patients is suffering from a rare disease named ‘D.’ He requests a suitable test to confirm the diagnosis, and suppose the test is positive for disease ‘D.’ He now wishes to assess the probability of the diagnosis being positive on the basis of this information, but perhaps the medical literature only provides the information that a positive test is seen in 70% of the patients with disease ‘D.’ However, it is also positive in 2% of patients without disease ‘D.’ How to tackle this doctor’s dilemma? First a formal analysis may be attempted, and then we can return to everyday clinical thinking. The frequential probability which the doctor found in the literature may be written in the statistical notation as follows:
P (S/D+) = .70, i.e., the probability of the presence of this particular sign (or test) given this particular disease is 70%.
P (S/D–) = .02, i.e., the probability of this particular sign given the absence of this particular disease is 2%.
However, such probabilities are of little clinical relevance. The clinical relevance is in the ‘opposite’ probability. In clinical practice, one would like to know the P (D/S), i.e., the probability of the disease in a particular patient given this positive sign. This can be estimated by means of Bayes’ Theorem (Papoulis, 1984; Lindley, 1973; Winkler, 1972). The formula of Bayes’ Theorem is reproduced below, from which it will be evident that to calculate P(D/S), we must also know the prior probability of the presence and the absence of the disease, i.e., P (D+) and P (D–).
P (D/S) = P (S/D+) P (D+) ÷ [P (S/D+) P (D+) + P (S/D–) P (D–)]
In the example of the disease ‘D’ above, let us assume that we estimate that prior probability of the disease being present, i.e., P (D+), is 25%; and therefore, prior probability of the absence of disease, i.e., P (D–), is 75%. Using the Bayes’ Theorem formula, we can calculate that the probability of the disease given a positive sign, i.e., P (D/S), is 92%.
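The arithmetic behind the 92% figure is easy to verify; the function below is a direct transcription of the formula above, evaluated with the stated inputs, and is intended only as a check of the calculation:

# Bayes' Theorem exactly as written above:
# P(D/S) = P(S/D+) P(D+) / [P(S/D+) P(D+) + P(S/D-) P(D-)]

def posterior(p_sign_given_disease, p_sign_given_no_disease, p_disease):
    p_no_disease = 1 - p_disease
    numerator = p_sign_given_disease * p_disease
    denominator = numerator + p_sign_given_no_disease * p_no_disease
    return numerator / denominator

# Inputs from the worked example: P(S/D+) = .70, P(S/D-) = .02, P(D+) = .25
p = posterior(0.70, 0.02, 0.25)
print(f"P(D/S) = {p:.3f}")   # 0.175 / 0.190 = 0.921, i.e., about 92%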
We of course do not suggest that clinicians should always make calculations of this sort when confronted with a diagnostic dilemma, but they must in an intuitive way think along these lines. Clinical knowledge is to a large extent based on textbook knowledge, and ordinary textbooks do not tell the reader much about the probabilities of different diseases given different symptoms. Bayes’ Theorem guides a clinician how to use textbook knowledge for practical clinical purposes.
The practical significance of this point is illustrated by the European doctor who accepted a position at a hospital in tropical Africa. In order to prepare himself for the new job, he bought himself a large textbook of tropical medicine and studied in great detail the clinical pictures of a large number of exotic diseases. However, for several months after his arrival at the tropical hospital, his diagnostic performance was very poor, as he knew nothing about the relative frequency of all these diseases. He had to acquaint himself with the prior probability, P (D +), of the diseases in the catchment area of the hospital before he could make precise diagnoses.
The same thing happens on a smaller scale when a doctor trained at a university hospital establishes himself in general practice. At the beginning, he will suspect his patients of all sorts of rare diseases (which are common at the university hospital), but after a while he will learn to assess correctly the frequency of different diseases in the general population.
PROBABILITY AND HYPOTHESIS TESTING
Besides making predictions about individual patients, the doctor is also concerned with generalizations to the population at large, or the target population. We may say that probably there was once life on Mars. We may even quantify our belief and state that there is a 95% probability that depression responds more quickly during treatment with a particular antidepressant than during treatment with a placebo. These probabilities are again subjective probabilities rather than frequential probabilities. The last statement does not imply that 95% of depression cases respond to the particular antidepressant, or that 95% of the published reports mention that the particular antidepressant is the best. It simply means that our belief in the truth of the statement is the same as our belief in picking a red ball from a box containing 95 red balls and 5 white balls; that is, we are almost, but not totally, convinced that the average recovery time during treatment with the particular antidepressant is shorter than during placebo treatment.
The purpose of hypothesis testing is to aid the clinician in reaching a conclusion concerning a universe by examining a sample from that universe. A hypothesis may be defined as a presumption or statement about the truth in the universe. For example, a clinician may hypothesize that a certain drug will be effective in 80% of cases of schizophrenia. A hypothesis frequently concerns the parameters of the population about which the presumption or statement is made, and it is the basis that motivates the research project. There are two types of hypotheses: the research hypothesis and the statistical hypothesis (Daniel, 2000; Guyatt et al ., 1995).
Genesis of research hypothesis
Hypotheses may be generated by deduction from anatomical or physiological facts, or from clinical observations.
Statistical hypothesis
Statistical hypotheses are hypotheses that are stated in such a way that they may be evaluated by appropriate statistical techniques.
Pre-requisites for hypothesis testing
Nature of data
The types of data that form the basis of hypothesis testing procedures must be understood, since these dictate the choice of statistical test.
Presumptions
These presumptions are the normality of the population distribution, equality of the standard deviations, and random sampling.
Hypothesis
There are 2 statistical hypotheses involved in hypothesis testing. These should be stated a priori and explicitly. The null hypothesis is the hypothesis to be tested. It is denoted by the symbol H 0 . It is also known as the hypothesis of no difference . The null hypothesis is set up with the sole purpose of efforts to knock it down. In the testing of hypothesis, the null hypothesis is either rejected (knocked down) or not rejected (upheld). If the null hypothesis is not rejected, the interpretation is that the data do not provide sufficient evidence to cause rejection. If the testing process rejects the null hypothesis, the inference is that the data available to us are not compatible with the null hypothesis, and by default we accept the alternative hypothesis , which in most cases is the research hypothesis. The alternative hypothesis is designated with the symbol H A .
Limitations
Neither hypothesis testing nor statistical tests lead to proof; they merely indicate whether the hypothesis is or is not supported by the available data. When we fail to reject a null hypothesis, we do not mean that it is true, only that it may be true. We should keep this limitation in mind whenever we do not reject the null hypothesis, and should not convey the impression that non-rejection implies proof.
Test statistic
The test statistic is the statistic that is derived from the data from the sample. Evidently, many possible values of the test statistic can be computed depending on the particular sample selected. The test statistic serves as a decision maker, nothing more and nothing less; it provides neither proof nor lack of it. The decision to reject or not to reject the null hypothesis depends on the magnitude of the test statistic.
Types of decision errors
The error committed when a true null hypothesis is rejected is called the type I error or α error . When a false null hypothesis is not rejected, we commit a type II error, or β error . When we reject a null hypothesis, there is always the risk (howsoever small it may be) of committing a type I error, i.e., rejecting a true null hypothesis. On the other hand, whenever we fail to reject a null hypothesis, the risk of failing to reject a false null hypothesis, or committing a type II error, will always be present. Put another way, the test statistic does not eliminate uncertainty (as many tend to believe); it only quantifies our uncertainty.
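The point that a significance test quantifies, rather than eliminates, uncertainty can be made concrete with a small simulation. The sketch below is our illustration (not part of the original text) and assumes the numpy and scipy libraries; it repeatedly compares two treatments that are in fact identical and counts how often a test at α = .05 nevertheless declares a significant difference:

```python
# When H0 is true (both drugs cure 50% of patients), a test at
# alpha = .05 still rejects H0 in a fraction of trials close to alpha:
# that fraction is the type I error rate.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
alpha, n_trials, n_patients = 0.05, 10_000, 25
rejections = 0
for _ in range(n_trials):
    cured_a = rng.binomial(n_patients, 0.5)  # drug A, true cure rate 50%
    cured_b = rng.binomial(n_patients, 0.5)  # drug B, same true cure rate
    table = [[cured_a, n_patients - cured_a],
             [cured_b, n_patients - cured_b]]
    p_value = chi2_contingency(table)[1]
    rejections += p_value < alpha
# Prints a value at or somewhat below 0.05; Yates' continuity correction
# (scipy's default for 2x2 tables) makes the test conservative.
print(rejections / n_trials)
```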
Calculation of test statistic
From the data contained in the sample, we compute a value of the test statistic and compare it with the rejection and non-rejection regions, which have to be specified in advance.
Statistical decision
The statistical decision consists of rejecting or of not rejecting the null hypothesis. It is rejected if the computed value of the test statistic falls in the rejection region, and it is not rejected if the value falls in the non-rejection region.
Conclusion
If H 0 is rejected, we conclude that H A is true. If H 0 is not rejected, we conclude that H 0 may be true.
P values
The P value is a number that tells us how unlikely our sample values are, given that the null hypothesis is true. A P value indicating that the sample results are not likely to have occurred, if the null hypothesis is true, provides reason for doubting the truth of the null hypothesis.
We must remember that, when the null hypothesis is not rejected, one should not say the null hypothesis is accepted. We should mention that the null hypothesis is “not rejected.” We avoid using the word accepted in this case because we may have committed a type II error. Since, frequently, the probability of committing a type II error can be quite high (particularly with small sample sizes), we should not commit ourselves to accepting the null hypothesis.
INTERPRETATIONS
With the above discussion on probability, clinical decision making and hypothesis testing in mind, let us reconsider the meaning of P values. When we come across the statement that there is statistically significant difference between two treatment regimes with P < .05, we should not interpret that there is less than 5% probability that there is no difference, and that there is 95% probability that a difference exists, as many uninformed readers tend to do. The statement that there is difference between the cure rates of two treatments is a general one, and we have already discussed that the probability of the truth of a general statement (hypothesis) is subjective , whereas the probabilities which are calculated by statisticians are frequential ones. The hypothesis that one treatment is better than the other is either true or false and cannot be interpreted in frequential terms.
To explain this further, suppose someone claims that 20 (80%) of 25 patients who received drug A were cured, compared to 12 (48%) of 25 patients who received drug B. In this case, there are two possibilities, either the null hypothesis is true, which means that the two treatments are equally effective and the observed difference arose by chance; or the null hypothesis is not true (and we accept the alternative hypothesis by default), which means that one treatment is better than the other. The clinician wants to make up his mind to what extent he believes in the truth of the alternative hypothesis (or the falsehood of the null hypothesis ). To resolve this issue, he needs the aid of statistical analysis. However, it is essential to note that the P value does not provide a direct answer. Let us assume in this case the statistician does a significance test and gets a P value = .04, meaning that the difference is statistically significant ( P < .05). But as explained earlier, this does not mean that there is a 4% probability that the null hypothesis is true and 96% chance that the alternative hypothesis is true. The P value is a frequential probability and it provides the information that there is a 4% probability of obtaining such a difference between the cure rates, if the null hypothesis is true . In other words, the statistician asks us to assume that the null hypothesis is true and to imagine that we do a large number of trials. In that case, the long-run frequency of trials which show a difference between the cure rates like the one we found, or even a larger one, will be 4%.
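The article does not state which significance test produced the P value of .04; for a 2 × 2 table of these cure rates, a chi-square test with Yates’ continuity correction yields a value close to it. The following Python sketch, assuming scipy, is our illustration rather than part of the original text:

```python
# Cure rates from the example: drug A 20/25 cured, drug B 12/25 cured.
# A chi-square test with Yates' continuity correction (scipy's default
# for 2x2 tables) yields a P value close to the quoted .04.
from scipy.stats import chi2_contingency

table = [[20, 5],   # drug A: cured, not cured
         [12, 13]]  # drug B: cured, not cured
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p_value:.3f}")  # P is approximately 0.04
```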
Prior belief and interpretation of the P value
In order to elucidate the implications of the correct statistical definition of the P value, let us imagine that the patients who took part in the above trial suffered from depression, and that drug A was gentamicin, while drug B was a placebo. Our theoretical knowledge gives us no grounds for believing that gentamicin has any effect whatsoever on the cure of depression. For this reason, our prior confidence in the truth of the null hypothesis is immense (say, 99.99%), whereas our prior confidence in the alternative hypothesis is minute (0.01%). We must take these prior probabilities into account when we assess the result of the trial. We have the following choice. Either we accept the null hypothesis in spite of the fact that the probability of the trial result is fairly low at 4% ( P < .05) given the null hypothesis is true, or we accept the alternative hypothesis by rejecting the null hypothesis in spite of the fact that the subjective probability of that hypothesis is extremely low in the light of our prior knowledge.
It will be evident that the choice is a difficult one, as both hypotheses, each in its own way, may be said to be unlikely, but any clinician who reasons along these lines will choose that hypothesis which is least unacceptable: He will accept the null hypothesis and claim that the difference between the cure rates arose by chance (however small it may be), as he does not feel that the evidence from this single trial is sufficient to shake his prior belief in the null hypothesis. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):64-69 | oa_package/c1/31/PMC3016704.tar.gz |
|||||
PMC3016705 | 21234168 | CONCLUSION
ERP constitutes a millisecond-by-millisecond record of neural information processing, which can be associated with particular operations such as sensory encoding, inhibitory responses and updating working memory. Thus it provides a noninvasive means to evaluate brain functioning in patients with cognitive disorders and is of prognostic value in few cases. ERP is a method of neuropsychiatric research which holds great promise for the future. | Electroencephalography (EEG) provides an excellent medium to understand neurobiological dysregulation, with the potential to evaluate neurotransmission. Time-locked EEG activity or event-related potential (ERP) helps capture neural activity related to both sensory and cognitive processes. In this article, we attempt to present an overview of the different waveforms of ERP and the major findings in various psychiatric conditions. | Richard Caton (1842–1926), a medical lecturer at Liverpool, was the pioneer in the field of evoked potential. He observed that “feeble currents of varying direction pass through the multiplier when the electrodes are placed on two points of the external surface.” This sentence marked the birth of the electroencephalogram (EEG), though it was invented much later by Hans Berger, a German Psychiatrist, in 1929.
WHAT IS EVENT-RELATED POTENTIAL?
Event-related potentials (ERPs) are very small voltages generated in the brain structures in response to specific events or stimuli (Blackwood and Muir, 1990). They are EEG changes that are time locked to sensory, motor or cognitive events and provide a safe and noninvasive approach to studying psychophysiological correlates of mental processes. Event-related potentials can be elicited by a wide variety of sensory, cognitive or motor events. They are thought to reflect the summed activity of postsynaptic potentials produced when a large number of similarly oriented cortical pyramidal neurons (in the order of thousands or millions) fire in synchrony while processing information (Peterson et al ., 1995). ERPs in humans can be divided into 2 categories. The early waves, or components peaking roughly within the first 100 milliseconds after the stimulus, are termed ‘sensory’ or ‘exogenous,’ as they depend largely on the physical parameters of the stimulus. In contrast, ERPs generated in later parts reflect the manner in which the subject evaluates the stimulus and are termed ‘cognitive’ or ‘endogenous’ ERPs, as they examine information processing. The waveforms are described according to latency and amplitude.
DIFFERENT ERP WAVEFORMS
P50 wave
In paired-stimulus paradigms, where two identical stimuli are presented in close succession, the amount of attenuation in the neural response to the second stimulus indexes the strength of the inhibitory pathway. This paradigm has been adapted as a test of sensory gating, principally through the study of the P50 waveform. Sensory gating is crucial to an individual’s ability to selectively attend to salient stimuli and ignore redundant, repetitive or trivial information, protecting the brain from information overflow (Light and Braff, 2003). The most positive peak between 40 and 75 msec after the conditioning stimulus is the P50 (Olincy et al ., 2005). The P50 amplitude is the absolute difference between the P50 peak and the preceding negative trough (Clementz et al ., 1997). P50 can be elicited by either the “paired click” paradigm or the “steady-state” paradigm.
N100 or N1 wave
The N100 or N1 wave is a negative deflection peaking between 90 and 200 msec after the onset of a stimulus and is observed when an unexpected stimulus is presented. It reflects an orienting response or a “matching process”; that is, whenever a stimulus is presented, it is matched with previously experienced stimuli. It has maximum amplitude over Cz and is therefore also called the “vertex potential.”
P200 or P2 wave
The P200 or P2 wave refers to the positive deflection peaking between 100 and 250 msec after the stimulus. Current evidence suggests that the N1/P2 component may reflect the sensation-seeking behavior of an individual.
N200 or N2 wave
The N200 or N2 wave is a negative deflection peaking at about 200 msec after presentation of the stimulus.
There are 3 components of the N200 waveform —
N2a/ Mismatch negativity (MMN)
MMN is a negative component which is elicited by any discriminable change (Näätänen and Tiitinen, 1998) in a repetitive background of auditory stimulation (Winkler et al ., 1996). MMN represents the brain’s automatic process involved in encoding of the stimulus difference or change.
N2b
It is slightly later in latency than the N2a and appears when changes in physical property of the stimulus are task relevant.
N2c
It is the classification N2, elicited when classification of disparate stimuli is needed.
N300
N300 is a recent finding in the context of semantic congruity and expectancy.
P300
The P3 wave was discovered by Sutton et al . in 1965 and has since been a major focus of research in the field of ERP. For auditory stimuli, the latency range is 250-400 msec for most adult subjects between 20 and 70 years of age. The latency is usually interpreted as the speed of stimulus classification resulting from discrimination of one event from another. Shorter latencies indicate superior mental performance relative to longer latencies. P3 amplitude seems to reflect stimulus information, such that greater attention produces larger P3 waves. A wide variety of paradigms have been used to elicit the P300, of which the “oddball” paradigm is the most utilized: different stimuli are presented in a series such that one of them occurs relatively infrequently; that infrequent stimulus is the oddball. The subject is instructed to respond to the infrequent or target stimulus and not to the frequently presented or standard stimulus. Reduced P300 amplitude is an indicator of the broad neurobiological vulnerability that underlies disorders within the externalizing spectrum (alcohol dependence, drug dependence, nicotine dependence, conduct disorder and adult antisocial behavior) (Patrick et al ., 2006).
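As an aside for readers unfamiliar with the paradigm, the stimulus sequence itself is simple to construct. The sketch below is a hypothetical illustration (not from the article) of generating an oddball series in Python, with illustrative proportions:

```python
# An "oddball" stimulus sequence: a rare target embedded among
# frequent standard stimuli. The 80/20 split is illustrative only.
import random

random.seed(1)
n_trials = 100
p_target = 0.2  # targets are the infrequent "oddball" stimuli
sequence = ["target" if random.random() < p_target else "standard"
            for _ in range(n_trials)]
print(sequence[:10], "... total targets:", sequence.count("target"))
```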
N400
It is a negative wave first described in the context of semantic incongruity, 300–600 msec post-stimulus (Kutas and Hillyard, 1980). N400 is inversely related to the expectancy of a given word to end a sentence.
P600
In the domain of language processing, a P600 effect occurs to sentences that (a) contain a syntactic violation, (b) have a nonpreferred syntactic structure or (c) have a complex syntactic structure (Osterhout and Holcomb, 1992).
Movement-related cortical potentials
MRCPs denote a series of potentials that occur in close temporal relation with movement or movement-like activity. These may occur before, during, or after the movement, and they reflect the associated preparedness for movement in the cortex. Kornhuber and Deecke (1965) distinguished 4 components of the MRCPs, viz., (1) the Bereitschaftspotential, (2) the reafferent potential, (3) the pre-motion positivity and (4) the motor potential.
Contingent negative variation
Richard Caton in 1875 first used the term negative variation while describing electrical activity of gray matter, while Walter (1964) coined the term contingent negative variation (CNV) . CNV can be elicited by a standard reaction time paradigm (S1-S2-motor response) or only by paired stimuli without any motor response (S1-S2 paradigm). A first stimulus (S1) serves as a preparatory signal for an imperative stimulus (S2) to which the subject must make a response. In the S1-S2 interval, there are early and late CNV components. Early CNV is considered as indicator of arousal processes, and late CNV is associated with attention to the experimental task.
Post-imperative negative variation
PINV is the delay in CNV resolution, that is, negativity continues after S2. PINV is a marker of sustained cognitive activity.
EVENT-RELATED POTENTIAL CHANGES IN PSYCHIATRIC DISORDERS
Alcohol dependence syndrome
N1 P2
Alcohol-induced attenuation of N1 and P2 amplitudes has been consistently reported. The N1 amplitude was dose-dependently suppressed by alcohol, and the N1 peak latency was prolonged by the higher (0.85 g/kg) dose of ethanol, thus supporting the previous observations.
N2
There is increase in the latency of N200.
P300
Acute ethanol intake is seen to reduce P300 amplitude. However, wave abnormalities have also been seen in abstinent individuals and in the first-degree relatives of patients (Patrick et al ., 2006).
CNV, MRCP
Decreased amplitude of CNV and MRCP denoting deficits in executive functioning in alcoholic patients has been reported.
Schizophrenia
P300
One of the most robust neurophysiological findings in schizophrenia is a decrease in P300 amplitude. P300 is often smaller in amplitude and longer in latency in patients who have been ill longer. P300 latency was found to be increased in schizophrenic patients but not in their first-degree relatives (Simlai & Nizamie, 1998). In longitudinal analyses, P300 amplitude is sensitive to fluctuations in the severity of positive symptoms, independent of medication, and to the enduring level of negative symptom severity (Mathalon et al ., 2000).
P50
Diminished P50 suppression has been reported in patients with schizophrenia (Bramon et al ., 2004) and in their non-psychotic relatives (Clementz et al ., 1998).
N1, P2, N2
Schizophrenia patients have demonstrated reduced N100, P200 and N200 amplitudes (O’Donnell et al ., 2004).
MMN
There is decreased MMN amplitude, as well as abnormal MMN topographical distribution, in treatment-refractory patients with schizophrenia (Milovan, 2004).
CNV, BP
CNV amplitude was noted to be smaller and latency longer, localized to the left central region, in schizophrenia patients. BP interval and amplitude were found to be increased when compared to controls (Simlai & Nizamie, 1998). BP latency was found to be decreased in patients, denoting impairment in planning movement and decision making (Duggal & Nizamie, 1998).
Late components
There is an increase in N400 and P600 latencies in schizophrenic patients.
Bipolar affective disorder
P50
P50 suppression deficits have been found in patients with bipolar disorder with psychotic symptoms, as well as in their unaffected first-degree relatives, suggesting P50 to be an endophenotypic marker for the illness (Schulze et al ., 2007).
P300
Salisbury et al . (1999) have recently noted P300 reduction in manic psychosis. Latency prolongation and amplitude reduction were seen in chronic bipolar patients (O’Donnell et al ., 2004).
Depression
P300
Reduced amplitude of P300 has been seen in depressed patients, mainly with suicidal ideations, psychotic features or severe depression (Hansenne et al ., 1996).
NEUROTIC DISORDERS
Phobia
P300
Studies show that individuals with spider and snake phobias showed significantly larger P300 amplitudes than healthy controls when exposed to pictures of their feared objects, indicating enhanced processing of stimuli that reflect critical fear concerns (Miltner et al ., 2000).
Panic disorder
P300
An enlarged frontal P3a to distractor stimuli among patients has been reported using a three-tone discrimination task, supporting the hypothesis of dysfunctional prefrontal-limbic pathways. In addition, a longer P3b latency in drug-free patients than in unaffected controls has also been reported as possible evidence of a dysfunctional hippocampus and amygdala (Turan et al ., 2002).
Generalized anxiety disorder
ERPs elicited by threat-relevant stimuli support the existence of an attentional bias, showing larger amplitude of P300 and slow waves in response to fear-related words or pictures in subjects with high-trait anxiety or anxiety disorders when compared with healthy controls (De Pascalis et al ., 2004).
Obsessive compulsive disorder
N2, P3
OCD patients are seen to have significantly shorter P300 and N200 latencies for target stimuli and greater N200 negativity when compared with normal controls. However, there are no significant relationships between these ERP abnormalities in OCD patients and the type or severity of their OCD symptoms. Paul and Nizamie (1999) found increased P300 latency in OCD patients but no difference in amplitude.
Posttraumatic stress disorder
P50
There are reports of a reduction of the P50 suppression response in persons with posttraumatic stress disorder (PTSD) (Neylan et al ., 1999; Skinner et al ., 1999).
P300
The most common finding is reduced P300 amplitudes (Metzger et al ., 1997).
Dissociative disorder
P300
Patients showed significant reduction in the amplitudes of P300 during dissociative disorders compared with the levels at remission. The latency of P300 remained unchanged. The amplitudes of P300 might be a state-dependent biological marker of dissociative disorders.
Personality disorders
In healthy subjects, several studies have reported some relationships between N200, P300 and personality. A consistent result of these studies is that introverts exhibit higher P300 amplitude than extroverts. P300 amplitude is weakly correlated (positively) to the self-directedness dimension; and CNV, to cooperativeness. Longer N200 latency may be associated with higher harm avoidance score. N200 amplitude is negatively correlated to persistence. This indicates that lower N200 amplitude may be related to a higher persistence score. | CC BY | no | 2022-01-12 15:33:02 | Ind Psychiatry J. 2009 Jan-Jun; 18(1):70-73 | oa_package/b1/a8/PMC3016705.tar.gz |
|||||
PMC3016708 | 21062141 | Introduction
Parental care has evolved considerably across several taxa of animals ( Clutton-Brock 1991 ). In insects, parental care is not common, but it is known in different lineages. Several insect species have developed parental care that varies in its form and degree of sociality ( Tallamy and Wood 1986 ; Costa 2006 ). Guarding of eggs and early-stage nymphs is the type of parental care most frequently observed, and is believed to have evolved in response to intense arthropod predation pressure ( Costa 2006 ). Provisioning, an advanced form of parental care, has been reported for several insect species ( Scott 1998 ), and progressive provisioning, parents repeatedly transporting food to their young, has also been reported ( Filippi-Tsukamoto et al. 1995 ; Filippi et al. 2001 ). Although progressive provisioning would enhance the survival of young, there have been few reports on species showing progressive provisioning other than in Hymenoptera and Isoptera.
All earwig (Dermaptera) species studied to date exhibit parental care ( Lamb 1976 ); however, the extent of care varies greatly from species to species ( Vancassel 1984 ). For example, Tagalina papua shows only egg guarding ( Matzke and Klass 2005 ), but mothers of the hump earwig, Anechura harmandi, guard and clean the eggs and are killed and eaten by the first-instar nymphs before they disperse from the nest ( Kohno 1997 ; Suzuki et al. 2005 ). Although nearly 2,000 Dermaptera species have been described ( Haas 2003 ), parental behavior has been examined in only a handful of these species. Furthermore, the mothers of some species have been reported to provision their nymphs ( Shepard et al. 1973 ; Lamb 1976 ; Rankin et al. 1996 ; Kölliker 2007 ), but the effect of provisioning on the survival of the nymphs remains unknown.
Anisolabis maritima Bonelli (Dermaptera: Anisolabididae) is a cosmopolitan species that shows sub-social behavior in which the females tend clutches of eggs in soil burrows ( Bennett 1904 ). Mothers of this species bring food to the nest ( Guppy 1950 ). The present study examined the maternal behavior of A. maritima and focused on whether mothers provision their nymphs progressively and whether provisioning improves the survival of the nymphs. | Materials and Methods
All A. maritima individuals were caught in a field on the coast of Izumozaki, Niigata Prefecture, Japan (37° 32′ 11′′ N, 138° 42′ 10′′ E) between late April and early May in 2008 and 2009. All females were coupled with a male for 1–2 days prior to the start of the experiment. After body length was measured, the females were placed together in a polyethylene container (12 × 8 × 5 cm) with some sand and a small stone as shelter. The containers were maintained under dim light conditions, at room temperature, and under sufficient humidity. All individuals were fed turtle food pellets ad libitum. All containers holding a female with an egg mass were checked daily. When hatched nymphs were found, the containers were assigned to an experiment.
Observation of defending behavior
Females that had not yet oviposited for the first time were assigned as non-caring females. Both non-caring ( n = 20) and caring (attending nymphs, n = 24) females were approached from the front and gently touched on the back with forceps three times. The initial responses shown by the females during these disturbances were recorded.
Observation of provisioning behavior
Bottle caps (25 mm in diameter, 10 mm in depth) placed at a distance of 2–3 cm from the burrow were used as food containers. Immature (before dispersal) nymphs were not able to enter the bottle cap to eat (S Suzuki, personal observation). The food provided was 10 turtle food pellets (average 0.07 g total). Each container ( n = 16) was checked daily; the number of remaining pellets were counted, and pellets were added as necessary to again keep a total of 10. When more than half of the nymphs left the nest, or some nymphs were found in the bottle cap, the brood was considered to have dispersed.
Effects of provisioning on nymph survival
All containers holding a female with an egg mass were checked daily, and when hatched nymphs were found, the containers were assigned randomly to an experiment. In the mother-removal group ( n = 14), mothers were removed just after hatching and their nymphs were maintained without food. In the feeding group ( n = 13), mothers were removed, and 10 food pellets per day were provided in the nests as food. The leftover food was replaced every day. In the non-feeding group ( n = 14), nymphs were maintained with the mother but no food was provided. In the control group ( n = 15), nymphs were maintained with the mother and 10 food pellets per day were provided in bottle caps to allow provisioning by the mother. After eight days, the number of surviving nymphs was counted. | Results
Observation of defending behavior
When disturbed by forceps, the females showed three different response types: (1) remaining immobile, (2) counterattacking, or (3) running away. When a female ran away from its initial position before the three taps with the forceps were completed, it was recorded as “escaped.” Since caring females without disturbance always stay in the nest or cover their nymphs, remaining immobile can be regarded as a defensive behavior. Fifteen out of 20 females not attending nymphs escaped after three taps, but 19 out of 24 females attending nymphs did not (P=0.0006, Fisher's exact test, Table 1 ). Counterattacks were observed in 3 cases of females attending nymphs.
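The reported comparison can be reproduced from the counts in the text. The following Python sketch, assuming scipy (the article does not specify its software), runs Fisher’s exact test on the 2 × 2 table of escape responses:

```python
# Escape behaviour: 15/20 non-caring females escaped, versus 5/24 caring
# females. Fisher's exact test on this table should reproduce the
# reported P = 0.0006.
from scipy.stats import fisher_exact

table = [[15, 5],    # non-caring: escaped, stayed
         [5, 19]]    # caring:     escaped, stayed
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, P = {p_value:.4f}")
```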
Observation of provisioning behavior
The females did not carry any food until after hatching, when they began to carry food with their mouth to the nest. The number of instances of food carrying and dispersal days are shown in Figure 1 .
Nymphs dispersed from the nest in 5.9 ± 0.9 days (mean ± SD). There was leftover food in most nests, though the pellets were crumbled and could not be counted. Mouthpart-to-mouthpart contact was not observed between mothers and nymphs, and the nymphs ate the food themselves in the nest.
Effects of provisioning on nymph survival
Fewer nymphs were observed in the broods in both the mother-removal and non-feeding groups than in the control and feeding groups ( Figure 2 , F = 20.3, d.f = 3, p < 0.01, TukeyKramer method). | Discussion
Nesting and brood attendance are found throughout the order Dermaptera ( Costa 2006 ). Previous studies have reported food provisioning to nymphs by earwig parents in some species, based on observations of mouthpart-to-mouthpart contact ( Lamb 1976 ) and direct evidence ( Staerkle and Kölliker 2008 ). However, to the author's knowledge, no research has been conducted to examine the effect of provisioning on the survival of the nymphs. The results of the present experiments demonstrate that when mothers are removed, fewer nymphs survive. When food was provided to the nymphs whose mother was removed, however, they survived as well as when the mother was present. In contrast, fewer nymphs survived under the mother-removal treatment (no food). Although the mortality factor was not directly observed, the absence of dead nymphs in the mother-removal treatment suggests sibling cannibalism. Food consumption could be quantified only at the family-group level, confounding food provisioning with larval and female self-feeding; larval feeding and self-feeding by the mother could not be distinguished in this study. However, since nymphs before dispersal cannot enter a bottle cap to eat, this result is direct evidence of food provisioning by the mother. These results indicate that food being provided for the hatched nymphs is a prerequisite for their survival.
Progressive provisioning is well known in organisms from higher taxa such as birds and mammals ( Clutton-Brock 1991 ) and is essential for the survival of young in these altricial species. In contrast, reports of progressive provisioning are rare among insects other than species in Hymenoptera and Isoptera. For example, burrower bugs ( Parastrachia spp.) feed their nymphs fallen drupes ( Filippi et al. 2001 ; Kölliker et al. 2006 ), and crickets ( Anurogryllus spp.) breed in underground burrows to which the mother brings food for the young ( Walker 1983 ). Since A. maritima females brought food back to the nest for several days ( Figure 1 ), this behavior can be regarded as progressive provisioning. In another Dermaptera species, Forficula auricularia, females regurgitate food ( Staerkle and Kölliker 2008 ), but the nymphs of A. maritima were not observed in any mouthpart-to-mouthpart contact in the present study. Since A. maritima mothers provision food only by placing it in the nest, food allocation to individual nymphs has not been confirmed. However, since the mothers provision increasing amounts of food with increasing days from hatching ( Figure 1 ), they could adjust the food mass according to the needs of the nymphs.
The results of the feeding group indicate the self-feeding ability of the nymphs. Even in the European earwig, whose nymphs are reported to be fed directly by the mother ( Staerkle and Kölliker 2008 ), the nymphs are able to feed themselves, which is important to survival ( Kölliker 2007 ). The present study was conducted in the absence of predation risk and with artificial food. In cases where there is no food for the nymphs in the nest, the nymphs must leave the nest to feed if the mother does not provision them. Sub-social insect species showing progressive provisioning often face a high predation risk to their nymphs ( Filippi-Tsukamoto et al. 1995 ). Earwig nests also suffer high predation pressure from various animals ( Kohno 1997 ; Kölliker and Vancassel 2007 ). Many females attending nymphs stayed in the nest even when disturbed by the forceps ( Table 1 ). Since this can be regarded as defensive behavior, reduced predation pressure is expected with maternal attendance. Without maternal attendance, nymphs will suffer from both predation pressure and starvation. Filippi et al. ( 2000 ) demonstrated that in the sub-social shield bug, Parastrachia japonensis, progressive provisioning enhances nymphal survival in high predation-pressure environments by inhibiting nymphal dispersal from safe nesting sites. Staying in the nest decreases the risk of predation, and provisioning by the mother decreases the risk of starvation.
It is difficult to distinguish provisioning from young/female self-feeding. The present study confirmed provisioning by A. maritima females by providing food using a barrier that nymphs can not cross and showing an improved survival rate in the presence of food. This provides evidence in favor of the effectiveness of progressive provisioning and defensive behavior by A. maritima mothers under laboratory conditions. Food provisioning was the primary aspect of care that influenced the benefits of maternal attendance in the present study. | Associate Editor: Tigrul Giray was editor of this paper.
Provisioning the young is an important form of insect parental care and is believed to improve the survival and growth of the young. Anisolabis maritima Bonelli (Dermaptera: Anisolabididae) is a cosmopolitan species of earwig that shows sub-social behavior in which the females tend clutches of eggs in soil burrows. The defensive and provisioning behaviors of these females were examined in this study. When disturbed, maternal individuals abandoned the nest less than non-maternal individuals. Females brought food to the nest after their eggs hatched, and the survival of the nymphs was increased by provisioning. Even when mothers were removed, providing food to the nymphs increased survival as well as when the nymphs were provisioned by the mother. These results show that A. maritima mothers provision the nymphs and that this provisioning improves their survival.
Keywords | Acknowledgments
This study was supported in part by a Grantin-Aid for Scientific Research (21770018) from the Japan Society for the Promotion of Science. | CC BY | no | 2022-01-12 16:13:45 | J Insect Sci. 2010 Oct 22; 10:184 | oa_package/b3/d7/PMC3016708.tar.gz |
||
PMC3016709 | 20874600 | Introduction
The disadvantages of pesticide application in olive orchards encourage pest control to be oriented toward integrated management ( Viggiani 1989 ; Katsoyannos 1992 ; Civantos and Sánchez 1993 ; Longo 1995 ; Hilal and Ouguas 2005 ), where biological control plays an important role ( Jiménez 1984 ; Jervis et al. 1992 ; Delrio 2007 ; Hegazi et al. 2007 ), given that this agricultural ecosystem is rich in auxiliary insect fauna ( Arambourg 1986 ; De Andrés 1991 ; Blibech et al. 2005 ). However, the development of biological control requires a thorough knowledge of the entomophagous insects that could be used as control agents. A candidate genus is Elasmus , species of which parasitize different phytophagous insects that constitute major pests of the olive, such as Zeuzera pyrina, Euphyllura olivina , and especially the last stages of the larvae Prays oleae , which is a major pest of the olive throughout most of the olive-growing zones of the Mediterranean Basin ( Campos 1976 ; Arambourg 1986 ; Civantos 1999 ).
The species Elasmus steffani (Viggiani) (Hymenoptera: Elasmidae) is a gregarious idiobiont ectoparasitoid, and the females oviposit in the cocoon containing the last host larval stage. Different aspects of its biology have previously been studied ( Campos 1981 ). In some zones, it is ranked as the most important regulator of the phyllophagous generation, and the second most important in the anthophagous generation, of P. oleae , in some years reaching high parasitism rates ( Campos and Ramos 1981 ; Katsoyannos 1992 ; Bento et al. 1998 ). This Elasmidae, usually associated with the olive ( Oleae europaea ) ( Graham 1995 ; Torres 2007 ), has also been recorded within the complex of the most abundant parasitoids of Lobesia botrana on grapes in Italy ( Marchesini and Monta 1994 ).
Given the ease of raising E. steffani on a substitute host ( Redolfi and Campos 1998 ) and the fact that other species of Elasmus are being used for biological control ( Ubandi et al. 1988 ; Bastian and Hart 1989 ; Kovalenkov et al. 1991 ), the aim of this work was to study, in detail, the biology of this parasitoid raised on Ephestia kuehniella (Zeller) (Lepidoptera: Pyralidae) and to identify ways in which to enhance its production.
This study will lead to improvement of the basic knowledge on this species and provide practical information for studies concerning the use of E. steffani in programs of integrated management of olive orchards in either inundative or inoculative releases against P. oleae. | Materials and Methods
Populations of E. steffani and the substitute host E. kuehniella were obtained from laboratory-raised specimens. E. kuehniella was reared according to a method described by Celli et al. ( 1991 ), based on an insect artificial diet consisting of wheat germ and beer yeast (2:1), under laboratory conditions (20 ± 2° C, 60 ± 10% RH and 14:10 L:D). The last-stage larvae were chosen and wrapped in fine white paper, and afterward they were exposed to the E. steffani adults. In order to maintain the population size of E. steffani adults, 100 parasitoid couples (10 days old) and 300 E. kuehniella larvae were placed in wooden boxes (40 × 39 × 37 cm) with a glass cover. The larvae were renewed every 24 or 48 hours, and the adults fed on a honey and water mixture (2:1) ( Redolfi and Campos 1998 ). This rearing was done under laboratory conditions (26 ± 2° C, 60 ± 10% RH and 14:10 L:D).
All trials of this study were done at 26° C, 60% RH, and photoperiod of 14:10 L:D. All host larvae in the trials were previously wrapped in fine white paper to simulate a cocoon ( Redolfi and Campos 1998 ).
Duration of the stages of development of E. steffani
Three 2-litre glass flasks were used and placed horizontally with a round filter paper covering the bottom and closed by a nylon mesh. In each flask, 100 E. steffani couples (male and female) were placed, and they were fed on a honey and water mixture (2:1) ( Redolfi and Campos 1998 ).
After 10 days, 300 E. kuehniella in the last larval stage were exposed to the parasitoids for 24 h. The paralyzed host larvae with parasitoid eggs were placed individually in plastic Petri dishes of 8.5 cm in diameter and examined under a microscope every 8 h. Upon pupation, the parasitoid pupae were isolated individually in 30-ml glass tubes with cotton stoppers and were kept until adult emergence. Just after emergence, the adults were separated into male-female pairs and placed in 30-ml tubes, where they were fed on a honey and water mixture (2:1).
Pre-oviposition, oviposition and reproductive capacity of E. steffani
Ten pairs (female and male) of recently emerged E. steffani were placed individually in Petri dishes of 8.5 cm in diameter, and they were fed a honey-water mixture (2:1). They were exposed daily to five hosts in the last larval stage until the death of the female parasitoids. The days to oviposition, the number of eggs oviposited per day and per larva, and the number of larvae parasitized per day per female were recorded, and the means were determined. The parasitized larvae were isolated in Petri dishes.
Mating and parasitism behavior in E. steffani
During the last two trials of this study, mating and parasitism behavior were observed, and various mating and ovipositing postures of the parasitoid, as well as time of behavioural events, were recorded.
Parthenogenesis in E. steffani
A total of 10 recently emerged virgin female E. steffani were placed in a 2-litre glass flask and exposed daily to three host larvae. The parasitized larvae were isolated in Petri dishes until the emergence of the adults. The number of studied larvae was large enough to analyze the type of parasitoid parthenogenesis.
Longevity and feeding of E. steffani adults
Thirty pairs of E. steffani adults that had emerged within the previous 24 h were each assigned to one of the three following sources of food: no food, a honey-water mixture (2:1), or a honey-water mixture (2:1) plus host larvae (3/d). Each pair of adults was placed in a glass tube stopped with cotton.
Statistical analysis
The differences of duration of the developmental stages between males and females and the longevity of adults (males and females) in different feeding condition were tested for significance with the Kruskal-Wallis test (p < 0.01). Non-parametric tests were used because the data were not normally distributed, even after transformation. | Results
Duration of the developmental stages of E. steffani
No significant differences (p > 0.01) were found regarding the time required by the E. steffani males and females to complete their development from egg to adult. The duration under study conditions was approximately 11–15 days, with pupation occupying half of that time period ( Table 1 ).
Pre-oviposition, oviposition and reproductive capacity E. steffani
The mean pre-oviposition period was 8.9 ± 5.0 days, and the mean oviposition period was 30.4 ± 10.5 days. The mean reproductive capacity was 185.5 ± 62.3 eggs per female, with an average of 5.4 ± 0.9 eggs per day. The number of eggs per host larva was 4.4 ± 0.4, and the number of parasitized hosts per day averaged 1.3 ± 0.1 ( Table 2 ). The oviposition rhythm observed ( n = 10 females) reached its highest level at 35 days of age, with maximum oviposition between 15 and 40 days of age ( Figure 1 ).
Mating and parasitism behavior in E. steffani
Mating occurred in both females and males immediately after emergence. The male mounted the dorsal side of the female and made vibrating movements with its antennae, touching the antennae of the female and remaining in this position for an average of 17.4 ± 2.3 min. The copulation lasted for 2 to 5 s. In brood flasks containing 100 E. steffani pairs that had already mated, males were repeatedly seen on top of the females.
Regarding parasitic behaviour, upon detecting a host larva, the parasitoid would immediately insert its ovipositor into the host for 10 to 20 min, repeating this 4 or 5 times in zones near the head and behind the fourth pair of legs. During the insertion intervals, the parasitoid, using its legs and antennae to feel the larva, remained stationary on the host for variable time periods. The process lasted between 2.15 and 2.34 h. When the host was immobilized, the parasitoid oviposited eggs near the larva, preferentially near its ends, but rarely on the cuticle of the larva itself.
Parthenogenesis in E. steffani
The offspring of virgin females were all male, indicating that E. steffani is capable of arrhenotokous parthenogenesis, as mentioned by Clausen ( 1940 ) for other species of Elasmidae.
Longevity and feeding of E. steffani adults
Adults that were supplied with an aqueous honey solution in the absence of hosts had significantly longer longevities (p < 0.01) than those that were similarly fed and had access to hosts, and those with no food. In the treatments that provided adults with food, female longevity was three-fold (p < 0.01) that of males. The lack of feeding negatively affected survival in females (2.3 ± 0.7 days) and in males (2.2 ± 0.7 days), with survival time varying between 1 and 3 days without significant differences between sexes ( Table 3 ).
A daily feeding rhythm was perceived during the hours of light, principally between 8:00 and 18:00 h, with peak activity between 9:00 and 11:00 h. Generally the females spent more time feeding (16.3 ± 4.6 min, n = 10) than the males (7.4 ± 1.8 min, n = 10).
Female E. steffani host fed upon E. kuehniella. For this, the parasitoid paralyzed the host larva and afterward, for 20 min sucked the haemolymph directly from the points where the ovipositor had been inserted. After feeding, the zone fed upon turned light-green in color and later black. Generally, the parasitoid did not oviposit in a host that had formerly been fed upon. On rare occasions, the male parasitoid also fed on the paralyzed host in places where the female had inserted the ovipositor, but only for a brief time (5 min). The female parasitoid only used 15% of paralyzed larvae to feed. | Discussion
The lack of significant differences between males and females in terms of duration of development has been recorded also in the case of Elasmus zehntneri ( Tanwar 1990 ). This could be due to the fact that the parasitoid is gregarious, and therefore the male generally does not need additional time to search for a female.
With regard to the duration of the larval and pupal stage of E. steffani ( Table 1 ), Campos ( 1981 ) reported higher values (8.7 ± 0.3 and 11.7 ± 0.4 for larval and pupal stage, respectively), but it should be taken into account that the host was P. oleae and that the specimens were kept outside with daily variations in temperature of between 13 and 27° C. These fluctuations could prolong the duration of these developmental stages, as mentioned for E. zehntneri grown on Tryporyza nivella intacta Sn. ( Scirpophaga nivella ) ( Ubandi et al. 1988 ; Tanwar 1990 ).
The duration of egg-to-adult development reinforces the observation by Clausen ( 1940 ) concerning the developmental period of Elasmidae, which requires 10 to 16 days in Elasmus nephantidis and an average of 14.5 days for Elasmus hispidarum at an average temperature of 29° C.
The pre-oviposition period for E. steffani is long and highly varied, which also has been mentioned for E. zehntneri , which averages 4.0 ± 1.8 days (range: 2–10) ( Tanwar 1990 ).
This could be because Elasmus species, as the females of many synovigenic parasitoids, not only parasitize hosts but also feed on them to secure nutrients for the continued production of oocytes ( Rosenheim and Rosen 1992 ; Jervis and Kidd 1986 ). The decision whether to host-feed or oviposit will depend on the number of mature eggs that a parasitoid is carrying, which is the reason why the status of the ovaries may determine the duration of any preoviposition period following eclosion ( Jervis et al. 2005 ).
The mean number of eggs oviposited per female was greater for E. steffani than that recorded for other species of the genus Elasmus. That is, a range of 14 to 57 eggs was recorded for E. nephantidis ( Ramanchandra-Rao and Cherian 1927 ), 69.37 and 53 eggs for E. zehntneri ( Cherian and Israel 1937 ; Tanwar 1990 ), and 21–68 (42.70) eggs in 9 to 12 days for E. brevicornis ( Peter and David 1990 ). In these cases, the shorter duration of the adult state of the parasitoids must have had an influence. In E. steffani , the number of eggs oviposited per host larva was 4.4 ± 0.4 when the female had 4 or 5 host larvae available daily. On the other hand, in the case of E. zehntneri , the average number of eggs oviposited per larva was 49.3 ( Tanwar 1990 ). Since a host represents a limited amount of resource for gregarious parasitoids, clutch size is a variable feature of a species, because it is influenced by different factors such as host availability and size (Fellowes et al. 2007). Previous studies carried out on E. steffani showed that when the number of available hosts per female decreased daily, the clutch size increased, and at the same time, superparasitism increased ( Redolfi and Campos 1998 ). In contrast, when the host size was smaller, the clutch size was 2 ± 0.08 ( Campos 1981 ).
The daily oviposition rhythm proved uniform in its rise and fall, with only one maximum peak at 35 days of age ( Figure 1 ), without pronounced fluctuations, in comparison with other ectoparasitoid species ( Redolfi et al. 1987 ).
The results indicate that in programs of biological control, depending on the food resources within the agricultural ecosystem, females should be released at 5 to 20 days of age.
The mating behaviour was similar to that described by De Bach ( 1985 ) for this same species and by Peter and David ( 1990 ) for Elasmus brevicornis. The only notable difference with respect to the latter species is that the E. steffani male remains for a longer time on top of the female and the female moves, transporting the male as she goes. In brood flasks containing 100 E. steffani pairs that had already mated, males were repeatedly seen on top of the females, behaviour that was never observed among isolated pairs. It is quite possible that this resulted from having a high number of specimens in a limited physical space. Studies of mating behaviour can help to identify the attributes that make a given species an effective biological control agent ( Luck 1990 ).
Longevity, like fecundity, is influenced by a range of physical and biotic factors such as temperature, host density, and food source ( Jervis et al. 2005 ). The great effect that food availability exerted on the longevity of adult E. steffani indicates the need for a carbohydrate source for survival ( Table 3 ).
Sugar consumption can increase the longevity and lifetime fecundity of many species of parasitic wasps. Consequently, for these insects the availability of sugar sources in the field is important for their reproductive success ( Siekmann et al. 2001 ). In the case of unfed adults, the results do not coincide with those mentioned by Campos ( 1981 ), who established a greater longevity for males and females maintained without food, with significant differences between them. It is possible that the daily variations in the temperatures used by this author caused a longer duration, considering that at a constant high temperature, the parasitoid would more rapidly expend its energy reserves. Energy reserves, and consequently, life expectancy can decline at different rates depending on factors such as temperature and locomotory activities ( Siekmann et al. 2001 ).
The function of host-feeding may be either to obtain energy or protein and other nutrients necessary for the production of eggs ( Houston et al. 1992 ). Consumption of host hemolymph improves longevity in some host-feeding wasp species but not in others ( Jervis et al. 2005 ), and in the case of E. steffani , longevity significantly diminishes when host larvae are supplied in addition to the honey-water mixture ( Table 3 ). Why some species derive clear longevity benefits from host-feeding fluids whereas others do not is not clear, but it may have to do with interspecific differences in the nature of the nutrients consumed and with the amount of sugars present in the ingested fluids, in particular ( Giron et al. 2002 ; Rivero and West 2005 ). In addition, the act of feeding on the previously paralyzed host larva could account for the presence in the field of paralyzed P. oleae larvae without eggs, as well as for similar observations by Peter and David ( 1990 ) on larvae of Diaphania indica paralyzed by E. brevicornis. Thus, in agreement with van Alphen and Jervis ( 1996 ), E. steffani appears to have a non-destructive feeding behaviour with regard to the host, which is characteristic of gregarious parasitoids in which the female oviposits few eggs ( n = 1–2), or none, in larvae that have been paralyzed and used for food.
The greater longevity of E. steffani females compared with males under conditions of food availability has also been observed in E. zehntneri ( Tanwar 1990 ) and E. brevicornis ( Peter and David 1990 ), as well as in other species of parasitoids, which is reasonable in view of the copulating function of the male. | Elasmus steffani (Viggiani) (Hymenoptera: Elasmidae) is a gregarious idiobiont ectoparasitoid of Prays oleae (Bernard) (Lepidoptera: Plutellidae), an olive crop pest. In the substitute host Ephestia kuehniella (Zeller) (Lepidoptera: Pyralidae), the duration of the developmental stages was approximately 11–15 days. The preoviposition was 8.9 ± 5.0 days, and oviposition lasted 30.4 ± 10.5 days, with a reproduction capacity of 185.5 ± 62.3 eggs per female, for a mean of 5.4 ± 0.9 eggs per day. The oviposition rhythm reached its maximum when the parasitoid was 35 days of age. The lack of food negatively influenced survival of the adults, while those fed on a honey-water mixture lived significantly longer that those that also had access to a host as food. The female parasitoid fed upon 15% of the paralyzed larvae. The virgin female E. steffani exhibits arrhenotokic parthenogenesis.
Keywords | Acknowledgements
The authors thank H. Barroso for technical assistance and David Nesbitt for correcting the English version of the manuscript. This work was supported by a grant from Junta de Andalucía and the AECI (Mutis) provided I.R. a Doctoral Fellowship. | CC BY | no | 2022-01-12 16:13:44 | J Insect Sci. 2010 Jul 30; 10:119 | oa_package/7b/61/PMC3016709.tar.gz |
||
PMC3016710 | 20879916 | Introduction
The apple tree was introduced to Brazil in the 1960s in Fraiburgo, Santa Catarina. Since this crop was introduced in the country, farmers have faced attacks by several pests, which cause the loss of up to 100% of the harvest ( Ribeiro 1999 ). Currently the apple tree is considered the most important fructiferous tree of temperate climate cultivated in the country; it has great significance in the domestic market and for exports as well ( Silva et al. 2007 ).
Despite its recent cultivation in Brazilian lands, the national pomiculture is not only supplying the domestic market, but is also establishing itself gradually in international trade and European markets. In 2007, Brazil exported about 95,000 tons of apples to the European Union, with Santa Catarina and Rio Grande do Sul as the most productive states, accounting for about 96% of the Brazilian production of this fruit ( Agrianual 2008 ).
However, the exigencies imposed by this and other consuming markets have forced Brazilian producers to adapt to new methods of fruit production, in other words, integrated production. This system permits the production of better-quality fruits, the reduction in pesticide use, and the possibility of tracking the final product. In integrated production, there are great efforts to control pests by increasing natural factors of mortality using biological agents such as parasitoids, predators, and entomopathogens, with the focus on predators that are able to consume great quantities of prey.
Among the predators, insects belonging to the family Chrysopidae have been considered voracious organisms with strong adaptability to different agroecosystems ( Senior and McEwen 2001 ; Medina et al. 2003 ; Athan et al. 2004 ) and are widely distributed throughout the American continents, occurring from the southeast of the United States to the southern region of South America ( Albuquerque et al. 1994 ). Past research has demonstrated that Chrysoperla externa (Hagen) (Neuroptera: Chrysopidae) is an effective predator of mites on apples ( Miszczak and Niemczyk 1978 ). In Brazil, C. externa is one of the most common species of green lacewings found in agricultural crops, including the apple tree ( Freitas and Penny 2001 ). C. externa feed on harmful arthropod pests of the apple tree, such as the woolly apple aphid Eriosoma lanigerum , the green citrus aphid Aphis citricola , the San Jose scale Quadraspidiotus perniciosus , and the European red mite Panonychus ulmi ( Ribeiro 1999 ).
In this context, the use of selective pesticides, which control pests without negatively affecting populations of natural enemies, constitutes an important strategy in the integrated management of pests ( Moura and Rocha 2006 ). It is important to identify and develop selective products and to determine the factors that affect the behavior, development, and reproduction of beneficial organisms so that these products can be used in conjunction with biological methods of pest control in the apple tree crop.
The objective of this work was to assess the effects of certain pesticides used in integrated apple production in Brazil on the survival and reproduction of adults of C. externa , collected in commercial apple orchards in the towns of Bento Gonçalves (29° 10′ 29′′ S; 51° 31′ 19′′ W) and Vacaria (28° 30′ 44′′ S; 50° 56′ 02′′ W), both in Rio Grande do Sul, as well as to study possible morphological changes of C. externa eggs caused by these chemical agents via ultrastructural analysis using scanning electron microscopy. | Materials and Methods
The rearing and maintenance of both populations of C. externa were carried out in a climatic room, at 25 ± 2° C, 70 ± 10% RH, and a photoperiod of 12:12 L:D. Following the techniques described by Auad et al. ( 2001 ), they were fed UV-killed eggs of Anagasta kuehniella (Zeller) (Lepidoptera: Pyralidae).
Pesticides
Commercial formulations of abamectin 18 CE (0.02 g a.i. L -1 ), carbaryl 480 SC (1.73 g a.i. L -1 ), sulfur 800 GrDA (4.8 g a.i. L -1 ), fenitrothion 500 CE (0.75 g a.i. L -1 ), methidathion 400 CE (0.4 g a.i. L -1 ), and trichlorfon 500 SC (1.5 g a.i. L -1 ), recommended for use in integrated apple production in Brazil, were used in the bioassays with adults of C. externa . The dosage used was the manufacturer's highest recommended rate for controlling pests and diseases in apple trees. Distilled water was used as the control. The evaluated compounds and distilled water were applied over the insects using a Potter tower (Burkard Scientific Ltd., www.burkard.co.uk ) regulated at 15 lb in -2 (psi), ensuring the application of 1.65 to 1.89 mg cm -2 of aqueous pesticide solution, according to the methodology suggested by the IOBC ( Sterk et al. 1999 ; van de Veire et al. 2002 ).
Bioassays
Fifteen pairs (each pair consisting of one male and one female) of C. externa from each population, aged 0 to 24 h and selected from the rearing colony, were anesthetized with CO2 for one min, and pesticides or distilled water were then applied immediately. Although adult male and female C. externa are similar in overall size and appearance, they were sexed by looking closely at the ventral surface of the tip of the abdomen under a stereoscopic microscope (40x) as described by Reddy ( 2002 ) and Reddy et al. ( 2004 ). Males have a small rounded capsule flanked by two small projections, while females have an oval area bounding a longitudinal slit.
After application of the pesticides or distilled water, each pair was transferred to a PVC cage (7.5 cm diameter × 8 cm) lined internally with white filter paper, closed at the upper edge with organza-type cloth, supported on a plastic tray (40 cm long × 20 cm wide × 10 cm high), and fed every three days with brewer's yeast and honey in the proportion of 1:1 (v/v). The cages were kept in a climatic room, at 25 ± 2° C, 70 ± 10% RH, and a photoperiod of 12:12 L:D. Evaluations took place at 3, 6, 12, 24, 48, 72, 96, and 120 h after application to determine the mortality rate of the treated C. externa .
Six pairs of C. externa per treatment from each of the studied populations, taken from among the fifteen pairs that received pesticide application, were used to evaluate the effects of the compounds on the reproduction of this species. The evaluations began three days after the applications and continued twice a day at 12 hour intervals until the start of oviposition.
For four consecutive weeks after the start of oviposition, the number of eggs deposited was counted at three-day intervals. Ninety-six eggs (per treatment) were separated into microtitration plate compartments using a camel hair brush. The plates were closed with PVC film and kept under controlled conditions until the eggs hatched, when egg viability was evaluated. In this way, the oviposition capacity and egg viability of treated C. externa pairs were evaluated.
For the evaluation of the adult mortality rate, a completely randomized experimental design in a 2 × 7 (two populations of C. externa × seven treatments) factorial scheme was used. Five replicates were used, with each experimental plot consisting of three pairs of C. externa . For the evaluation of the effects of the compounds on oviposition capacity and egg viability, a completely randomized experimental design with a 2 × 4 factorial scheme (two populations × four treatments) was used. For the oviposition evaluation, six replicates were used, and each plot consisted of one C. externa pair; in the evaluation of egg viability, eight replicates were used, and each experimental plot was composed of 12 eggs.
Pesticides classification
The mortality rate of treated adults was corrected by Abbott's formula ( Abbott 1925 ). The pesticides were then classified based on the reduction of beneficial capacity (oviposition and egg viability) and the mortality caused to the predator, using the total effect E given by Equation 1, proposed by Vogt ( 1992 ). According to the recommendations of the IOBC, the evaluated pesticides were organized into four toxicological classes ( Sterk et al. 1999 ; van de Veire et al. 2002 ): class 1 = harmless (E < 30%), class 2 = slightly harmful (30% ≤ E ≤ 80%), class 3 = moderately harmful (80% < E ≤ 99%), and class 4 = harmful (E > 99%).
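For illustration, this classification step can be sketched in R. Equation 1 itself is not reproduced above, so the sketch assumes the total-effect formula commonly used in IOBC selectivity studies and attributed to Vogt (1992), E = 100 - (100 - Mc) × R1 × R2; the function names and example values are hypothetical.

```r
# Abbott (1925) correction: treatment mortality adjusted for control mortality (%)
abbott <- function(treated, control) 100 * (treated - control) / (100 - control)

# Assumed total-effect formula (Vogt 1992, as applied in IOBC selectivity tests):
# Mc = Abbott-corrected mortality (%), R1 = oviposition ratio treated/control,
# R2 = egg-viability ratio treated/control
total_effect <- function(Mc, R1, R2) 100 - (100 - Mc) * R1 * R2

# IOBC toxicological classes (Sterk et al. 1999; van de Veire et al. 2002)
iobc_class <- function(E) {
  if (E < 30) "class 1 (harmless)"
  else if (E <= 80) "class 2 (slightly harmful)"
  else if (E <= 99) "class 3 (moderately harmful)"
  else "class 4 (harmful)"
}

Mc <- abbott(treated = 10, control = 0)            # e.g. 10% treatment mortality
iobc_class(total_effect(Mc, R1 = 0.95, R2 = 0.90)) # hypothetical ratios
```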
Statistical analysis
The data obtained in the bioassays with C. externa adults were submitted to a two-way analysis of variance (ANOVA), and the data referring to the number of eggs deposited by female C. externa and to egg viability followed a split-plot arrangement. The means of the different treatments were compared using the Scott-Knott clustering test ( Scott and Knott 1974 ) at 5% significance when the F -test was significant, using the statistical software SAS ( SAS Institute 2001 ).
The mortality data obtained from the bioassays with C. externa adults were angular-transformed (arcsine √(x/100)) before the analysis of variance. Data on the number of eggs laid per female were transformed to √(x+1).
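A minimal R sketch of these two transformations (here x is a percentage for the mortality data and a count for the egg data):

```r
# angular (arcsine square-root) transformation for percentage mortality, x in 0-100
asin_sqrt <- function(x) asin(sqrt(x / 100))

# square-root transformation for egg counts
sqrt_plus_one <- function(x) sqrt(x + 1)

asin_sqrt(c(0, 6.7, 10, 100))  # returns values in radians, from 0 to pi/2
```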
Data referring to the oviposition of females treated with the pesticides or with distilled water (control) were subjected to model analysis using the software R ( R Development Core Team 2006 ). A GLM (generalized linear model) with a negative binomial error distribution and logarithmic link function, which corrects for overdispersion, was fitted to the oviposition response variable ( Crawley 2002 ). The following input variables were used to fit the model: C. externa population, time (in days) after the beginning of oviposition, and treatment. Residual analyses with the envelope approach, generating probability distribution graphs for the Normal (Gauss), Poisson, Binomial, and Negative Binomial (Pascal) distributions, were performed to verify how well the data fit the models ( Paula 2004 ). The best-fitting model for the collected oviposition data was chosen using the graphs plotted by the envelope approach and the AIC (Akaike Information Criterion) ( Akaike 1974 ; apud Paula 2004 ), as well as the ratio between the residual deviance and its degrees of freedom.
After the choice of the model, the necessary parameter estimates were calculated ( Table 1 ), allowing oviposition equations to be constructed for both C. externa populations and the evaluated treatments. A program was then developed in the R software ( R Development Core Team 2006 ) to compute the several possible oviposition curves of the predator, with all the equations based on the general model (Equation 2), in which population and treatment enter as 0/1 input variables. As an example, for the equation that gives female oviposition of C. externa from Bento Gonçalves treated with distilled water (control), the input variables Population, Treat2, Treat3, and Treat4 must have a value of 0.
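A minimal sketch of this modeling step in R, assuming MASS::glm.nb for the negative binomial GLM with log link; the data frame layout and the quadratic term in day are illustrative assumptions, not necessarily the paper's exact design:

```r
library(MASS)  # glm.nb: negative binomial GLM, log link by default

# ovi: data frame with eggs laid per 3-day interval, days since first
# oviposition, and population / treatment coded as factors (assumed layout)
fit <- glm.nb(eggs ~ population + treatment + day + I(day^2), data = ovi)

AIC(fit)                          # model-choice criterion referred to above
deviance(fit) / df.residual(fit)  # ratio near 1 indicates an adequate fit

# prediction for a control female from Bento Goncalves at day 15; in the
# dummy-variable form of Equation 2 this sets Population and the treatment
# indicators Treat2-Treat4 to 0
newdat <- data.frame(population = "Bento Goncalves",
                     treatment  = "control", day = 15)
predict(fit, newdata = newdat, type = "response")
```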
Ultrastructural analysis of C. externa eggs
Eggs laid by C. externa from both populations treated with abamectin or sulfur, as well as with distilled water (control), were prepared for study under scanning electron microscopy, given that these pesticides reduced egg viability during the evaluations. Twenty newly laid eggs were used per treatment; they were transferred to plastic containers (Eppendorf, www.eppendorf.com ) with capacities of 2.0 ml and subjected to a protocol for biological sample preparation, following the laboratory's routine techniques described by Borém et al. ( 2008 ). The samples were then studied under a scanning electron microscope (LEO Evo40 XVP). | Results
Six hours after the application of the pesticides, no compound had caused the death of any C. externa . However, 12 hours after application of carbaryl, fenitrothion, and methidathion, significant mortality was observed in adults from both populations, and this situation remained unchanged until the last evaluation (120 hours after the beginning of the bioassay), when these compounds had caused the death of 100% of the individuals. Sulfur and abamectin also caused mortality of 6.7% and 10%, respectively, in adults from the Bento Gonçalves population by the end of the evaluations and were innocuous to those from Vacaria. Trichlorfon was harmless to adults of both populations, and trichlorfon and sulfur did not change the mortality pattern of either population throughout the evaluation period ( Table 2 ).
Oviposition capacity of surviving C. externa treated with trichlorfon, sulfur, or abamectin was not reduced by these compounds in either of the studied populations. However, females from Bento Gonçalves treated with sulfur or abamectin showed similar variations in the mean amount of laid eggs throughout the evaluation period. Females from Vacaria had similar variations when treated with trichlorfon or sulfur ( Table 3 ).
It was also verified that the peak of oviposition for all treatments occurred near the 15th day after the beginning of oviposition, regardless of the population. The mean number of eggs at the peak varied from 101.5 to 120.2 for females from Bento Gonçalves and from 124.8 to 142.8 for females from Vacaria, corresponding to nearly 40 eggs per day.
Oviposition capacity of C. externa declined in both populations from the 27th day of oviposition onwards. In all evaluated treatments it varied from 47.5 to 63.0 eggs per female for the Bento Gonçalves population, and from 47.8 to 60.7 for the Vacaria population ( Table 3 ).
Fitting a model to the obtained data and generating equations to predict C. externa oviposition for both studied populations showed that the negative binomial (Pascal) was the best-fitting distribution, with an AIC of 4175.1 and a ratio between the residual deviance and its degrees of freedom of 443.55/425 = 1.04, considered adequate by the residual analysis.
Oviposition modeling ( Figure 1 ) showed that trichlorfon, followed by abamectin, was the most harmful compound. These compounds affected oviposition of C. externa regardless of the origin of the studied population. Sulfur allowed the most oviposition, with the mean varying from around 50 to 100 eggs every three days for females from Bento Gonçalves and from around 56 to 120 eggs for females from Vacaria. Oviposition behavior of females treated with the different pesticides was similar in both populations.
It was also observed that the C. externa oviposition estimates for each of the tested pesticides and the control showed greater oviposition capacity for females from Vacaria. This was true for both observed and predicted values ( Figure 2 ).
Nevertheless, based on the predictions of the adjusted model, there was a trend for the mean number of C. externa eggs laid, irrespective of the pesticide used, to converge for both populations at the end of the oviposition period ( Figures 1 and 2 ).
As for egg viability, it was observed that sulfur was the most damaging compound to both C. externa populations. For C. externa from Bento Gonçalves, egg viability was reduced in every evaluation except the first. Changes in egg hatch caused by sulfur were also observed through the evaluations, varying from 50% to 82% for C. externa from Bento Gonçalves and from 73% to 92% for C. externa from Vacaria ( Table 4 ).
Abamectin also negatively affected this biological parameter, but only on the 18th, 21st, and 24th days after oviposition began for C. externa from Bento Gonçalves, and in the first and second evaluations, performed three and six days after oviposition began, for the Vacaria population. In the other evaluations no differences were observed between abamectin and the control. Over the course of the evaluations, no changes over time were verified in the viability of eggs laid by C. externa treated with abamectin, regardless of the population ( Table 4 ).
Trichlorfon proved to be innocuous to C. externa , causing no reduction in the viability of eggs laid by treated females, irrespective of the day of evaluation or the studied population. Exceptions occurred in the evaluations performed three and six days after the beginning of oviposition for C. externa from Vacaria, when this pesticide resulted in egg viabilities of 86.5% and 85.4%, respectively ( Table 4 ).
Based on the mortality caused by the compounds tested on C. externa from Bento Gonçalves and Vacaria and their effects on reproductive capacity and egg viability ( Tables 2 , 3 , and 4 ), trichlorfon, sulfur, and abamectin were classified as harmless (class 1), while carbaryl, fenitrothion, and methidathion were classified as harmful (class 4) for both of the studied populations ( Table 5 ).
Ultrastructural analysis of C. externa eggs from both populations treated with sulfur or abamectin, which negatively affected egg viability, showed that these compounds changed the chorion and micropyle morphology of the eggs compared to eggs from females treated with distilled water ( Figures 3 , 4 , 5 , and 6 ). The malformation occurrence frequencies in the samples observed under a scanning electron microscope were about 67% for eggs of C. externa treated with sulfur and nearly 50% for eggs laid by females treated with abamectin.
It was also verified that some females of C. externa from both populations treated with sulfur showed malformations in the distal region of the abdomen and genitalia with the presence of dark, unidentified material ( Figure 7 ). | Discussion
The results for abamectin in the present research are similar to the outcome of Godoy et al. ( 2004 ), who also observed no significant differences in mortality rates between this compound and the control samples of C. externa .
The safety of sulfur for adult C. externa is related to the innate tolerance of this predator to acaricides and fungicides containing sulfur, since, according to Croft ( 1990 ), these compounds are considered selective for natural enemies.
Trichlorfon was observed to be innocuous to adult C. externa . This is possibly due to its inability to penetrate the integument of C. externa , as also reported by Croft ( 1990 ) for Chrysoperla carnea . However, that author also commented that C. externa has developed low-level resistance to a wide range of conventional insecticides, including several organophosphates, carbamates, and some pyrethroids, and that C. externa has widely adapted to the pesticide regimes used on apple trees. Detoxification factors can also provide selectivity to adults of Chrysoperla spp., as reported for phosmet.
Results similar to those obtained in our research with carbaryl were reported by Wilkinson et al. ( 1975 ) and Güven and Göven ( 2005 ) for adults of C. carnea , in which carbaryl caused 100% mortality and was classified as harmful to this Chrysoperla species.
Results obtained in this study for carbaryl, fenitrothion, and methidathion, which caused 100% mortality, confirmed results achieved by Grafton-Cardwell and Hoy ( 1985 ), Singh and Varma ( 1986 ), and Mizell III and Schiffhauer ( 1990 ), who observed high susceptibility of C. carnea to carbamates and organophosphates. This shows the high toxicity of these pesticides to adults of several Chrysoperla species, which may restrict their use in both integrated pest management programs and integrated Brazilian apple production.
Studies conducted by Vogt et al. ( 2001 ) and Bozsik et al. ( 2002 ) with C. carnea showed that carbaryl and malaoxon have a high inhibitory capacity on the acetylcholinesterase enzyme in this species, consistent with the high toxicity of carbaryl, fenitrothion, and methidathion observed in the present study. The authors state that prediction of acetylcholinesterase activity appears to be an important tool for measuring differences in the susceptibility or tolerance of species, or of populations of a natural enemy species, in relation to the potential side effects of a pesticide in the environment.
As for the reproductive capability of treated C. externa females, it was verified that the highest oviposition values achieved in this study were similar to those obtained by Ru et al. ( 1975 ) in studies about the biology of C. externa . It is believed that the Vacaria population presents greater reproductive potential when compared to the population from Bento Gonçalves, which must be considered when making use of C. externa in integrated pest management programs and in integrated apple production in southern Brazil. Probably the C. externa population from Vacaria is more fit because it has been regularly exposed to the evaluated pesticides before being tested in the laboratory. This population may have developed more tolerance (or resistance) to these pesticides than the Bento Gonçalves population, which has not been exposed to pesticides and has not developed resistance.
C. externa oviposition estimates ( Figure 1 ) were based on the evaluations performed up to 27 days after the beginning of oviposition (dashed vertical line); beyond that point the values are obtained from the prediction given by the adjusted model. Future research should consider a wider oviposition period, for example six or seven weeks, since studies of this species ( Núñez 1988 ; Carvalho et al. 1998 ; Silva et al. 2004 ) have shown that the oviposition period can reach up to 100 days depending on the food given to adults. Some researchers have already shown the possibility of evaluating the oviposition of C. externa subjected to pesticide application through selectivity tests lasting up to 50 days ( Bueno and Freitas 2004 ).
Viability reductions, as observed mostly in eggs from C. externa treated with sulfur, may be a side effect of this pesticide on oogenesis, possibly on the trophocytes (sister cells of the oocytes that are responsible for their nutrition). According to Chapman ( 1998 ), malformation of the trophocytes or the absorption of contaminated proteins by these cells may result in a lack of nutrients for embryos or in changes in embryo development, leading to embryo death. In this way, the pesticides may have affected such physiological events and caused the reduction in viability rates of treated eggs.
The toxicity classification obtained for sulfur in this research confirms that of Silva et al. ( 2006 ) for adult C. externa , who considered sulfur harmless to C. externa , with a total effect (E) lower than 30%. This result also matches those of Hassan et al. ( 1983 , 1987 , 1994 ) for the species C. carnea .
Silva et al. ( 2006 ) classified chlorpyrifos as harmful (class 4); this was the same classification given to fenitrothion and methidathion in the present study. Fenitrothion and methidathion are pesticides of the same chemical group as chlorpyrifos (organophosphates), which demonstrates the high toxicity of these compounds to C. externa .
In research conducted by Hassan et al. ( 1983 , 1987 ) with C. carnea using identical methods, the toxicity classifications attributed to trichlorfon, carbaryl, fenitrothion, and methidathion were the same as those given to these compounds in the present study on C. externa .
The observed changes in the external surface of the chorion of eggs from females exposed to sulfur or abamectin residues suggest that the changes might have been induced by alterations in the follicular cells responsible for the secretion of the chorion layers, since shape modifications in these cells are reflected in the chorion morphology ( Chapman 1998 ). However, changes in the constitution of the cells may also be responsible for modification of the chorion surface, since the proteins synthesized by the follicular cells serve as the basic material for chorion formation. These proteins may also affect the formation of the aeropyles, the micropyle, and other chorion pores.
It is believed that the abnormalities caused by sulfur and abamectin to both the chorion and the micropyle of eggs from treated C. externa may be responsible for the observed reduction in egg viability. According to Mazzini ( 1976 ) and Chapman ( 1998 ), alterations in any of the chorion layers may affect its permeability and, consequently, water loss, embryonic development, and egg viability. According to the same authors, abnormalities in the cellular processes responsible for micropyle formation may inhibit sperm access to the interior of the egg and interfere with its fertilization and viability.
The causes of the deformations observed at both the distal region of the abdomen and the genitalia of C. externa females from Bento Gonçalves and Vacaria treated with sulfur could not be explained by this research or found in the scientific literature.
In conclusion, sulfur and abamectin are responsible for anomalies in the chorion and micropyle of C. externa eggs. Sulfur causes malformations in the genitalia of treated females. Sulfur, trichlorfon, and abamectin are harmless, whereas carbaryl, fenitrothion, and methidathion are harmful to adults of both studied populations, according to the IOBC toxicity classification. | This research aimed to assess the toxicity of the pesticides abamectin 18 CE (0.02 g a.i. L -1 ), carbaryl 480 SC (1.73 g a.i. L -1 ), sulfur 800 GrDA (4.8 g a.i. L -1 ), fenitrothion 500 CE (0.75 g a.i. L -1 ), methidathion 400 CE (0.4 g a.i. L -1 ), and trichlorfon 500 SC (1.5 g a.i. L -1 ), as applied in integrated apple production in Brazil, on the survival, oviposition capacity, and egg viability of the lacewing, Chrysoperla externa (Hagen) (Neuroptera: Chrysopidae) from Bento Gonçalves and Vacaria, Rio Grande do Sul State, Brazil. An attempt was made to study morphological changes caused by some of these chemicals, by means of ultrastructural analysis, using a scanning electron microscope. Carbaryl, fenitrothion, and methidathion caused 100% adult mortality in both populations, preventing evaluation of these pesticides' effects on the predator's reproductive parameters. Abamectin and sulfur also affected the survival of these individuals, with mortality rates of 10% and 6.7%, respectively, for adults from Bento Gonçalves, and were harmless to those from Vacaria at the end of the evaluation. Trichlorfon was also harmless to adults from both populations. No compound reduced oviposition capacity. C. externa from Vacaria presented higher reproductive potential than those from Bento Gonçalves. In relation to egg viability, sulfur was the most damaging compound to both populations of C. externa . Ultrastructural analyses showed morphological changes in the micropyle and the chorion of eggs laid by C. externa treated with either abamectin or sulfur. These treatments may have influenced the fertilization of C. externa eggs and embryonic development. Sulfur was responsible for malformations in the distal region of the abdomen and the genitalia of treated females. When applied to adults, abamectin, sulfur, and trichlorfon were harmless, while carbaryl, fenitrothion, and methidathion were harmful, according to the IOBC classification.
Keywords | Acknowledgements
The authors thank the CNPq and FAPEMIG for the scholarships that made this research possible; Dr. Eduardo Alves and MSc. Eloísa Lopes, from the Department of Phytopathology, Federal University of Lavras, Brazil, for their great help with the ultrastructural analysis; Dr. András Bozsik, from the Department of Plant Protection, Faculty of Agricultural Sciences, Debrecen University, Hungary, and Dr. Massimo Mazzini, Department of Environmental Sciences, Tuscia University, Italy, for sending bibliographical material that helped in the discussion of the results; Brent F. Newby, B. S., Kansas State University, U.S.A., for careful revision of the manuscript's language; and two anonymous reviewers for their critical comments on an early version of the manuscript.
Abbreviation
International Organization for Biological Control | CC BY | no | 2022-01-12 16:13:44 | J Insect Sci. 2010 Jul 30; 10:121 | oa_package/59/ec/PMC3016710.tar.gz |
||
PMC3016720 | 21073346 | Introduction
Vespa velutina Lepeletier (Hymenoptera: Vespidae), a vespine wasp endemic to southeast Asia, preys on honeybees, both the native Apis cerana cerana F. (Hymenoptera: Apidae) and the introduced European Apis mellifera (Hymenoptera: Apidae) ( Matsuura and Yamane 1990 ; Tan et al. 2005 ; Tan et al. 2007 ). The wasps hawk (capture) foraging honeybees on the wing near honeybee colonies, and predation is especially fierce in autumn, when V. velutina are most populous ( Li 1993 ). While native A. cerana colonies have evolved defense strategies against V. velutina predation, the introduced A. mellifera sustains significantly greater losses than the former ( Qun 2001 ; Tan et al. 2005 ; Tan et al. 2007 ). If V. velutina come close to a honeybee nest, the guard bee cohort increases and the bees shimmer their wings; if V. velutina persist, the guard bees launch strikes to kill them by heat-balling ( Tan et al. 2005 ; Tan et al. 2007 ).
Such endothermic heat is generated by the thoracic musculature ( Esch 1960 ; Esch and Goller 1991 ; Stabentheiner et al. 2003 ; Kunieda et al. 2006 ) and facilitates pre-flight warm up ( Krogh and Zeuthen 1941 ; Heinrich 1979 ; Esch et al. 1991 ), brood incubation ( Bujok et al. 2002 ), heat balling ( Ono et al. 1987 ; Tan et al. 2005 ; Tan et al. 2007 ), and other defensive contexts ( Southwick and Moritz 1985 ). When many worker bees ball a wasp, they can kill it by raising the ball core temperature to 46° C in about 3 min ( Matsuura and Sakagami 1973 ; Ono et al. 1987 ; Tan et al. 2005 ). However, the “resting” temperatures of guard bees at a hive entrance are considerably lower than those reached during heat balling. Because a “resting” guard bee has a temperature of about 24° C, which nearly doubles to 46° C during heat balling, the hypothesis for this study was that such a physiological thermal jump must be graded and ought to be reflected in more gradual changes in the transition from simply guarding to active heat balling. The sequence of changes in the behaviour and thoracic temperatures of guard bees was examined during the transition from “resting” to poised alert, to wing-shimmering, and finally, to heat-balling attacks by A. cerana and A. mellifera against V. velutina hawking at the hive entrance. | Materials and Methods
Six colonies (three A. cerana and three A. mellifera ) of equal size, each with four combs and about 10,000 bees, were tested in autumn (September–October 2008) in an apiary at Yunnan Agricultural University, Kunming, China. Because subspecies of A. mellifera differ greatly in defensive behaviour (Hepburn and Radloff 1998), the Italian bee, A. mellifera ligustica Spinola, was used. This is the principal A. mellifera race used commercially in China.
In the bioassays, a live V. velutina wasp was suspended from a horizontally placed wire by a piece of cotton tied around its petiole. The wasp was held about 20 cm away from the entrance of a hive and could fly and move freely within the confines of the length of the cotton. Its movements would alert the guard bees. For each bee colony, the thoracic temperatures of 20 individual guard bees were measured in the absence of V. velutina as the control group, and 20 more bees were measured after presentation of the live wasps as the test group in experiment 1. In a second experiment, dead, dichloromethane-extracted, washed, and dried V. velutina served as the controls against live wasps.
The body temperatures of the guard bees were measured about 20–30 cm away from the entrance of the hive with a hand-held laser infrared digital thermometer with a resolution of ± 0.1° C (AZ @ Model 8889, AZ Instrument Corp, www.az-instrument.com.tw ). During the tests, ambient temperature was about 21–23° C. A digital video camera (Panasonic NVGS400GC, www.panasonic.com ) was placed 1 m in front of the hive entrances to record the shimmering and wasp-striking behaviour of the guard bees during the temperature measurements. Just as the guard bees were launching to strike the wasp, their instantaneous thoracic temperatures were immediately measured over an area of about 4 mm 2 . For each colony, 10 individual striking bees and 10 guard bees that did not strike were measured. | Results
When live wasps were placed at the entrance of an A. cerana hive, the thoracic temperature of guard bees increased significantly from a resting temperature of 24.3 ± 1.1° C to 29.8 ± 1.6° C during shimmering (Dependent t -test, t 59 = 21.9, p < 0.001, Figure 1 , Video ). However, exposure to wasps had no significant effect on the thoracic temperature of A. mellifera guard bees in the same test (Dependent t -test, t 59 = 0.24, p = 0.81, Table 1 ). There were significant differences for both the control and test groups between A. cerana and A. mellifera (Independent t -test, Control: t 118 = 2.6, p = 0.01; Test: t 118 = 18.1, p < 0.001; Table 1 ). When V. velutina flies near A. cerana guard bees, some attack it and engulf it in the core of a heat ball. The thoracic temperatures of guard bees just on the verge of a strike increased by 1.7 ± 1.8° C to 31.4 ± 0.9° C, which is significantly higher than that of alert but non-striking guard bees (29.7 ± 1.6° C) (Independent t-test, t 58 = 5.0, p < 0.001).
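A minimal R sketch distinguishing the two kinds of comparison reported here; the temperature vectors are hypothetical:

```r
# paired (dependent) t-test: the same A. cerana guards measured at rest and
# again while wing-shimmering after presentation of a wasp
t.test(shimmer_temp, rest_temp, paired = TRUE)

# independent two-sample t-test: striking vs. non-striking guard bees
# (different individuals)
t.test(strike_temp, alert_temp, paired = FALSE)
```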
When dead V. velutina wasps were presented to A. cerana and A. mellifera in the second test, there was no significant difference in the mean thoracic temperatures of guard bees between the control group with no dead wasp and the test group with a dead V. velutina for both A. cerana (t 118 = 0.04, p = 0.97) and A. mellifera (t 188 = 1.6, p = 0.11). | Discussion
Although heat-balling of wasps as such is well documented ( Ono et al. 1987 ; Tan et al. 2005 ), the behavioural sequence of recruiting additional bees to the guard cohort and the increase in the number of wing-shimmering guard bees (an average of 32.2 ± 3.2 bees/ball ( Tan et al. 2005 )) that raise their thoracic temperature prior to striking V. velutina had not previously been measured for either A. cerana or A. mellifera. Un-alerted guard bees of both A. cerana and A. mellifera have relatively low thoracic temperatures, about 24° C, but when hawking V. velutina approach them, unlike A. mellifera, A. cerana guard bees are immediately alerted and begin body shaking and wing shimmering. Their thoracic temperature rapidly increases by some 5.4 ± 1.9° C, and those guard bees with higher thoracic temperatures attack V. velutina more readily than those at lower temperatures. The wing-shimmering behaviour is directly associated with increasing the guard bee cohort, and may be mediated by the simultaneous release of a pheromone. Because shimmering guard bees increase their surface temperatures during wing-shimmering, this would facilitate the dispersal of any recruiting pheromones ( Stabentheiner et al. 2002 ). Likewise, during fanning A. cerana face away from the nest entrance ( Sakagami 1960 ), and this would direct any pheromonal plume backwards into the nest. However, it has been reported that A. cerana does not expose its Nasanov gland during shimmering ( Koeniger et al. 1996 ). Wing-shimmering is also interpreted as an anti-predator visual pattern disruption mechanism, similar to that of A. nuluensis ( Koeniger et al. 1996 ).
In contrast, A. mellifera guard bees do not exhibit these behavioural responses to hawking V. velutina, and there is no rapid elevation of thoracic temperature. This apparent inability to rapidly detect V. velutina and to respond defensively accounts for the greater V. velutina presence and hawking success rate at colonies of A. mellifera than A. cerana in autumn ( Tan et al. 2007 ). Moreover, A. cerana may also withdraw into its nest or use wing-shimmering, traits absent from the behavioural repertoire of A. mellifera. However, wasp-balling behaviour is exhibited by Apis mellifera cypria, which apparently kills wasps by asphyxiation ( Papachristoforou et al. 2007 ).
In any event, V. velutina preferentially hawk A. mellifera foragers when both A. mellifera and A. cerana occur in the same apiary ( Tan et al. 2007 ). The present observations suggest a reciprocal co-evolution in the prey/predator relationship between V. velutina and A. cerana, both of which are endemic to and sympatric in southeast Asia ( Li 1993 ; Tan et al. 2005 ), while A. mellifera was introduced from Europe, where there is no widespread V. velutina predation. The fact that the behavioural sequences described here for A. cerana also occur in Apis nuluensis ( Koeniger at al. 1996 ) and Apis dorsata ( Kastberger et al. 1998 ; Kastberger and Stachl 2003 ) suggests that such anti-predator behavioural adaptations may be widespread between predators and honeybees in southeast Asia. | Associate Editor: Tugrul Giray was editor of this paper
When vespine wasps, Vespa velutina Lepeletier (Hymenoptera: Vespidae), hawk (capture) bees at their nest entrances, alerted and poised guards of Apis cerana cerana F. and Apis mellifera ligustica Spinola (Hymenoptera: Apidae) have average thoracic temperatures slightly above 24° C. Many additional worker bees of A. cerana, but not A. mellifera , are recruited to augment the guard bee cohort and begin wing-shimmering and body-rocking, and the average thoracic temperature rises to 29.8 ± 1.6° C. If the wasps persist in hawking, about 30 guard bees of A. cerana that have raised their thoracic temperatures to 31.4 ± 0.9° C strike out at a wasp and form a ball around it. Within about three minutes the core temperature of the heat-balling A. cerana guard bees reaches about 46° C, which is above the lethal limit of the wasps, which are therefore killed. Although guard bees of A. mellifera do not exhibit the serial behavioural and physiological changes of A. cerana , they may also heat-ball hawking wasps. Here, the differences in the sequence of changes in behaviour and temperature during “resting” and “heat-balling” by A. cerana and A. mellifera are reported.
Keywords | Acknowledgements
Financial support was granted to Tan Ken by the Xishuangbanna Tropical Botanical Garden and Yunnan Agricultural University of China. | CC BY | no | 2022-01-12 16:13:44 | J Insect Sci. 2010 Sep 9; 10:142 | oa_package/dd/31/PMC3016720.tar.gz |
||
PMC3016757 | 21067416 | Introduction
Several workers have described the population characteristics of Phthiraptera on selected avian hosts. Saxena et al. ( 2007 ) and Gupta et al. ( 2007 ) noted the prevalence, intensity of infestation and the applicability of the negative binomial model to the frequency distribution patterns of twelve phthirapteran species occurring on house sparrows, Indian parakeets, common mynas, white breasted kingfishers and red avadavats in the district of Rampur, Uttar Pradesh, India. Rekasi et al. ( 1997 ) noted the frequency distribution of 15 species of avian lice and also reviewed 12 previously described distributions. Rozsa ( 1997 ) examined the ecological factors expected to determine the abundance of lice on birds. Reiczigel et al. ( 2005 ) recommended the determination of crowding indices to analyze parasite populations.
There are no reports on the population levels of Phthiraptera parasitizing cattle egrets. The present report furnishes information on the prevalence, intensities of infestation and frequency distribution patterns of two phthirapterans infesting this bird. Furthermore, information on the egg laying sites, patterns of oviposition and egg morphology of both species is also provided. | Materials and Methods
Seventy cattle egrets ( Bubulcus ibis L.) were trapped live during August 2004 – March 2005 in the district of Rampur. After tying the legs, each bird was critically examined (with the help of a magnifying lens). Louse-free birds were immediately released, and the infested birds were subjected to delousing using the “fair isle” method ( Saxena et al. 2007 ). Lice were transferred to 70% alcohol and separated by species, stage and sex for further analysis. The data were used to record the prevalence, mean intensity, sample mean abundance and variance-to-mean ratio of the louse populations. The exponent (k) of the negative binomial distribution and the index of discrepancy (D) were estimated with the software developed by Rozsa et al. ( 2000 ). The goodness of fit between observed and expected frequencies (negative binomial) was determined by the χ 2 test. Birds heavily infested with each species were critically examined to record the number of eggs laid on the feathers of different regions of the body. Certain egged feathers were gently cut to record the patterns of oviposition under a stereozoom trinocular microscope. A few eggs were teased out and examined by SEM using the methods described by Gupta et al. ( 2009 ). | Results
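A minimal R sketch of these descriptive and aggregation statistics, using the moment estimate of the negative binomial exponent k (the software of Rozsa et al. 2000 may use a different estimator, so values can differ slightly); the counts vector is hypothetical:

```r
# per-host counts: e.g. 58 zero counts plus the counts from the 12 infested
# birds in the A. expallidus sample (infested_counts is hypothetical)
counts <- c(rep(0, 58), infested_counts)

prevalence     <- 100 * mean(counts > 0)
mean_abundance <- mean(counts)                # sample mean abundance
mean_intensity <- mean(counts[counts > 0])    # mean lice per infested host
vmr            <- var(counts) / mean(counts)  # variance-to-mean ratio

# moment estimate of the negative binomial exponent k
m <- mean(counts); s2 <- var(counts)
k <- m^2 / (s2 - m)

# chi-square goodness of fit of observed frequencies to the negative binomial
# (in practice, sparse tail classes would be pooled before testing)
obs <- as.numeric(table(factor(counts, levels = 0:max(counts))))
p   <- dnbinom(0:max(counts), size = k, mu = m)
chisq.test(obs, p = p, rescale.p = TRUE)
```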
One ischnoceran species, Ardeicola expallidus (Blagoveshtchensky) (Phthiraptera: Philopteridae), and one amblyceran species, Ciconiphilus decimfasciatus (Boisduval and Lacordaire) (Menoponidae), were recorded from the seventy cattle egrets during the survey work.
Population characteristics
A. expallidus : A total of 633 specimens were collected from 12 infested birds (prevalence 17.2%; sample mean abundance 9.0; range of infestation 14 – 120; mean intensity 52.8). The frequency distribution pattern was skewed (variance-to-mean ratio 71.5) and the observed frequencies failed to correspond to the negative binomial distribution (χ 2 = 64.7, P > 0.05; exponent of negative binomial 0.04; D of Poulin 0.88). Females outnumbered males in the natural population (male:female ratio 1:1.2), while nymphs dominated over adults (adult:nymph ratio 1:1.2). The ratio of the nymphal population (first, second and third nymphal instars) remained 1:0.7:0.5.
C. decimfasciatus : The total number collected from 29 infested hosts was 2993 (prevalence 41.4%; sample mean abundance 42.8; range of infestation 2 – 241; mean intensity 103.2). The frequency distribution was of the hollow-curve type (variance-to-mean ratio 126.9). The negative binomial was not found to be a good fit (χ 2 = 35.9, P > 0.05; exponent of negative binomial 0.1; D of Poulin 0.76). The sex ratio was female biased (male:female ratio 1:1.2). The nymphal population was slightly larger than the adult population (adult:nymph ratio 1:1.1). The ratio of the three nymphal instars was 1:0.8:0.5.
Egg laying sites and patterns
The ischnoceran louse A. expallidus showed restricted oviposition sites on the host body (73% wings, 12% tail, 9% abdomen, 3% breast, 2% nape and 1% neck). The eggs were laid inside the furrows, between the barbs and near the rachis ( Plate I , 1). As many as seven eggs were found lined one behind the other in a single furrow. The eggs were inclined at 30–50° with respect to the rachis. The maximum number of eggs encountered on a single feather was 180.
The amblyceran louse C. decimfasciatus exhibited more or less widespread oviposition sites on the host body (45% breast, 31% abdomen, 9% back, 8% legs, 4% neck, 2% nape and 1% tail). Eggs were laid on the lateral plumulaceous portion of the vane. This louse showed a tendency to lay fresh eggs near already laid eggs; thus, eggs were laid in groups in a somewhat “grape bunch” pattern ( Plate I , 2, 3). Eggs were inclined at 25–40° and were glued at the rear end. More than 500 eggs were counted on a single feather.
Egg morphology
The egg chorion of A. expallidus (length 0.9 – 1.0 mm, width 0.17 – 0.19 mm) bears very prominent elongated hexagonal ridges ( Plate II , 1). The opercular disc of the egg also bears similar (but fainter) ridges. A thick, rod-like, short polar thread arises from the lateral side of the operculum ( Plate II , 3, 4). The stigma is rosette-like in appearance (0.021 mm in diameter) ( Plate II , 5). The egg of C. decimfasciatus is ovoid in shape (0.6 – 0.7 mm in length and 0.18 – 0.20 mm in width) ( Plate II , 2). The egg chorion is smooth (i.e., devoid of sculpturing/ornamentation). The operculum is hat-shaped and lacks a polar thread ( Plate II , 6). The opercular disc bears hexagonal marks. Eleven to fifteen button-shaped micropyles are arranged along the opercular rim. The stigma has a beehive-like appearance (0.033 mm in diameter). | Discussion
Two phthirapteran species ( A. expallidus and C. decimfasciatus ) are known to occur on cattle egrets ( Price et al. 2003 ). The mean intensity of C. decimfasciatus on Indian cattle egrets was very high (103.2). This amblyceran louse is haematophagous in nature, and the crops of adults and nymphs were found full of host blood. Haematophagous lice can cause skin lesions that may become sites of secondary infection, as well as irritation, restlessness, reduced egg production and weight loss in the infested hosts ( Wall and Shearer 2001 ; Mullen and Durden 2002 ). Furthermore, they may also act as reservoirs and transmitters of pathogens responsible for infectious diseases ( Price and Graham 1997 ).
The prevalence of the two lice on cattle egrets (17.2 – 41.4%) was not especially high in comparison with that of other species (12.5 – 97.5%) ( Saxena et al. 2007 ; Rekasi et al. 1997 ; Rozsa 1997 ). However, the mean intensity appeared to be high (52.8 for A. expallidus and 103.2 for C. decimfasciatus ) in comparison with the other species (0.26 – 37.7) examined by the above mentioned workers.
Out of the 27 cases analyzed by Rekasi et al. ( 1997 ), the frequency distribution of 19 species conformed to the negative binomial model. The negative binomial was found to be a good fit in only one case (out of 12 species) by Saxena et al. ( 2007 ) and Gupta et al. ( 2007 ). In the present case, the frequency distribution of both species was aggregated but failed to correspond to the negative binomial model. However, the significance of the frequency distribution lies not in the statistical pattern itself, but in the underlying ecological factors that are responsible for generating the non-random distribution of parasites ( Randolph 1975 ). Workers like Crofton ( 1971 ) and Randolph ( 1975 ) have postulated a series of situations that might give rise to the contagious distribution of parasites (i.e. non-random distribution of hosts, resistance to infestation by previously infested hosts, seasonal variation in the infestation levels of parasites, and non-random differences in host behavior and physiology, such as breeding success and moult). It is difficult to identify the factor that might account for the observed distribution of parasites on cattle egrets. Since avian lice exhibit seasonal variation in population levels, the population of nymphs may vary from time to time. Apart from season, many other factors can affect population structure ( Marshall 1981 ).
The nymph population had a slight edge over the adult population in 9 of the 10 species examined by Saxena et al. ( 2007 ), while adults dominated over the nymphal population in the 2 species studied by Gupta et al. ( 2007 ). In the present case, the proportion of nymphs remained slightly higher than that of adults (a sign of an expanding population).
In the case of both cattle egret lice, the sex ratios were female biased, as expected. In all of the 12 species examined by Saxena et al. ( 2007 ) and Gupta et al. ( 2007 ), the male:female ratio remained between 1:1.1 and 1:1.65. Sampling bias (due to the small size of males) and the unequal longevity of the two sexes have been considered responsible for sex ratio biases ( Marshall 1981 ). Furthermore, avian lice exhibit considerable diversity with respect to the pattern of egg laying on body feathers ( Marshall 1981 ). The same has been found to be true for the two avian lice infesting cattle egrets.
Avian lice exhibit certain distinctive features (on or within the chorionic shell) in the form of markings, sculpturing, ornamentation or projections on the eggshell. Balter ( 1968 a and b ) remarked that louse egg morphology could be used as a guide to louse taxonomy. Kumar et al. (2004) noted that the eggshell markings of 3 species of Lipeurus differed ( L. heterographus has distinct hexagonal ridges; L. lawrensis tropicalis has a chorion pitted with faint hexagonal ridges; L. caponis has granular protuberances). Gupta et al. ( 2007 ) showed that the eggshells of selected species of Menacanthus differ in the number, location and nature of apophyses. Likewise, the eggshells of selected species of Brueelia differ in the presence of a polar thread as well as in the number and disposition of micropyles. However, the eggshells of other species of Ardeicola and Ciconiphilus have not yet been studied to provide a comparison. | The prevalence, intensities of infestation, range of infestation and population composition of two phthirapteran species, Ardeicola expallidus Blagoveshtchensky (Phthiraptera: Philopteridae) and Ciconiphilus decimfasciatus Boisduval and Lacordaire (Menoponidae), on seventy cattle egrets were recorded from August 2004 to March 2005 in India. The frequency distribution patterns of both species were skewed but did not correspond to the negative binomial model. The oviposition sites, egg laying patterns and the nature of the eggs of the two species were markedly different.
Key Words | Acknowledgements
We thank two anonymous reviewers for fruitful comments on an earlier draft of the paper; the Principal, Govt. Raza P. G. College, Rampur, India, for laboratory facilities; Prof. E. Mey (Naturhistorisches Museum in Thuringer Landesmuseum Heidecksburg, schlobbezirk 1, D-07407 Rudolstadt Bundesrepublik, Germany) for the identification of lice; and the Department of Science and Technology, India, for providing financial support to Dr. A. K. Saxena in the form of project no. SP/ SO/ AS-30/ 2002. | CC BY | no | 2022-01-12 16:13:44 | J Insect Sci. 2010 Sep 27; 10:163 | oa_package/c5/e4/PMC3016757.tar.gz
||
PMC3016758 | 21062140 | Introduction
Infestation with the louse, Pediculus humanus capitis De Geer (Phthiraptera: Pediculidae), is one of the most common parasitic infestations of humans worldwide ( Burgess 2004 ). Children 3–12 years of age are the most affected group in both developed and developing countries. The main symptoms associated with infestation are constant itching, scalp irritation and social sanctioning. In rare cases, the itch-scratch cycle can lead to secondary infection with impetigo and pyoderma ( Mumcuoglu et al. 2009 ). Transmission of head lice occurs mainly by direct host-to-host contact ( Takano-Lee et al. 2005 ). Moreover, Falagas et al. ( 2008 ) reported that head louse infestation has been increasing worldwide due to the lack of effectiveness of pediculicides.
Traditionally, the main treatment to control head lice has been chemical, including a wide variety of neurotoxic synthetic insecticides such as DDT, lindane, malathion, carbaryl, permethrin and δ-phenothrin ( Burgess 2004 ). The repeated overuse of these products has resulted in the selection of resistant populations of head lice in several countries, including Argentina ( Mumcuoglu et al. 1995 ; Picollo et al. 1998 ; 2000 ; Kasai et al. 2009 ). There is strong consumer pressure against insecticide use that impacts health, the food supply, water and the environment ( Isman 2008 ). Thus, there is an urgent need to find and develop new pediculicide substances. Plant essential oils and their constituent compounds, such as the monoterpenoids, seem to be good candidates because many are easily extractable, are biodegradable, and are very effective against a wide spectrum of insect pests ( Isman 2000 ; Rahman and Talukder 2006 ), including head lice ( Priestley et al. 2006 ; Rossini et al. 2008 ). In a previous study, we reported the fumigant and repellent activity against head lice of many native aromatic plants belonging to six botanical families ( Toloza et al. 2006 ). To continue this work, the purpose of the current study was to assess the insecticidal activity of essential oils from native and cultivated aromatic plants of Argentina against permethrin-resistant head lice. | Methods and Materials
Insects
Head lice were collected from the heads of 2,120 infested children 6–13 years old, using a fine-toothed anti-louse metallic comb. Lice were obtained from three elementary schools (HB, PR and E14) located in Buenos Aires, where a topical method indicated high levels of resistance to permethrin (71.42, 35.37 and 33.33, respectively) (Toloza AC, unpublished). Briefly, this method consisted of the topical application of serial dilutions of permethrin in acetone. The dilutions were applied to individual lice with a 5-μl syringe, each louse being treated with 0.1 μl of the solution on the dorsal abdomen. Each concentration was replicated at least three times using 10 adults per replicate. Once collected, head lice were transported to our laboratory according to Picollo et al. ( 1998 , 2000 ). The protocol for lice collection was approved by the ad hoc committee of the Centro de Investigaciones de Plagas e Insecticidas (CIPEIN, Buenos Aires, Argentina) and archived in our laboratory. After collection, head lice were maintained without feeding in an environmental chamber (Lab-line Instruments, www.lab-line.com ) at 18 ± 0.5°C and 70–80% RH in darkness.
Essential oils
Twenty-five species of native and exotic plants in thirteen different families were collected in the spring of 2006 and 2007 from different regions of Argentina. Four of these plants, Aloysia citriodora, Satureja parvifolia, Baccharis salicifolia and Chenopodium ambrosioides , were the most effective. They were obtained from different environmental areas, which allowed them to be separated into different chemotypes.
Voucher specimens were deposited at Jujuy Herbarium (Index herbarium code JUA) and INTA EEA La Rioja Herbarium (CHAM). The species are listed by family in Table 1 and the Appendix.
Dried leaves of each individual species were hydrodistilled in a Clevenger-like apparatus for 1 h. The oils obtained were dried over anhydrous sodium sulphate (Merck, www.syngentacropprotection.com ) and stored in a refrigerator until analysis.
Gas chromatography (GC)
Analyses of essential oils were performed on a Shimadzu GC-R1A (FID) gas chromatograph ( www.shimadzu.com ), fitted with a 30 m × 0.25 mm (0.25 μm film thickness) fused silica capillary column coated with a 5% phenyl 95% dimethylpolysiloxane phase (non-polar DB-5 column). The GC operating conditions were as follows: oven temperature programmed from 40–230° C at 2° C/min, injector and detector temperatures 240° C. The carrier gas was nitrogen at a constant flow of 0.9 ml/min, and 0.3 μl of each material was injected into the chromatograph. The constituents of the essential oils were identified on the basis of: (1) their GC retention indices with reference to a homologous series of n-alkanes (C12 – C25); (2) comparison of their retention times with those of pure authentic samples from the Sigma and Fluka companies; (3) peak enrichment on co-injection with authentic standards wherever possible; (4) GC-MS library searches (Adams and NIST); and (5) visual inspection of the mass spectra from the literature, for confirmation. GC-MS analyses were performed with a Perkin Elmer Q-700 ( www.perkinelmer.com ) equipped with an SE30 capillary column (30 m × 0.25 mm; coating thickness 0.25 μm). The analytical conditions were: oven temperature from 40° C to 230° C at 2° C/min, helium as carrier gas at a constant flow of 0.9 ml/min, and an ionization source at 70 eV.
Bioassay
The method of Toloza et al. ( 2006 ) was employed to evaluate the fumigant activity of the essential oils. Briefly, it consisted of an enclosed chamber (9 cm diameter Petri dish) in which a saturated microatmosphere was created by evaporation of the essential oil, thereby inactivating the lice. A 60 μl drop of pure essential oil was deposited on a micro coverglass within the chamber. The control consisted of the same experimental unit without the addition of any substance. Fifteen adult insects (males and females) were placed in each chamber and exposed to the vapors of the essential oils. Lice were observed for evidence of knockdown every 5 min for 60 min. The criterion for knockdown was an insect remaining on its back with no leg movements. Once used in a given assay, the insects were discarded. Three replicates were made for each tested essential oil. During each study, the assembled units were kept at 28 ± 1° C and 60 ± 5% RH.
Statistical analysis
Probit analysis ( Litchfield and Wilcoxon 1949 ) was used to estimate the time in minutes to knockdown of 50% of the exposed insects in each experimental unit (KT 50 ), using POLO Plus v2.0 (LeOra Software 2002). Essential oils that did not show evidence of efficacy within 60 min were considered to have a KT 50 > 60 min and were not included in further statistical analysis. Samples for which the 95% fiducial limits did not overlap were considered to be significantly different. | Results
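A minimal R sketch of an analogous KT 50 estimate, assuming a probit regression of the proportion knocked down on log10(exposure time) and MASS::dose.p; this approximates, but is not identical to, the POLO Plus procedure, and the data layout is hypothetical:

```r
library(MASS)  # dose.p: effective-dose estimation from a fitted model

# kd: data frame with exposure time (minutes), number knocked down, and
# number of lice exposed per observation (hypothetical layout)
fit <- glm(cbind(down, exposed - down) ~ log10(minutes),
           family = binomial(link = "probit"), data = kd)

kt50 <- dose.p(fit, p = 0.5)  # estimate (with SE) on the log10(minutes) scale
10^as.numeric(kt50)           # back-transformed KT50 in minutes
```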
Significant differences in fumigant activity against head lice were found among the essential oils from the native and exotic plant species ( Table 1 ). Eighteen of the twenty-five studied essential oils (72.0%) were from plants native to Argentina. It is important to note that 75% of the effective essential oils (KT 50 < 60 min) were from plants native to Argentina. On the basis of effectiveness, the essential oil from the native Cinnamomum porphyrium was the most effective (KT 50 = 1.12 min), followed by A. citriodora (chemotype 2) and Myrcianthes pseudomato , with KT 50 values of 3.02 and 4.09 min, respectively. There were significant differences among the studied chemotypes ( Table 1 ). For example, A. citriodora (chemotype 2), S. parvifolia (chemotype 2) and C. ambrosioides (chemotype 1) showed pediculicidal action, with KT 50 values of 3.02, 32.06 and 42.03 min, respectively, in contrast to A. citriodora (chemotype 1), S. parvifolia (chemotype 1) and C. ambrosioides (chemotype 2), which possessed KT 50 > 60 min. Both chemotypes of the B. salicifolia essential oil showed no action against head lice. | Discussion
The present study shows that essential oils from some Argentinean aromatic plants have fumigant activity against head lice. In comparison with our previous work ( Toloza et al. 2006 ), new botanical families have been examined, including Loganiaceae, Asteraceae, Monimiaceae, Apiaceae, Cupressaceae, Euphorbiaceae, Fabaceae and Schizaeaceae. This is the first study of the pediculicidal efficacy of these botanical families. An interesting result was the fumigant activity of the three most effective essential oils, from C. porphyrium, A. citriodora (chemotype 2) and M. pseudomato , which were 25.7-, 9.5-, and 7-fold more toxic than the oil of Buddleja mendozensis . Yang et al. ( 2005 ) studied the fumigant activity of an essential oil from another plant of the same genus, Cinnamomum zeylanicum , which was also shown to be effective against head lice. However, a comparison with our results was not possible due to differences in methodology.
Concerning the variation in the chemical composition of the essential oils, it is important to note that the differences found in the fumigant efficacy of the oils of A. citriodora, S. parvifolia and C. ambrosioides are related to their chemical composition. For example, the two chemotypes of C. ambrosioides had very different chemical compositions. Chemotype 1 possesses as its main compounds trans -carveol (42.4%), trans -pinocarvyl acetate (22.4%) and cis -carveol (10.6%), while chemotype 2 has ascaridol as its main compound (99.4%). However, other factors such as interactions among the constituents could also affect the biological activity of the whole oil ( Burgess 2004 ). Both chemotypes of S. parvifolia had similar chemical compositions, with piperitone and piperitenone oxide as the main compounds. However, chemotype 2 had a higher proportion of piperitone than chemotype 1 (46.0 and 41.9%, respectively) and a lower proportion of hydrocarbons and oxygen-containing sesquiterpenes ( Dambolena et al. 2009 ). These differences were likely responsible for the effectiveness of chemotype 2 against head lice.
Several well-documented studies show that plant extracts can be used as medicinal products against a broad spectrum of ectoparasites of humans and domestic animals ( Semmler et al. 2009 ; Burgess 2009 ). However, most of these products are derived from fixed oils rather than essential oils. For example, a product containing a neem seed extract was highly effective against head lice in both in vivo and in vitro tests ( Heukelbach et al. 2006 ; Abdel-Ghaffar and Semmler 2007 ). Recently, Burgess et al. ( 2010 ) showed in a clinical trial the superiority of a spray containing coconut, ylang ylang, and anise oils over a permethrin lotion against head lice.
Our study indicates that certain essential oils from local plants of Argentina were highly effective in the vapor phase against head lice. However, in vitro efficacy tests of botanical extracts are only the first step of research, and much work is needed before they could be used in a commercial product. The incorporation of excipients (alcohols, etc.) that increase the stability of essential oils is of great concern, since essential oils are highly volatile and the effectiveness of the product can decay within hours if the formulation is incorrect. For instance, active ingredients that were effective in vitro may show low or no activity against lice when incorporated into a liquid formulation, because certain adjuvants or excipients can affect the insecticidal activity once they are combined in a formulation. Once the vehicle base and the excipients are selected, a battery of tests for acute and chronic toxicity (e.g., burning sensation, skin irritation) is needed. A final step should assess the in vivo efficacy of the product (i.e., in clinical trials).
The present work shows that basic research on the toxicity of essential oils against head lice is only a starting point; commercial use should be considered only once the steps mentioned above are completed. | Infestation with the head louse, Pediculus humanus capitis De Geer (Phthiraptera: Pediculidae), is one of the most common parasitic infestations of humans worldwide. Traditionally, head lice have been controlled with chemical treatments based on a wide variety of neurotoxic synthetic insecticides. The repeated overuse of these products has resulted in the selection of resistant populations of head lice. Thus, plant-derived insecticides such as essential oils seem to be viable alternatives, as some have low toxicity to mammals and are biodegradable. We determined the insecticidal activity against permethrin-resistant head lice of 25 essential oils belonging to several botanical families present in Argentina. Significant differences in fumigant activity were found among the essential oils from native and exotic plant species. The most effective essential oils were Cinnamomum porphyrium, followed by Aloysia citriodora (chemotype 2) and Myrcianthes pseudomato, with KT50 values of 1.12, 3.02, and 4.09 min, respectively. The results indicate that these essential oils are effective and could be incorporated into pediculicide formulations to control head lice infestations once proper formulation and toxicological tests are performed.
Keywords | Acknowledgements
We thank the authorities of the elementary schools where head lice were collected. This investigation received financial support from Agencia Nacional de Promoción Científica y Tecnológica (Argentina), Laboratorio Elea (Buenos Aires), and CONICET (Argentina). We thank two anonymous referees for their helpful comments. | CC BY | no | 2022-01-12 16:13:45 | J Insect Sci. 2010 Oct 22; 10:185 | oa_package/79/17/PMC3016758.tar.gz |
||
PMC3016759 | 14531383 | Keywords: | To the Editor: Argentina has the highest incidence of hemolytic uremic syndrome (HUS) in the world, with 10.4 cases per 100,000 children <5 years of age reported in 2001. HUS is the leading cause of acute renal failure in children ( 1 ); chronic renal failure, ranging from mild to serious, develops in 20% to 35% of patients, and HUS is the second leading cause of chronic renal failure in Argentina ( 2 , 3 ). Recently, evidence of Shiga toxin–producing Escherichia coli (STEC) infection was found in 59% of Argentine HUS cases; O157:H7 was the predominant serotype isolated ( 4 ). Although outbreaks of E. coli O157:H7 have been linked to eating contaminated ground beef ( 5 ), the organism is rarely isolated from the implicated meat, and the sources of infection for sporadic cases have rarely been identified. We report a sporadic HUS case linked to the consumption of home-prepared hamburger contaminated with E. coli O157.
A 2-year-old girl was brought to the emergency room of the Hospital Nacional de Pediatría “Prof. Dr. Juan Garrahan” in Buenos Aires on April 26, 2002, with a 1-day history of bloody diarrhea. Results of a physical examination were normal, and a stool culture was requested. The patient was sent home with dietary and general instructions. As watery diarrhea persisted with vomiting and fever, the girl was brought in again 3 days later. At that time, she exhibited moderate dehydration, pallor, drowsiness, tachycardia, a tender and tense abdominal wall, a generalized seizure lasting 10 to 15 min, and a history of oligoanuria for the preceding 48 h. Blood pressure was 128/67 mm Hg. The child was hospitalized with a presumptive diagnosis of HUS and anuric renal failure.
Initial laboratory findings included the following: hematocrit, 26%; hemoglobin level, 8.8 g/dL; leukocyte count, 34,800/mm3; segmented neutrophil count, 29,928/mm3; platelet count, 91,000/mm3; serum glucose, 160 mg/dL; blood urea nitrogen (BUN), 268 mg/dL; serum creatinine, 6.3 mg/dL; albumin, 1.7 g/dL; uric acid, 14.8 mg/dL; calcium, 6.9 mg/dL; phosphorus, 6.7 mg/dL; magnesium, 2.0 mg/dL; sodium, 113 mEq/L; potassium, 7.6 mEq/L; pH 7.28; bicarbonate, 10 mmol/L; base excess, –14.9 mmol/L. Chest x-ray findings were normal, with a cardiothoracic index of 0.5; results of an abdominal sonogram were normal. A sonogram of the renal system showed kidneys of normal shape and size with increased echogenicity. A brain scan showed nonspecific brain atrophy.
The clinical findings and the laboratory features of microangiopathic hemolytic anemia, thrombocytopenia, and acute renal failure were consistent with the diagnosis of HUS. The patient remained anuric for 17 days and required 17 peritoneal dialysis procedures and six infusions of packed red blood cells. One month after the acute period, she still had elevated BUN and serum creatinine levels and massive proteinuria.
The rectal swab sample collected on April 26 was routinely cultured for E. coli , Salmonella , Shigella , Yersinia , Aeromonas , Plesiomonas , Vibrio, and Campylobacter species. Sorbitol nonfermenting colonies were recovered on sorbitol-MacConkey (SMAC) agar (Difco Laboratories, Detroit, MI) and SMAC supplemented with cefixime (50 ng/mL) and potassium tellurite (25 mg/mL) (CT-SMAC). The bacterial confluent growth zones of both SMAC and CT-SMAC were positive for stx 2 and rfb O157 genes by multiplex polymerase chain reaction (PCR) using the primers described by Pollard et al. ( 6 ) and Paton et al. ( 7 ), respectively. The E. coli O157 isolates were identified by standard biochemical methods and serologic tests by using specific O157 antiserum (INPB-ANLIS “Dr. Carlos G. Malbrán”) and sent to the Servicio Fisiopatogenia as National Reference Laboratory (NRL) for further characterization.
As part of the case-control study conducted in the pediatric hospital to identify the risk factors associated with the STEC infection, parents of the 2-year-old girl were interviewed with a standardized questionnaire 8 days after onset of symptoms. Information was collected about her clinical illness, potential exposures in the 7 days before the onset of diarrhea, and demographic issues. Her parents reported that on April 25 the girl had eaten a home-prepared hamburger, made from ground beef purchased from a local market. No other family members reported diarrhea.
Three days after the interview, on May 6, a formal complaint was presented by the mother at the Division of Public Health of Lanús, in the southern area of Buenos Aires, where the family lives. The frozen leftover ground beef from the same package used to make the hamburgers was provided by the child’s family and processed at the Laboratorio Central de Salud Pública.
A 65-g portion of the ground beef was incubated overnight at 42°C in 585 mL of modified E. coli medium broth containing novobiocin (final concentration, 20 μg/mL). The sample was positive by the E. coli O157 Visual Immunoassay (Tecra Internacional Pty. Ltd., French Forest NSW, Australia) ( 8 ). Immunomagnetic separation was performed with 1 mL of the culture, according to the instructions of the manufacturer (Dynal, Inc., Oslo, Norway). The concentrated sample was plated onto CT-SMAC and O157:H7 ID medium (bioMérieux, Marcy l’Etoile, France). Up to 20 sorbitol-nonfermenting colonies were selected, confirmed as E. coli O157, and sent to NRL.
At NRL, both clinical and ground beef O157 isolates were confirmed as E. coli O157:H7, susceptible to all of the antibiotics assayed, as previously described ( 9 ). Genotypic characterization showed that the isolates harbored stx 2, eae, and EHEC- hly A genes.
To establish their clonal relatedness, the strains were characterized by subtyping methods ( 9 ). That the strains were identical was confirmed by a shared, unique pulsed-field gel electrophoresis (PFGE) pattern with the restriction enzymes XbaI and AvrII and by the same phage type (4). In addition, both strains were characterized as stx2/stx2vh-a by PCR–restriction fragment length polymorphism.
To our knowledge, this is the first HUS case in our country in which the source of infection was identified. No investigation was conducted to trace back the source of the ground beef. This study illustrates the importance of the surveillance of STEC infections and the usefulness of molecular subtyping techniques, such as PFGE and phage typing, to determine the relatedness of strains and assess epidemiologic associations.
The public should be made aware that hamburgers, even when prepared at home, can be a source of infection. A primary strategy for preventing infection with E. coli O157:H7 is reducing risk behaviors through consumer education ( 10 ). | Acknowledgments
We thank Patricia Griffin for her review and helpful comments on an earlier draft.
This work was supported by grants from Centers for Disease Control and Prevention (USA) and Fundación Alberto J. Roemmers (Argentina). | CC BY | no | 2022-01-25 23:38:27 | Emerg Infect Dis. 2003 Sep; 9(9):1184-1186 | oa_package/42/bd/PMC3016759.tar.gz |
|||||
PMC3016760 | 14531381 | To the Editor : Severe acute respiratory syndrome (SARS) is a recently recognized infectious disease caused by a novel human coronavirus (SARS-CoV) ( 1 ). The first case of SARS, diagnosed as communicable atypical pneumonia, occurred in Guangdong Province, China, in November 2002. Thousands of patients with SARS have been reported in over 30 countries and districts since February 2003.
SARS is clinically characterized by fever, dry cough, myalgia, dyspnea, lymphopenia, and abnormal chest radiograph results ( 1 – 3 ). According to the World Health Organization (WHO) ( 4 ), the criteria to define a suspected case of SARS include fever (>38°C), respiratory symptoms, and possible exposure during 10 days before the onset of symptoms; a probable case is defined as a suspected case with chest radiographic findings of pneumonia and other positive evidence.
Although most reported patients with SARS met the WHO criteria, we found two SARS case-patients who did not exhibit typical clinical features. Case 1 was in a 28-year-old physician. He had close contact with three SARS patients on February 1, 2003. After 10 days, he had mild myalgia and malaise with a fever of 37.3°C. He had no cough and no other symptoms. Leukocyte and lymphocyte counts were normal. The chest radiograph showed no abnormalities. He did not receive any treatment except rest at home. His symptoms disappeared after 2 days. He completely recovered and returned to work 4 days after onset of symptoms. After 12 weeks, his serum was positive for immunoglobulin (Ig) G against SARS-CoV in an indirect enzyme-linked immunosorbent assay (ELISA) with inactivated intact SARS-CoV as the coated antigen.
Case 2 was in a 13-year-old boy whose mother had been confirmed to have SARS on February 4, 2003. Fever developed in the boy 20 days after his mother’s onset of the disease. He did not come into contact with other confirmed SARS patients during this period. He had a mild headache and diarrhea with a fever from 37.2°C to 37.8°C for 3 days. No other symptoms and signs developed, and a chest radiograph showed no abnormalities. He completely recovered after 5 days. After 12 weeks, his serum was positive for IgG against SARS-CoV, detected with an ELISA.
In both case-patients, SARS was initially excluded, despite close contact with SARS patients, because the symptoms could be explained as a common cold; moreover, no specific diagnostic approaches were available when the patients were ill, since the causative agent of SARS was not identified until March 2003 ( 5 ). However, their serum specimens were positive for IgG against SARS-CoV by ELISA. These results strongly indicate that both patients had been infected with SARS-CoV, although their signs and symptoms did not meet the criteria of the SARS case definition. Mild SARS-CoV infection may not be easily recognized clinically, and such patients may potentially spread the disease if they are not isolated.
|||||||
PMC3016761 | 14519260 | Conclusions
This report represents the third confirmed case of C. muris infection in humans. Previously, C. muris infection was identified, by microscopy and molecular analysis, in an HIV-positive child in Thailand and in an HIV-positive adult in Kenya ( 5 , 7 ). C. muris– and C. andersoni–like oocysts were found in two healthy Indonesian girls, but those diagnoses were not confirmed by molecular tools ( 8 ). One putative C. muris infection was reported in an immunocompromised patient in France on the basis of sequence analysis of a small fragment of the SSU rRNA ( 6 ). However, the sequence presented was more similar to that of C. andersoni (2-bp differences in a 242-bp region) than to C. muris (8-bp differences in the same region).
Although it is not possible to determine whether C. muris contributed to the medical problems of this patient, detecting C. muris in her stool sample was an unexpected finding. A major difference between C. parvum or C. hominis and C. muris is that C. parvum and C. hominis normally colonize the intestine, whereas C. muris is a gastric pathogen in cattle. Anderson ( 12 ) and Esteban and Anderson ( 13 ) reported that another gastric species, C. andersoni, infects only the glands of the cattle stomach (abomasum), where it retards acid production. These researchers postulated that this process may affect protein digestion in the abomasum and account for the fact that milk production in cows chronically infected with C. muris appears to be reduced by approximately 13%. Thus, an infection with C. muris might cause similar protein digestion problems in humans, particularly in HIV-positive persons.
Even though only a few cases of C. muris infections have been identified so far in humans, gastric cryptosporidiosis occurs much more often than believed, especially in HIV-positive persons. Up to 40% of cryptosporidiosis in HIV-infected persons includes gastric involvement ( 14 ). Although most gastric Cryptosporidium infections in HIV-positive persons are likely caused by C. parvum or C. hominis because of immunosuppression, the contribution of C. muris probably has been underestimated. Thus, molecular characterizations of stomach tissues from patients with gastric cryptosporidiosis may help us to understand the pathogenesis of human Cryptosporidium infection.
Our report expands the geographic range of suspect C. muris infections in humans and suggests that this species may be a global emerging zoonotic pathogen. This pathogen may be of particular importance to persons living in regions where rodents live in close proximity to humans and sanitation may be minimal. C. muris may also be more prevalent than currently recognized. The organism is nearly twice as large as C. parvum and closer in size to Cyclospora cayetanensis . Although Cyclospora autofluoresces while Cryptosporidium does not ( 15 ), C. muris could still be easily misdiagnosed, since few laboratory workers would be familiar with C. muris or its morphology. | Cryptosporidium muris , predominantly a rodent species of Cryptosporidium , is not normally considered a human pathogen. Recently, isolated human infections have been reported from Indonesia, Thailand, France, and Kenya. We report the first case of C. muris in a human in the Western Hemisphere. This species may be an emerging zoonotic pathogen capable of infecting humans.
Keywords: | Cryptosporidiosis can be a debilitating diarrheal disease. While infections are normally acute and self-limiting in immunocompetent persons, cryptosporidiosis can be life threatening in those with compromised immune systems. In humans, cryptosporidiosis is caused predominantly by Cryptosporidium parvum or C. hominis (the latter was previously known as the C. parvum human genotype), and major outbreaks of the disease have been clearly associated with contaminated drinking water ( 1 ).
Recently, another species of Cryptosporidium , C. muris , has been suggested to be of concern to human health. C. muris is a parasite first identified in the gastric glands of mice ( 2 ). Experimental transmission studies have shown that the parasite readily infects multiple nonrodent hosts including dogs, rabbits, lambs, and cats (3). C. muris –like organisms have also been reported as opportunistic infectious agents in immunocompromised nonhuman primates (4). In the past 2 years, five cases of infections with C. muris or C. muris –like parasites have been reported from HIV-positive and healthy persons in Kenya ( 5 ), France ( 6 ), Thailand ( 7 ), and Indonesia ( 8 ). In this paper, we report on the first documented case of C. muris in a human in the Western Hemisphere. The parasite was recovered during the summer of 2002 in stools of an HIV-positive Peruvian woman with severe diarrhea. This finding was confirmed by light microscopy, polymerase chain reaction (PCR)–restriction fragment length polymorphism (RFLP), and DNA sequencing.
The Study
In 2002, we conducted a year-long collaborative study on the epidemiology of Cyclospora cayetanensis infections in Perú. As part of that study, we collected approximately 100 stool samples in 2.5% potassium dichromate solution from persons in Lima and Iquitos with Cyclospora infection. Fecal samples were initially identified as Cyclospora -positive in Lima, and then transported to the United States for additional confirmation using wet mount and Nomarski interference contrast microscopy.
Two stool samples, taken on two sequential days from an HIV-positive woman 31 years of age, contained oocysts that appeared, based on morphology, to be Cryptosporidium muris. Low numbers of Cyclospora cayetanensis and Blastocystis hominis oocysts were also identified in the stool samples. The Cryptosporidium muris infection was initially identified by wet mount microscopy, with oocysts (n=25) averaging 6.1 (±0.3) x 8.4 (±0.3) μm (range 5.6–6.4 x 8.0–9.0 μm) and a shape index (length/width) of 1.38 (range 1.25–1.61) ( Figure 1 ). Oocyst numbers were determined semiquantitatively in each sample by hemacytometer, with an estimated 737,000 and 510,000 oocysts/g recovered from the samples submitted on day 1 and day 2, respectively. The diagnosis of C. muris was later confirmed through DNA analysis.
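Summary statistics of this kind (mean ± SD of width and length, plus the length/width shape index) follow directly from per-oocyst measurements. A minimal sketch with hypothetical measurements, not the study's raw data:

```python
# Mean +/- SD and shape index (length/width) for oocyst morphometry.
# The measurements below are hypothetical; the study measured n=25 oocysts.
import numpy as np

lengths = np.array([8.0, 8.2, 8.4, 8.6, 9.0])  # um, illustrative
widths = np.array([5.6, 6.0, 6.1, 6.2, 6.4])   # um, illustrative

shape = lengths / widths
print(f"width:  {widths.mean():.1f} (+/-{widths.std(ddof=1):.1f}) um")
print(f"length: {lengths.mean():.1f} (+/-{lengths.std(ddof=1):.1f}) um")
print(f"shape index: {shape.mean():.2f} (range {shape.min():.2f}-{shape.max():.2f})")
```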
HIV had first been diagnosed in the patient in November 2000 by enzyme-linked immunosorbent assay and Western blot (immunoblot). She arrived at the hospital clinic in June 2002 with fever and a >3-month history of diarrhea, and she reported losing approximately 25 lbs in the previous 7 months, consistent with HIV wasting syndrome. Her chest x-ray was abnormal, but four direct sputum examinations for acid-fast bacteria using Ziehl-Neelsen staining were negative, as were efforts at culturing Mycobacterium tuberculosis.
Other laboratory values for this patient at the time of stool sample collection were as follows: CD4 cell count 66/μL; hematocrit 36%; leukocytes 4,100/μL with 4% bands, 55% neutrophils, 27% lymphocytes, and 0% eosinophils; urine examination normal; creatinine 0.8 mg/dL; urea 21 mg/dL; glucose 105 mg/dL; serum glutamic oxalacetic transaminase 30 IU/L; serum glutamic pyruvic transaminase 46 IU/L; and bilirubin 0.9 mg/dL.
The diagnosis of Cryptosporidium in the patient’s samples was confirmed by a small-subunit rRNA-based nested PCR, which amplified a portion of the rRNA gene (830 bp). The Cryptosporidium species was determined from the banding patterns of restriction digests of the PCR products with SspI, VspI, and DdeI ( 9 ). The diagnosis was confirmed by DNA sequencing of three independent PCR products from each sample in both directions on an ABI PRISM 3100 (Applied Biosystems, Foster City, CA) instrument. Figure 2 shows the RFLP analysis of three PCR products from each sample with the restriction enzymes SspI and VspI; these results suggested that the PCR products belonged to either C. muris or Cryptosporidium andersoni ( 10 ). Further RFLP analysis with DdeI showed banding patterns identical to those of C. muris ( 9 ; Figure 2 ). All DNA sequences obtained from the six PCR products were identical to those previously reported by Xiao et al. ( 10 , 11 ) for C. muris from a Bactrian camel, a rock hyrax, and mice (GenBank accession nos. AF093997 and AF093498) and to another isolate recently found in an HIV patient in Kenya ( 5 ).
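Species discrimination by PCR-RFLP rests on the fragment sizes each enzyme produces from the ~830-bp amplicon. A toy in-silico digest illustrates the idea; the recognition sites are real (SspI AATATT, VspI ATTAAT, DdeI CTNAG), but the input sequence is random rather than the actual SSU rRNA amplicon, and cuts are placed at the start of each site rather than at the enzyme's true offset:

```python
# Toy in-silico restriction digest: predict RFLP fragment sizes.
# Real recognition sites; random stand-in sequence (not the real amplicon).
import random
import re

def digest(seq, site_regex):
    """Return fragment lengths after cutting at each site match
    (cut placed at the start of the site, a simplification)."""
    cuts = [m.start() for m in re.finditer(site_regex, seq)]
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

random.seed(0)
amplicon = "".join(random.choice("ACGT") for _ in range(830))

enzymes = {"SspI": "AATATT", "VspI": "ATTAAT", "DdeI": "CT[ACGT]AG"}
for name, site in enzymes.items():
    print(name, sorted(digest(amplicon, site), reverse=True))
```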
After the diagnosis of intestinal parasite infection, the patient was treated with TMP-SMX (trimethoprim 160 mg, sulfamethoxazole 800 mg) Forte twice a day for 1 week and then TMP-SMX once a day for Pneumocystis carinii pneumonia prophylaxis. The patient was also placed on AZT/3TC and nevirapine. The patient recovered with no further evidence of Cyclospora , Blastocystis , or C. muris in stool samples taken 2 months posttreatment. She became afebrile and had gained 5 kg as of 2 months’ posttreatment. Molecular analysis of a stool sample collected 122 days after the initial diagnosis confirmed that the patient had recovered from the C. muris infection. | Acknowledgments
We thank the laboratory support personnel at the Instituto de Medicina Tropical Alexander von Humboldt, Universidad Peruana Cayetano Heredia, Lima, Perú, especially Jenny Anchiraico.
This study was supported by Environmental Protection Agency award numbers R-82858602-0 (C.J.P.) and R-82837001-0 to (S.J.U.) and by a research grant (L.X.) from the Opportunistic Infections Working Group at the Centers for Disease Control and Prevention, Atlanta, GA.
Dr. Palmer is a research professor at the University of Florida. Her primary research interests are infectious and tropical disease with special emphasis on field-based research studies in the Americas. | CC BY | no | 2022-01-24 23:36:02 | Emerg Infect Dis. 2003 Sep; 9(9):1174-1176 | oa_package/e6/d5/PMC3016761.tar.gz |
||||
PMC3016762 | 14519236 | China holds the key to solving many questions crucial to global control of severe acute respiratory syndrome (SARS). The disease appears to have originated in Guangdong Province, and the causative agent, SARS coronavirus, is likely to have originated from an animal host, perhaps sold in public markets. Epidemiologic findings, integral to defining an animal-human linkage, may then be confirmed by laboratory studies; once animal host(s) are confirmed, interventions may be needed to prevent further animal-to-human transmission. Community seroprevalence studies may help determine the basis for the decline in disease incidence in Guangdong Province after February 2002. China will also be able to contribute key data about how the causative agent is transmitted and how it is evolving, as well as identifying pivotal factors influencing disease outcome. There must be support for systematically addressing these fundamental questions in China and rapidly disseminating results.
Keywords: | Severe acute respiratory syndrome (SARS) is a newly emerged disease, caused by a previously unknown coronavirus. The first known cases occurred in Guangdong Province in southern China in November and December 2002. During late February 2003, a physician who was incubating SARS traveled from Guangzhou, the provincial capital, to Hong Kong, Special Administrative Region of China, and stayed at a hotel. There, the virus was transmitted from him to local residents and to travelers, who became ill and transmitted disease to others when they returned to Vietnam, Singapore, Canada, and Taiwan, Province of China ( 1 ). SARS has now occurred in >8,450 people with >800 deaths worldwide.
The tally of SARS climbed rapidly in China through May 2003, then decelerated markedly during June. The disease has now been reported in 24 of China’s 31 provinces. By June 26, 2003, a total of 5,327 SARS cases and 348 deaths had been reported from mainland China, including 2,521 cases in Beijing and 1,512 in Guangdong Province.
Since February 2003, teams of technical consultants for the World Health Organization have been working in China to provide assistance to the Ministry of Health and provincial governments on public health responses to the SARS outbreak. A team that began working in China in March reviewed considerable clinical, epidemiologic, and laboratory data with scientists and officials from a variety of settings in Guangdong Province and Beijing. The team worked closely with colleagues from the National and Guangdong Provincial Centers for Disease Control, and together were able to establish that cases occurring in Guangdong beginning in November were clinically and epidemiologically similar to subsequent cases of SARS documented elsewhere.
The team observed detailed, comprehensive data collection forms, which are completed for activities and behaviors and clinical manifestations of patients with SARS. The team was informed that serum and respiratory secretion specimens collected from many patients from Guangdong were being held under appropriate storage conditions, awaiting further laboratory testing.
While a dedicated, collaborative international effort has resulted in substantial understanding of this disease with remarkable speed, critical information is still lacking. We detail a variety of knowledge gaps that should be addressed through a set of activities to optimize prevention and control of SARS within China and globally.
Emergence of SARS-Associated Coronavirus in Humans
Available evidence suggests that SARS emerged in Guangdong Province, in southern China. How and when did it emerge? Did the causative agent evolve in an animal species and jump to humans (or perhaps first to other animal species), or did the virus evolve within humans? The genetic sequence of the virus has been obtained in several laboratories, and phylogenetic analyses have shown that it is unlike other coronaviruses of animal and human origin. Indeed, the virus has been tentatively placed in a new fourth genetic group ( 2 , 3 ).
Why is it so important to answer the question of how SARS emerged? Most recently recognized novel emergent viruses have been zoonotic, usually with a reservoir in wildlife ( 4 , 5 ). Thus, SARS coronavirus, if zoonotic, may provide the basis for modeling and predicting the appearance of other potential zoonotic human pathogens. More importantly, the information may be crucial for control of SARS. If this disease is to be curtailed or eliminated by strict public health measures, blocking further animal-to-human transmission will be critical. Only about half of the cases in Guangdong are attributed to contact with a SARS patient. Transmission from an unknown but persisting animal reservoir might explain this finding; however, a nonspecific case definition (i.e., many “cases” might not actually be SARS) and limitations in contact-tracing capacity are other potential explanations.
Finding a potential animal source is, however, a daunting task. The province is famous for its “wet markets,” where a bewildering variety of live fauna are offered for sale (sometimes illegally) for their medicinal properties or culinary potential. The opportunity for contact, not only with farmed animals but also with a variety of otherwise rare or uncommon wild animals, is enormous. More than one third of early cases, with dates of onset before February 1, 2003, were in food handlers (persons who handle, kill, and sell food animals, or those who prepare and serve food) (Guangdong Province Center for Disease Control and Prevention, unpub. data).
Hypothesis-generating epidemiologic studies should be carried out immediately in China, focusing on early cases of SARS and cases in persons without known contact with infected persons. These studies should also collect information from appropriately selected controls (i.e., matched by categories such as community and age), regarding exposures to animals of any kind in any setting (including food preparation, dietary habits, pets, and a variety of other activities and behaviors in the community).
Plausible hypotheses generated by epidemiologic studies should be briskly followed by intensive, focused laboratory studies where relevant, including surveys of specific animal populations to identify SARS-associated coronaviruses (by culture and polymerase chain reaction [PCR]) or to measure specific antibodies. Some virologic surveys have already been conducted among prevalent animal populations, including those known to harbor other coronaviruses or other viruses transmissible to humans, and wild animals handled and sold in the markets; a variety of animals have been reported to harbor SARS-associated coronavirus. However, whether these animals are transmitting virus or are recipients of virus transmission is not yet clear. Solutions will lie with identifying epidemiologic links, which should guide targeted animal studies. Molecular epidemiologic and genetic studies can then be helpful in evaluating viruses isolated from animals and from humans.
Natural History of the Epidemic
Since the earliest known cases were in Guangdong Province, China has had more time than any other location to observe disease incidence over time. Evidence from Guangdong Provincial Centers for Disease Control suggests that the disease incidence peaked in mid-February, and declined weekly through May. What were the reasons for the decline? Introduction of stringent infection-control measures in hospital settings undoubtedly resulted in reduced incidence in healthcare settings but would not likely have accounted for reductions in community transmission. Efforts have been made to reduce the interval between onset of illness and hospitalization (minimizing the potential for community transmission). This effort likely had substantial impact in reducing disease incidence, as shown elsewhere ( 6 ).
The initial hypothesis was that the virus attenuated after multiple generations of transmission; this hypothesis now seems unlikely. We note several other considerations. Were there a limited number of susceptible people within the population to begin with? Such a concept is possible if there had been earlier spread of a less virulent coronavirus, providing some immunity to a proportion of the population. If so, whether this occurrence was unique to Guangdong will be important to determine.
Alternatively, did the population develop widespread immunity to the causative agent itself? This scenario would require a good deal of asymptomatic or mildly symptomatic disease. At this stage, no reason exists to exclude the possibility of a much wider spectrum of disease than is currently appreciated, since the spectrum of illness has not been fully evaluated.
Another possibility is that a second agent might be required, in addition to coronavirus, to produce severe illness; if this is the case, the epidemiology (like seasonality) of the second agent (perhaps a less recently emerged pathogen for which there is already fairly widespread immunity), rather than coronavirus, may actually be responsible for the decline of the incidence of SARS in Guangdong.
Extensive seroprevalence studies will be helpful for sorting through these possibilities. Analyzing stored serum samples, collected before the onset of this outbreak, could be of immense value in evaluating the possibility of preexisting immunity. Some researchers have found human metapneumoviruses ( 7 ) and species of Chlamydia in patients with SARS, but the importance of these findings is unclear. Systematic evaluation of specimens available from all cases, severe cases, and healthy controls in China regarding the presence of antibodies to coronavirus, as well as hypothesized co-infecting agents, should be done. Important clues may come from seroprevalence and other epidemiologic studies in children. As in other affected countries, children were disproportionately less affected by SARS than adults. Carefully working through the bases for reduced incidence and severity may uncover cross-protecting infectious or immunizing agents or crucial host factors for protection.
“Super-Spreaders”
When documenting the source of person-to-person transmission of SARS has been possible, a substantial proportion of cases have emanated from single persons, so-called super-spreaders ( l ). While contact tracing is undoubtedly incomplete, most infected patients have transmitted illness to few other people. Understanding the differentiating characteristics of persons who transmit, especially patients who are able to transmit to several other people, often after minimal contact, may provide important clues for public health strategies focused on preventing transmission. In addition, better defining environmental settings or circumstances that facilitate high transmission rates would be helpful. China is not unique in documenting super-spreaders. The country could participate in multinational studies to define the characteristics of super-spreaders and their role in the epidemiology of SARS. Of particular interest is the virus load of super-spreaders, compared with those of other infected persons.
Little is known about the importance of fecal-oral transmission or about the length of time that virus shedding occurs in the gastrointestinal tract. Virus shedding in feces has major implications for control strategies and for the possibility of continued carriage and shedding by clinically recovered patients. China has the opportunity to explore the role of fecal spread in the transmission of SARS.
Evolution of the Virus
The causative agent is a coronavirus ( 8 – 10 ), and the entire genome of several strains has been fully sequenced by many laboratories globally ( 2 , 3 , 11 ). Tests have been developed to detect coronavirus genetic sequences by PCR. In addition, tests to detect SARS-associated coronavirus antibodies have been developed, but the sensitivity and specificity of these tests are low, especially early in the illness when public health and clinical needs are greatest. A good test for SARS would be important not only for diagnosis and management but also for investigating the origin of the disease and for defining its epidemiology.
If the causative agent can be isolated from stored specimens from the earliest group of patients (from November 2002 to January 2003), how their genetic sequences compare with those from viruses isolated later from various parts of China and elsewhere, and from animals from Guangdong and Guanxi Provinces, would be useful to know. Mutations may be important for a number of reasons. They may affect transmissibility and virulence; they may provide (or frustrate) therapeutic targets for new drugs; and they may pose challenges for development of diagnostic tests and vaccines. Specimens from Chinese patients provide the longest observation window with which mutational tendencies can be evaluated.
An analysis of 14 full-length sequences suggests that two genetic lineages might have arisen from Guangdong. One lineage is represented by the chain of transmission associated with the physician from Guangzhou who traveled to Hong Kong, Special Administrative Region, in February. The other lineage is associated with isolates from Hong Kong, Guangzhou, and Beijing ( 11 ). If two genetic lineages arose in Guangdong, were there two separate transmission events from an animal host to humans, or did the lineage diverge within humans? Specimens from early cases in Guangdong may be helpful in addressing this question.
Outcomes of Infection
Epidemiologic, immunologic, and microbiologic factors associated with severe outcome are not fully defined. Clearly, though, a principal determinant for poor outcome is advancing age. As with other respiratory diseases, age-related coexisting conditions reduce the capacity to compensate to conditions associated with severe disease. Understanding other specific factors that result in poor outcome will have value for optimizing therapeutic approaches.
Experienced clinicians disagree about the value of early treatment with ribavirin and high-dose corticosteroids, and some are reluctant to ventilate patients because of the high risk of transmission to healthcare workers associated with intubation. More data are needed to help define the most effective treatment strategy, particularly for areas with limited resources.
Extraordinary clinical expertise exists among health professionals in Guangdong Province. They have substantial experience with a variety of antivirals, antibiotics, alternative (herbal) medicines, and corticosteroids, and with using assisted ventilation in the treatment of patients with SARS ( 12 ). While randomized clinical trials have not been conducted, careful compilations of existing case series data would be helpful in evaluating the potential effectiveness of various management regimens.
The store of clinical data, accumulated from treating hundreds of SARS cases, needs to be put to good use. One priority is to investigate clinical, epidemiologic, and laboratory predictors of poor outcome. Such experience will supplement other recently published data from Hong Kong, Special Administrative Region ( 1 , 13 – 15 ), and Singapore ( 16 ).
Several questions remain unanswered. Do patients exposed to high viral doses (for which a short incubation period may be a surrogate) or to a co-infecting pathogen have poorer outcomes? What is the impact of multiple exposures to SARS-associated coronavirus, like that which occurred among healthcare workers early in the epidemic? Do patients infected early in the transmission cycle perform more poorly than those infected during subsequent cycles of transmission?
Learning from the SARS Epidemic
Seldom have intersections between politics, economic development, and public health been more graphically demonstrated. While awaiting the development of effective prophylactic and therapeutic options, many countries have had to muster substantial political will for quick and transparent steps to declare the presence of a lethal pathogen within their borders; conduct surveillance and report the results; use contact tracing, quarantine, and border control measures when needed; and apply stringent infection control measures in healthcare settings. Providing the general public with timely and candid information about the magnitude of the problem, the known risks, and how persons can protect themselves has also been necessary. These actions were necessary even when they appeared contrary to economic interests in the short run. As has been shown in China, delaying implementation can result in disastrous public health consequences extending beyond its borders, in addition to damage to the economy and national image.
The work outlined here involves descriptive and epidemiologic inquiry, fundamental to establishing an understanding of this new pathogen and disease. While refined and esoteric research will likely also be conducted, support must first be established for systematically addressing these basic questions and rapidly disseminating results through publication in international journals, presentations at international meetings, and in public communications. In China, in contrast with many other settings globally, scientific inquiry and dissemination of results to the international community are subject to institutional interference. The SARS pandemic has shown that virulent pathogens are beholden to no political philosophy or edict. Only careful and rapid application of knowledge and reason through a variety of public health measures has been effective in minimizing the spread and severity of the SARS epidemic. More information and data generated from studies of the epidemic in China are needed immediately to save lives and to prevent fear and disease, both in China itself and elsewhere in the world.
SARS became a public health emergency for China, where investment in health services has been given low priority for many years. Maintaining control in a country so large and diverse will be a major challenge for the months, and perhaps years, to come. Each of China’s mainland provinces (including municipalities with equivalent status, autonomous regions, and special administrative regions) is like a country within a country. Many are larger than most countries in Europe. Some, such as Shanghai, are wealthy and highly developed, while others such as Guangxi (bordering Guangdong and Vietnam) are poor and typical of developing countries. Given the potential for reemergence of SARS in the future, if sustained control measures are not in place in China, the possibility of controlling the global threat posed by the disease until new technology (i.e., an effective vaccine) is available may be slight. Key strategies include effective disease surveillance and reporting with early detection and isolation; hospital infection control during triage and treatment of cases; and transparent, open public communication about risk and disease magnitude.
China has recently begun to vigorously address the need for better surveillance, accurate reporting, and forthright public communication. Substantial epidemiologic, clinical, virologic, and immunologic expertise and interest are available within China to address the fundamental questions. International expertise is also available to provide guidance, feedback, and assistance when requested. Identifying the modest resources needed to implement the work should not be a barrier. Support from the government will be needed to carry out valid, transparent studies, and for permission to report the findings, regardless of the conclusions. SARS provides a jarring reminder of the preparedness that is needed to respond to emerging and existing disease threats; it highlights the need to reinvest in health in China, and strengthen public health programs, including surveillance systems and response capacity.
While disease incidence has abated in China and in other locations globally, the disease may still represent an important threat in the future. Many of the solutions to solve the multifaceted puzzle of SARS and to prevent future epidemics must come from China. Without solutions from that country, the degree of difficulty for sustained control of the problem globally is raised still higher. | Dr. Breiman is head of the Programme on Infectious Diseases and Vaccine Sciences at the International Centre for Diarrheal Disease Research, Bangladesh – Centre for Health and Population Research in Bangladesh. His research focuses on evaluating new vaccines for use in developing countries and on the epidemiology of emerging infectious diseases. Previously he directed the United States National Vaccine Program Office and was chief of the Epidemiology Section of the Childhood and Respiratory Diseases Branch, Division of Bacterial and Mycotic Diseases, National Center for Infectious Diseases, Centers for Disease Control and Prevention. | CC BY | no | 2022-01-27 23:36:15 | Emerg Infect Dis. 2003 Sep; 9(9):1037-1041 | oa_package/26/9f/PMC3016762.tar.gz |
|||||
PMC3016763 | 14519248 | Materials and Methods
Bacterial Strains and Growth Conditions
Isolates of VREF (n=108) were collected from nosocomial epidemics (n=16), clinical infections (n=20), clinical surveys (n=36), and community surveys (n=36) ( Table 1 ). The genotypes of these isolates have been described previously ( 11 , 12 ). Strains were considered epidemic if they were isolated from patients treated in the same hospital and the same ward, with overlapping stays, and if their AFLP patterns showed at least 90% similarity ( 12 ). Epidemic isolates were recovered from clinical sites, such as blood and urine, as well as from feces. Only one representative isolate from each outbreak was used for analysis. The number of patients involved in each outbreak varied from 4 to >50 ( 12 , 19 – 23 ). Isolates were considered to be derived from a clinical infection if obtained from a clinical specimen, such as blood, urine, or wounds. All surveillance isolates, from patients and healthy persons, were obtained from fecal samples. Surveillance isolates were classified as community derived or, when obtained from hospitalized patients, as clinical surveillance isolates. The duration of hospital stay at the time cultures were obtained was not available for these patients.
Isolates of VSEF (n=92) were derived from clinical infections (n=73), clinical surveys (n=5), and community surveys (n=14). The isolates from clinical infectious sites were obtained from the SENTRY Antimicrobial Surveillance Program, and originated from different hospitals in several European countries (Portugal, Germany, United Kingdom, France, Spain, Italy, Austria, Turkey, Switzerland, Greece, and Poland). Fifty-seven strains were blood isolates, 5 were isolated from urine, 8 from wounds, and 2 from respiratory tract specimens. Patient information was not available. All VSEF isolates derived from clinical surveys of fecal samples were from the University Hospital Maastricht. All VSEF isolates from community surveys of fecal samples were collected in the Netherlands. All bacterial isolates were collected during the 1990s.
Identification and Susceptibility Testing
Enterococci were identified to the species level and tested for the presence of the vanA gene by using a multiplex PCR described by Dutka-Malen et al. ( 24 ). Vancomycin and ampicillin/amoxicillin susceptibilities were determined by standard agar dilution methods, according to National Committee for Clinical Laboratory Standards (NCCLS) guidelines ( 25 ). MICs >16 μg/mL for ampicillin or amoxicillin and >8 μg/mL for vancomycin were considered to indicate resistance.
Esp PCR
All strains were screened for esp by PCR, with two different primer sets (esp 11 [5′-TTGCTAATGCTAGTCCACGACC-3′] to esp 12 [5′-GCGTCAACACTTGCATTGCCGAA-3′] and 14F [5′-AGATTTCATCTTTGATTCTTGG-3′] to 12R [5′-AATTGATTCTTTAGCATCTGG-3′]). PCR conditions included an initial denaturation at 95°C for 15 min for activation of the HotStarTaq DNA polymerase (QIAGEN GmbH, Hilden, Germany), followed by 30 cycles of 94°C for 30 sec, 52°C for 30 sec, and 72°C for 1 min, followed by an extension at 72°C for 7 min. Reactions were performed in 25 μL by using the HotStarTaq Master Mix (QIAGEN GmbH). Strains negative in PCR were checked for the presence of the esp gene by Southern hybridization, as described previously ( 12 ). For this check, we generated an esp -specific probe (956 bp) using primers esp 11 and esp 12 (see above).
Sequencing PurK
The purK gene encodes a phosphoribosylaminoimidazole carboxylase ATPase subunit involved in purine biosynthesis and is one of the seven housekeeping genes selected for multilocus sequence typing of E. faecium ( 13 ). A 492-bp fragment of the purK gene of a selection of strains, distributed over all genogroups, was sequenced by using primers 5′-GCAGATTGGCACATTGAAAGT-3′ and 5′-TACATAAATCCCGCCTGTTTC/T-3′. PCR conditions included an initial denaturation at 95°C for 3 min, followed by 35 cycles of 94°C for 30 sec, 50°C for 30 sec, and 72°C for 30 sec, followed by a final extension at 72°C for 5 min. Reactions were performed in 50 μL by using buffers and Taq polymerase (SphaeroQ, Leiden, Netherlands). The PCR products were purified with a PCR purification kit (QIAGEN GmbH) according to the manufacturer's instructions. Purified PCR products were then sequenced directly with the ABI PRISM BigDye Terminator cycle sequencing kit on an ABI PRISM DNA analyzer (Applied Biosystems, Foster City, CA). Sequences were aligned with BioNumerics (v. 2.5, Applied Maths, Kortrijk, Belgium) software.
AFLP
AFLP typing and computer analysis of AFLP-generated patterns of VSEF were done as described previously ( 11 ), with minor modifications. Briefly, chromosomal DNA was digested with CfoI and EcoRI and ligated to a single adapter with CfoI and EcoRI protruding ends in a simultaneous reaction, followed by PCR using adapter-specific primers. The amplification products were separated and detected by using POP6 polymer on an ABI PRISM 3700 DNA Analyzer (Applied Biosystems). For each sample, 1 μL of the PCR reaction mixture (diluted 8x) was added to 9 μL of Hi-Di formamide containing 12.5 μL/mL of the internal size marker (GeneScan-500 labeled with the red fluorescent dye 6-carboxy-x-rhodamine) in a MicroAmp Optical 96-well reaction plate (Applied Biosystems). The analyses were run in 3 hours. GeneScan software (Applied Biosystems) was used for data collection during the analysis, and the data were subsequently exported into BioNumerics (Applied Maths) for further analysis. The Pearson product moment correlation coefficient was calculated, and the unweighted pair group method with arithmetic averages (UPGMA) was used for cluster analysis. Using this methodology, we previously described four genogroups of VREF ( 11 ). We analyzed all isolates of VSEF and defined a cluster of isolates as a set of individual strains whose AFLP patterns shared at least 65% similarity (the criterion defined for the four genogroups [ 11 ]). Subsequently, to determine the matching genogroup, we compared the AFLP banding patterns of each individual VSEF isolate with those of a library of 404 VREF representing the four AFLP genogroups. The library included VREF recovered from pigs (n=108) and nonhospitalized persons (n=28) as representatives of genogroup A, and strains from poultry (n=32), hospitalized patients (n=196), and calves (n=40), representing genogroups B, C, and D, respectively ( 11 , 12 ).
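The clustering step just described — Pearson correlation between whole AFLP profiles, UPGMA linkage, and a 65% similarity cutoff — can be sketched with SciPy. This is a minimal sketch, not the BioNumerics implementation, and the profiles are random stand-ins for real densitometric traces:

```python
# UPGMA clustering of AFLP profiles with a Pearson-correlation distance.
# Distance = 100 - similarity(%), so a 65% similarity cutoff is distance 35.
# Profiles are random stand-ins for real AFLP traces.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
profiles = rng.random((10, 200))  # 10 isolates x 200 signal points

# pdist "correlation" returns 1 - r; scale to the 0-100 distance of the text.
dist = pdist(profiles, metric="correlation") * 100

Z = linkage(dist, method="average")                  # UPGMA
clusters = fcluster(Z, t=35, criterion="distance")   # cut at 65% similarity
print(clusters)
```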
VSEF isolates were identified by using the identification module in BioNumerics. The genetic distance used for further analysis was 100 minus the calculated Pearson product moment correlation similarity coefficient. The degree of matching was expressed as an identification factor: the quotient of the average genetic distance between the tested strain and each of the isolates in a genogroup divided by the average genetic distance within that genogroup. If the average distance from the tested strain to the genogroup members is almost equal to the average distance among all strains in the genogroup, the identification factor approaches one. Thus, the lower the identification factor, the more likely it is that the test strain belongs to that genogroup.
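The identification factor reduces to a simple ratio of average distances. A minimal sketch with illustrative distances, not data from the study:

```python
# Identification factor: mean distance(test strain -> group members)
# divided by mean pairwise distance within the group.
# Distances are 100 - Pearson similarity (%); the values are illustrative.
import numpy as np

def identification_factor(test_to_group, within_group):
    """Values near (or below) 1 suggest the test strain fits the group."""
    return np.mean(test_to_group) / np.mean(within_group)

test_to_group = [30.0, 28.0, 35.0, 32.0]              # test strain vs 4 members
within_group = [29.0, 31.0, 33.0, 27.0, 30.0, 34.0]   # all 6 member pairs

print(f"identification factor = {identification_factor(test_to_group, within_group):.2f}")
```

| Results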
Using our predefined cutoff points for cluster analysis, we identified four clusters of VSEF ( Figure 1 ). VSEF clustering appeared to be source-related: clusters 1 (n=4) and 2 (n=12) contained predominantly surveillance isolates from community sources, whereas all but one of the clinical infection isolates belonged to clusters 3 (n=66) and 4 (n=10).
On the basis of the identification factors calculated for VSEF against representative isolates of the different VREF genogroups, isolates of cluster 1 (n=4) resembled those of genogroup A, previously allocated to VREF from nonhospitalized persons and pigs ( Table 2 ). Similarly, isolates of cluster 2 (n=12) fit best in genogroup B. Isolates of cluster 3 (n=66) showed almost equal resemblance to genogroups B and C, and isolates of cluster 4 (n=10) were most similar to genogroup C ( Table 2 ). The relationship between AFLP clusters of VSEF and sources was congruent with the previously described clustering of VREF. Almost all VSEF isolates derived from clinical infections (99%) fell in clusters 3 and 4, clearly distinct from most VSEF isolates from community surveys (93%), which were found in clusters 1 and 2. A similar distribution was found previously among VREF, with most isolates from clinical infections (60%) in genogroup C and most isolates from community surveys (89%) in genogroup A ( 11 ).
The presence of the variant esp gene in VREF and VSEF was strongly associated with epidemiologic source: the prevalence of esp was higher among isolates from clinical infections and epidemics than among surveillance isolates ( Figure 2 ). All but one of the VREF isolates associated with nosocomial outbreaks were esp-positive. The prevalence of the variant esp gene among clinical infection isolates was 57% for VSEF and 40% for VREF (p=ns). The prevalence among clinical and community survey isolates was low for VREF (6% and 3%, respectively), and the gene was completely absent among the 19 VSEF surveillance isolates tested.
Associations similar to those of the variant esp gene were found between ampicillin resistance and epidemiologic source ( Figure 2 ). All but one of the isolates associated with nosocomial VREF outbreaks were resistant to ampicillin, as were 81% and 65% of infection isolates of VSEF and VREF, respectively. Thirty-one percent of 36 nosocomial surveillance isolates of VREF were ampicillin resistant, compared with none of the five VSEF isolates obtained by clinical surveillance (p=ns). Finally, all but one VREF isolate (n=36) and all VSEF isolates (n=14) obtained by surveillance of healthy persons were susceptible to ampicillin. When these data were combined, strong associations were found between the presence of the variant esp gene and ampicillin resistance, in both VREF and VSEF: 98% of esp-positive VSEF and 92% of esp-positive VREF were resistant to ampicillin, compared with 37% of esp-negative VSEF and 20% of esp-negative VREF isolates (p<0.0001).
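The text reports p-values without naming the statistical test; for 2×2 comparisons such as esp carriage versus ampicillin resistance, Fisher's exact test is a standard choice. A sketch with illustrative counts reconstructed from the reported percentages, not the study's raw data:

```python
# Fisher's exact test on a 2x2 table (esp status vs. ampicillin resistance).
# Counts are illustrative approximations of the reported percentages
# (98% vs. 37% resistant); the paper does not state which test was used.
from scipy.stats import fisher_exact

#                amp-R  amp-S
table = [[41, 1],       # esp-positive VSEF (~98% resistant)
         [19, 32]]      # esp-negative VSEF (~37% resistant)

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, p = {p_value:.2e}")
```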
The purK housekeeping gene was sequenced in 103 isolates: 64 VREF and 39 VSEF. The previously described type 1 allele was found in 39 isolates: 23 VREF and 16 VSEF. This allele type was associated with the presence of the variant esp gene and with ampicillin resistance, but not with vancomycin resistance. The variant esp gene was found in 25 (64%) of 39 isolates carrying the purK type 1 allele but in only 1 (2%) of 64 isolates carrying other purK alleles (p<0.0001). Similarly, ampicillin resistance was detected in 36 (92%) of 39 isolates with the purK type 1 allele but in only 5 (8%) of 64 isolates with other allele types (p<0.0001). In contrast, the vanA transposon was present in 23 (59%) of 39 isolates with the purK type 1 allele and in 41 (64%) of 64 isolates with other allele types (p=ns).
Our study demonstrates the genetic relatedness of clusters of isolates of vancomycin-resistant and -susceptible E. faecium strains from different epidemiologic sources and provides evidence for selection of an E. faecium subtype associated with hospital outbreaks. This subtype is characterized by the presence of both ampicillin resistance and the variant esp gene. Furthermore, our findings suggest random horizontal spread of the vanA transposon to multiple genogroups of E. faecium . We hypothesize that the rise in infections caused by VREF resulted from nosocomial selection of a specific ampicillin-resistant E. faecium genotype harboring the variant esp gene and subsequent horizontal transfer of the vanA transposon.
Our study confirms that the previously demonstrated dichotomy between VREF isolated from healthy persons and from patients ( 11 ) also exists for vancomycin-susceptible E. faecium isolates. Among VREF isolates, we identified four genogroups that were associated with particular hosts and environments and in which most isolates from healthy persons clustered distinctly from patient isolates. We showed that vancomycin-susceptible isolates clustered into three of these groups and that VSEF isolates from healthy persons also clustered distinctly from patient isolates. The genetic relationship between isolates and the genetic distinction between the four genogroups were based on AFLP analysis; we recently confirmed these findings with multilocus sequence typing (MLST) ( 13 ). Other researchers have also demonstrated host specificity of E. faecium. Quednau et al. suggested host specificity of isolates from chicken, pork, and humans by comparing restriction endonuclease profiles ( 26 ). In contrast, a recent study by Vancanneyt et al., which also used AFLP, did not confirm host-specific clustering of E. faecium ( 27 ). They described two main genomic groups in a population of 78 E. faecium strains isolated from seven European countries, and both groups comprised strains from (healthy) humans, animals, and food. All human clinical strains clustered in the larger genogroup, as did all strains (n=16) containing the vanA gene. Our finding that the vanA transposon was present in isolates of all genogroups proves that acquisition of this transposon neither was influenced by nor affected the preexisting relationship between the bacterium and host, which is the result of long-term coevolution and mutual adaptation.
The presence of four genogroups in E. faecium seems to parallel the phylogenetic structure of E. coli, in which four ancestral groups (A, B1, B2, and D) with some level of host specificity have been described (28). However, in contrast to our finding of a single genetic lineage in E. faecium related to clinical symptoms and carrying the esp virulence gene, clinical isolates of E. coli are more widely distributed among the different ancestral groups (29). Furthermore, MLST of pathogenic E. coli strains showed that different ancestral lineages have acquired the same virulence factors (30), indicating that pathogenic potential in E. coli is not confined to a single ancestral lineage, unlike what our findings suggest for E. faecium. Human and animal pathogenic E. coli strains share closely related genotypes and carry similar virulence factor profiles, suggesting that certain E. coli strains are pathogenic for both animals and humans (31). Whether this holds true for pathogenic E. faecium strains is unknown.
We recently found that the presence of the variant esp gene is associated with nosocomial outbreaks of VREF on three continents, whereas this gene was not found in VREF strains isolated from healthy persons or animals (12). The outbreak strains were also characterized by a specific allele type of the purK gene, one of the housekeeping genes sequenced in the MLST method (13). Recently, other investigators reported the presence of the variant esp gene in clinical isolates of VSEF, demonstrating that this gene is not linked specifically to the vanA transposon (15–18). The findings of our study confirm the strong association of the esp gene with hospital outbreaks and clinical infections, among VREF as well as VSEF. Although the esp gene is virtually absent among community isolates, its presence among VSEF and VREF from clinical infectious sites apparently unrelated to hospital outbreaks implies that the gene is not exclusively related to epidemic strains. Excluding the outbreak potential of the esp-positive VSEF strains in this and other studies is difficult, as only a few outbreaks with VSEF have been documented (32–34). We investigated isolates from one hospital outbreak of ampicillin-resistant E. faecium in Norway and could not demonstrate the presence of the variant esp gene (data not shown).
Little is known about the function of the variant esp gene, although epidemiologic findings support its role as a virulence factor. In E. faecalis, the homologue of this gene encodes the enterococcal surface protein, and its presence has been associated with enhanced adherence to uroepithelial surfaces, but not with increased virulence, in a mouse model (35). Moreover, in E. faecalis, the esp gene was highly associated with biofilm-formation capacity (36). Increased adherence and biofilm formation by esp-positive E. faecium strains might explain the association of this gene with hospital outbreaks.
Like the variant esp gene, ampicillin resistance was found more frequently in isolates associated with infections and nosocomial outbreaks, both in VSEF and VREF. This source relationship is probably caused by the selective pressure of β-lactam antibiotics, which are used extensively in hospitals. Emergence of ampicillin resistance in E. faecium was demonstrated as early as the 1980s and appeared to precede the emergence of vancomycin resistance by 10 years (2). A correlation between high prevalences of the esp gene and antibiotic resistance among E. faecium isolates from hospitalized patients was also reported recently by Coque et al. (17).
Considering the sources of the isolates, the presence of ampicillin and vancomycin resistance, the presence of the variant esp gene, and the type 1 allele of the purK gene, we propose an evolutionary scheme for the specific genogroup of E. faecium associated with nosocomial outbreaks (Figure 3). However, our findings might have been biased by the composition of our collection of isolates and by the fact that purK was sequenced in only a subset of isolates. The esp gene and ampicillin resistance clearly can co-occur in a distinct genetic lineage of E. faecium characterized by the type 1 allele of the purK gene. We propose that E. faecium strains containing the type 1 allele of the purK gene have acquired the esp virulence gene and that this E. faecium genotype (purK-1, esp-positive) is prominently present among clinically relevant strains and virtually absent among survey isolates. Yet a substantial proportion of the clinically relevant strains with genotype purK-1 do not carry the esp gene, which emphasizes that virulence genes of E. faecium other than esp are involved in the development of infections. Ampicillin resistance was found predominantly in the purK-1 genotype and is almost absent among other E. faecium genotypes, presumably as a result of selective antibiotic pressure. Chromosomal linkage of the purK-1 allele, the variant esp gene, and ampicillin resistance could have promoted this selection. Finally, glycopeptide usage in and outside hospitals, both in humans and in animals, resulted in the selection of vancomycin-resistant strains in the purK-1 genotype as well as in the other genotypes. The presence of similar proportions of vancomycin resistance in all genotypes probably reflects horizontal transfer of the vancomycin-resistance transposon. This hypothesis implies the development of a hospital-adapted genogroup of E. faecium, characterized by the type 1 allele of purK, the variant esp gene, and ampicillin resistance, which spread unnoticed, thereby creating a pool of strains with epidemic potential. Only after becoming vancomycin resistant was this genogroup recognized as clinically relevant.

Abstract
The epidemiology of vancomycin-resistant Enterococcus faecium (VREF) in Europe is characterized by a large community reservoir. In contrast, nosocomial outbreaks and infections (without a community reservoir) characterize VREF in the United States. Previous studies demonstrated host-specific genogroups and a distinct genetic lineage of VREF associated with hospital outbreaks, characterized by the variant esp gene and a specific allele type of the purK housekeeping gene (purK1). We investigated the genetic relatedness of vanA VREF (n=108) and vancomycin-susceptible E. faecium (VSEF) (n=92) from different epidemiologic sources by genotyping, susceptibility testing for ampicillin, sequencing of purK1, and testing for presence of esp. Clusters of VSEF fit well into previously described VREF genogroups, and strong associations were found between VSEF and VREF isolates with resistance to ampicillin, presence of esp, and purK1. Genotypes characterized by presence of esp, purK1, and ampicillin resistance were most frequent among outbreak-associated isolates and almost absent among community surveillance isolates. Vancomycin resistance was not specifically linked to genogroups. VREF and VSEF from different epidemiologic sources are genetically related; evidence exists for nosocomial selection of a subtype of E. faecium, which has acquired vancomycin resistance through horizontal transfer.
Enterococcus faecium has become an important nosocomial pathogen, especially in immunocompromised patients, creating serious limitations in treatment options because of cumulative resistance to antimicrobial agents (1). In the United States, the emergence of nosocomial E. faecium infections was characterized by increasing resistance to ampicillin in the 1980s and a rapid increase of vancomycin resistance in the next decade (1, 2). The emergence of vancomycin-resistant E. faecium (VREF) in the United States illustrates the transmission capacities of bacteria and the possibility of a postantibiotic era for nosocomial infections in critically ill patients.
The global epidemiology of VREF is not well understood. In the United States, prevalences of colonization and infection are high among hospitalized patients, but a community reservoir of VREF in healthy persons or animals seems to be absent ( 3 , 4 ). In contrast, in Europe, colonization and infection rates within hospitals remain low, although colonization among healthy persons and animals is prevalent ( 5 – 10 ).
Previous studies suggested host specificity of VREF genogroups (11), and isolates associated with nosocomial outbreaks seemed to be genetically distinct from nonepidemic VREF isolated from humans and animals (12). The differences between epidemic and nonepidemic isolates were based on genetic relatedness, as determined by amplified fragment length polymorphism (AFLP) analysis, and the presence of an identical sequence of the purK housekeeping gene in epidemic strains (12). A recently developed multilocus sequence typing scheme for E. faecium confirmed that epidemic isolates belonged to a specific genetic lineage (13). Moreover, a variant of the esp gene, which has been found to be more prevalent among isolates of E. faecalis associated with infections (14), was found in all but one epidemic hospital-derived VREF isolate and not among community-derived VREF (12). Subsequently, other investigators described the variant esp gene in vancomycin-susceptible E. faecium (VSEF), and this gene appeared to be predominantly present among clinical isolates (15–18). These findings suggest the existence of a specific subpopulation of E. faecium, comprising both VREF and VSEF, associated with hospital outbreaks and infections.
In this study, we further investigated the genetic relationship between VREF and VSEF isolates derived from different epidemiologic sources, such as hospital outbreaks, infections, and colonization among hospitalized patients and healthy persons. The genetic relatedness was linked to the presence of the variant esp gene and antibiotic resistance to ampicillin and vancomycin. On the basis of our findings, we constructed an evolutionary scheme describing the sequential steps in the development and selection of ampicillin- and vancomycin-resistant E. faecium strains.

Acknowledgments
We thank N. Bruinsma for vancomycin-susceptible Enterococcus faecium samples.
Ms. Leavis worked on this study while a medical intern at the Division of Acute Internal Medicine and Infectious Diseases at University Medical Center, Utrecht, Netherlands, and at the National Institute of Public Health and the Environment (RIVM), Bilthoven, Netherlands. Her research interests include the epidemiology of vancomycin-resistant enterococci and enterococcal pathogenicity mechanisms.

Emerg Infect Dis. 2003 Sep;9(9):1108–1115.
PMC3016764 (PMID 14531382)

To the Editor: The worldwide pattern of severe acute respiratory syndrome (SARS) transmission in 2003 suggests that transmission has occurred more frequently in communities that share certain social and cultural characteristics. Of 8,500 probable cases since March, >90% were reported from China (including the mainland, Hong Kong, and Macau) and Taiwan. Of the other 27 countries reporting SARS occurrences, 23 reported <10 cases and the others 1–3 cases. The small number of transmissions in these other countries suggests that the close contact required for transmission did not occur, whereas in China, community-based transmission has continued. In contrast, the relatively large number of cases in Canada, the United States, Singapore, and Vietnam (which comprise 7% to 10% of the total SARS cases worldwide) is related to the fact that relatively prolonged contact occurred because of the patients' close cultural ties with China. Why does Japan still have no cases of SARS, despite its geographic proximity to the most affected areas? We suggest that transmission has not occurred because Japan remains a society mostly closed to non-Japanese persons and has a history of casual contact between its citizens and the travelers and noncitizens who reside there.
Hospitals have functioned as junctions among varied communities in spreading the SARS virus further. Because of SARS' likely place of origin, the initial "community" included Chinese persons, who then kindled the chain of transmission to other communities throughout the world. Daily, close contact between SARS patients and hospital personnel led to an unusually large number of infections among medical staff members. Effective prevention measures such as vaccines are not available, and their absence may be a factor in the spread of the infection.
Even in the era of globalization and mass air transit, most persons live inside a relatively small circle of community, made up of others of similar ethnicity, religious beliefs, educational level, and social class who live in the same vicinity; this sort of small circle has been described as “mutual coexistence” by anthropologist Kinji Imanishi ( 1 ). Basically, the SARS-associated coronavirus began circulating among members of such a community. This theory does not suggest that certain ethnic groups are predisposed to be susceptible to SARS.
Why have few cases of SARS occurred in children? All age groups are susceptible to the SARS virus, which is new to humans. However, adults have more opportunities than children to become infected through contacts in their daily lives. Rapid isolation of adult patients reduced the frequency of exposure for children in the household; this is in contrast to the usual infectious diseases of childhood, since children do not have immunity against many age-old microbes.
Some contradictions exist in our interpretation of the SARS transmission pattern. Investigations in Canada, Hong Kong, and elsewhere have shown that some infections followed casual, brief contact or that the link between the source and the case was not at all clear. We may have missed other important routes of transmission, or a totally unknown element may be involved. Without an answer for this discrepancy, we note that the clinical virology of SARS, such as the pattern of virus shedding and the host immune response, is still developing (2). For example, a total of 19 cases in China were identified as SARS by coronavirus isolation, polymerase chain reaction, or serologic tests. For two case-patients, the results of three tests were positive; 10 case-patients had negative test results; and in 14 case-patients, the virus was not isolated. Interpreting these results is difficult. In the United States, 97% of the probable cases were attributed to a recent history of international travel to SARS-affected areas. Antibodies to SARS-associated coronavirus were demonstrated for 8 of 41 probable case-patients in convalescent-phase serum, bringing the proportion of laboratory-confirmed cases to 20%, even among the probable cases, and 0% among the suspected cases in the United States so far (3). These results are the best available from laboratories with the current limited technical knowledge. We are not persuaded that casual contact with SARS patients in unfamiliar settings results in contracting the disease.
The winter of 2003 will be critical for observing how the virus behaves, whether the winter climate accelerates transmission, and how we handle that acceleration. Despite current global efforts, thin lines of transmission may remain in China; the virus may flare up again. Officials in China and at the sites of the outbreak must interrupt as many chains of transmission as possible before October. Surveillance should also be intensified. Ongoing study to improve laboratory diagnosis and clinical virology is key, so that effective isolation can be practiced; at present, these measures are the only ones known to interrupt the transmission of SARS. The group on which to focus should be the community in close contact with previous outbreak areas.

Emerg Infect Dis. 2003 Sep;9(9):1183–1184.
PMC3016765 (PMID 14519237)

Severe acute respiratory syndrome (SARS) is now a global public health threat with many medical, ethical, social, economic, political, and legal implications. The nonspecific signs and symptoms of this disease, coupled with a relatively long incubation period and the initial absence of a reliable diagnostic test, limited the understanding of the magnitude of the outbreak. This paper outlines our experience with public health issues that have arisen during this outbreak of SARS in Hong Kong. We confirmed that case detection, reporting, clear and timely dissemination of information, and strict infection control measures are essential in handling such an infectious disease outbreak. An outbreak response unit is crucial to combat any future outbreak.
Severe acute respiratory syndrome (SARS) originated in November 2002 in the Guangdong Province of China and, by February 2003, had spread to Hong Kong and subsequently to 32 other countries or regions, infecting approximately 8,459 patients and resulting in >800 deaths (1). The overall mortality rate is approximately 14% to 15%, ranging from <1% in persons <24 years of age to >50% in persons >65 years of age (2). The cause of SARS is not yet confirmed, but a novel coronavirus has been identified that resembles the virus found in civet cats (3, 4). SARS is the latest in a series of new infectious diseases (e.g., HIV/AIDS, Ebola, Nipah, and avian H5N1 influenza) that are adding stress to a healthcare system already dealing with the resurgence of established conditions (e.g., dengue, malaria, and tuberculosis). As global air travel is now commonplace and has facilitated the international spread of SARS, identifying and globally publicizing the lessons learned from the latest outbreak are important.
Risk Factors for Spread of SARS
Mode of Transmission
The mechanism of transmission of the agent or agents causing SARS is not yet fully understood but is probably mainly by droplet secretions, fomites, or person-to-person contact, as much of the transmission in Hong Kong has been limited to healthcare workers and family members. To date, no evidence of airborne transmission exists. In the Amoy Gardens outbreak in Hong Kong, aerosolization of fecal waste contaminated with the SARS agent has also been proposed to have contributed to transmission. The virus has been reported to be stable in feces and urine at room temperature for at least 1–2 days, and up to 4 days in stool from patients experiencing diarrhea ( 5 ). After drying on plastic surfaces, the virus can survive for up to 48 hours, although commonly used disinfectants and fixatives are effective against it ( 5 ).
Super-spreading patients may play a role in the spread of the disease. For instance, the Hong Kong index patient is thought to have infected persons who transmitted the virus worldwide, subsequently resulting in outbreaks of >300 patients in Amoy Gardens in Hong Kong and >60 cases in Singapore ( 6 – 10 ). These last two clusters may have been started by two persons undergoing hemodialysis. Another hemodialysis patient has been involved in the transmission of SARS in Toronto; therefore, such patients, who may have a relatively depressed immune system with associated high viral loads, may be unduly facilitating transmission of the virus. A more direct role of hemodialysis patients in the spread of viral infections has been previously observed in Edinburgh, Scotland, in the late 1960s, where transmission of hepatitis B was associated with mortality rates of 24% and 31% in renal patients and staff members, respectively ( 11 ).
Existing Medical Practices
Procedures such as the use of ventilators and nebulized bronchodilators have been reported to have led to spread by droplet transmission and aerosolization of virus-containing particles (12, 13). Similarly, other procedures, such as cardiopulmonary resuscitation, use of positive airway pressure devices, bronchoscopy, endotracheal intubation, airway suction, and sputum suction, are thought to increase the risk for infection (7). Although the use of such equipment in the treatment of most pneumonias, except influenza, presents no risk to staff, the emergence of SARS has thrown into sharp focus the general safety of such routine practices, particularly when dealing with novel infectious agents. The SARS outbreak is unique in its propensity to infect healthcare workers; for instance, in China approximately 20% of cases are in healthcare workers, and early in the outbreak the rate was closer to 90% (14, 15).
Simple measures such as hand washing after touching a patient, the use of an appropriate and well-fitted facemask, and the introduction of infection control measures at an early stage, as well as quarantine of patients, may have reduced transmission ( 12 ). Restricting visitors to the hospital would further reduce the risk for transmission into the community. However, despite stringent use of full infection control procedures, breakthrough cases of SARS still occurred in healthcare workers.
The SARS-associated coronavirus has been reported to infect lymphocytes, reducing their numbers in the Hong Kong patients by 30% (13). Immune-mediated cellular damage to the lungs has been reported (7) and has prompted the use of steroids in these patients. Given the role of super-spreading patients (10), who have relatively depressed immune systems, steroid use may further increase the viral load and prolong shedding of viable viral particles past the 1–2 weeks after symptoms disappear, potentially increasing the transmission of the disease and the duration of infectivity of the patient.
Complexity of the SARS Outbreak
Two overlapping sets of disease signs and symptoms have been reported, with some patients having varying degrees of enteric disease. Patients from China and at a number of Hong Kong hospitals have had relatively low rates of diarrhea (10% to 20%) ( 3 , 13 ), whereas patients from Amoy Gardens and Canada have had higher rates, 50% to 70% ( 9 , 16 , 17 ). Some of these differences may result from the timing of data collection, with collection of data later in the course of the patient’s illness including symptoms of diarrhea that may be associated with antibiotic therapy. However, these data suggest that possible differences in the mode of transmission, such as respiratory droplet compared to fecal-oral, or differences in the specificity of the organism to the respiratory or gastrointestinal tracts may also be present. Mutations in isolates from respiratory or gastrointestinal tracts from the same cattle infected with coronavirus have been previously reported ( 18 ); such mutations may contribute to the observed differences in symptoms.
The nonspecific disease signs and symptoms, long mean incubation period (6.4 days), long time between onset of symptoms and hospital admission (from 3 to 5 days) (6), and lack of a reliable diagnostic test in the early phase of the illness (19) can lead to transmission to frontline healthcare workers and the community. Similarly, the signs and symptoms in elderly patients, in whom the primary disease phase may be muted without evident fever, may further contribute to the spread of SARS. Additionally, as with other diseases, misdiagnosis can have fatal consequences. For example, in 2001, an airline cabin crew member infected with malaria was misdiagnosed by two physicians, who did not identify the fact that she had recently traveled to a malaria-endemic area (20). She was treated for a common cold and died within 1 week of a malaria diagnosis by a tropical medicine specialist. Misdiagnosis of a case of SARS, particularly in a super-spreader in whom the disease symptoms may differ, could lead to rapid dissemination through the population. The nonspecific features and lack of an early diagnostic test have also made diagnosis difficult, with a potential threat to the community if such patients are discharged.
Disseminating Information
The accuracy and timeliness of the reporting and dissemination of data relating to SARS are important issues affecting public perception, and hence, fear, as well as the implementation of programs to limit spread of the disease. The initial inadequate reporting of cases in China ( 21 ) has tarnished the country’s reputation, led to mistrust as to the magnitude of the outbreak, and may have hindered implementation of preventative measures. Similarly, media attention, which plays a major role in the widespread dissemination of information, has a tendency to sensationalize information, leading to misconceptions over community preventative strategies, government and institutional procedures, and the magnitude of the outbreak. On the other hand, lack of information led to the development of public myths, with people in Guangdong believing that boiling white vinegar would protect them from infection and leading to carbon monoxide poisoning from charcoal burning to heat the vinegar ( 22 ).
Challenges to the Medical Community and Future Directions
SARS presents formidable challenges to the healthcare community, with medical, social, political, legal, and economic implications. All countries have to be prepared at a number of levels to deal with the threat posed by the SARS epidemic and any other novel infectious disease. The healthcare sector should consider the following issues:

1) SARS has emphasized the need for stringent infection control measures in hospitals on a regular basis, in anticipation of the next epidemic. While the measures may be in place, are we sure that they are being properly implemented at all times?

2) Healthcare workers should always follow simple but stringent hygienic practices (e.g., washing hands before and after seeing a patient), even when no epidemic is apparent.

3) Appropriate history taking when a patient with a fever is seen, to obtain important information such as recent travel history or contacts with possibly infected persons, could help to quickly identify persons at risk and reduce spread.

4) Given the association with a number of super-spreaders and renal dialysis patients, strict quarantine procedures should be implemented if such persons are suspected of having SARS.

5) The concepts of specificity and sensitivity need to be widely understood and applied. Although the need for rapid diagnostic tests is important, introducing tests with inadequate sensitivity and unknown specificity should be avoided, as the data cannot be interpreted. A negative test does not always exclude a disease, and discharging patients who are later diagnosed and readmitted could have serious consequences.

6) The use of high-risk medical procedures that may inadvertently spread the disease through aerosolization of the agent should be evaluated with potential new diseases in mind. Other high-risk procedures, such as intubation, cardiopulmonary resuscitation, and the use of positive airway pressure devices, should also be reconsidered with regard to infection control to limit risk.

7) Quarantine and isolation procedures and contact tracing need to be instituted early in the outbreak, and access to hospitals treating such patients needs to be restricted to limit spread into the community.

8) Environmental hygiene needs to be maintained. In the wake of the SARS outbreak, the Hong Kong government has introduced a number of measures to improve public hygiene, including closely monitoring the integrity of sewage disposal systems (deficiencies that were a possible source of the Amoy Gardens outbreak). The government has also increased penalties for spitting, which remains a commonplace habit.
As with the outbreak of avian influenza, in which humans became infected through the purchase of live poultry, a process that still continues, the SARS virus appears to have been contracted from an animal source (possibly civet cats [ 23 ]) used for human consumption. Close contact between humans and animal vectors in the southern China region has been responsible for a number of epidemics, including influenza A. A reduction in exposure to animal viral reservoirs should reduce the occurrence of such events. To that end, the Chinese government has increased implementation of laws that prevent the consumption of wild animals.
Timely communication and exchange of complete, accurate information are important during any epidemic. Difficulties in obtaining information from all relevant sources could delay appropriate analyses, reporting of the situation, and implementation of necessary actions. Plans for integration of appropriate agencies should be made in advance of any epidemic. The data collected should be two-tiered to include essential information required to control the outbreak, such as clinical details and contact information, as well as more detailed data that will enable ongoing or retrospective evaluation to determine, for instance, mode of transmission, which remains unconfirmed.
An epidemic like SARS has an impact on many sectors of society. Leadership is essential to coordinate activities and information dissemination in order to minimize confused messages and public panic. Coordination should be maintained with all relevant sectors, including health professionals, policymakers, community leaders, the media, and the public.
Early detection and handling systems need to be consolidated to prepare for future epidemics. To this end, the Hong Kong government has announced the allocation of HK$1 billion (US$1=HK$7.8) to fund a center for disease control. The role for the center remains to be clarified but should include monitoring for novel infections and research into existing agents. The center should also include an outbreak response unit that can be called on to spearhead coordinated action in a timely manner. The team should include infectious disease and public health specialists, epidemiologists, media spokespersons, administrators with suitable connections to frontline healthcare units, and other statutory bodies to enable collation and dissemination of important information and risk communication to relevant stakeholders. The unit will require legislative power to enable the rapid initiation of control measures both in the hospitals and the community.
As no prophylactic vaccine or specific proven treatment is yet available against SARS, preventive measures are the only way to avert epidemics. Communicating the risks and preventive measures in an effective and acceptable manner is important.
SARS has had a significant impact on the local healthcare system: a high proportion of patients required intensive care and prolonged hospitalization, overloading the system. Similarly, the ready transmission to hospital care workers reduced the availability of knowledgeable healthcare workers to treat other patients and colleagues, further limiting the ability of the hospitals to cope with the current outbreak. In summary, the current SARS outbreak provides a timely reminder of the importance of maintaining basic healthcare practices at all times so that, when the next new disease strikes, we are well prepared to deal with it. Establishing an outbreak response unit within the healthcare sector, with appropriate resources, should be a priority.

Dr. Abdullah is a research assistant professor in the Department of Community Medicine, University of Hong Kong. He is a physician specializing in public health medicine. His chief research interests include tobacco control and the epidemiology and prevention of infectious diseases.

Emerg Infect Dis. 2003 Sep;9(9):1042–1045.
PMC3016766 (PMID 14531385)

On June 17–18, 2003, in Kuala Lumpur, Malaysia, the World Health Organization (WHO) sponsored a conference entitled SARS: Where Do We Go From Here? The purpose of the conference, which was attended by over 900 scientific and public health experts from 43 countries, was to review available knowledge and lessons learned and to identify key priorities for the future. Three overarching questions were addressed: Can severe acute respiratory syndrome (SARS) be eradicated? Are current control measures effective? Are current alert and response systems robust enough?
The first day included summaries of the history of the epidemic; of the global and regional responses coordinated by WHO through its headquarters in Geneva and the Western Pacific Regional Office in Manila, respectively; and of the national responses in the People's Republic of China (PRC), including the Hong Kong Special Administrative Region of PRC, Singapore, Vietnam, Canada, and the United States. Nine presentations summarized scientific, clinical, public health, psychosocial, and communications aspects of the SARS outbreak. On the second day, breakout groups met and presented recommendations on the topics of epidemiology and public health, the possible role of animals, environmental issues, modeling the epidemic, clinical diagnosis and management, reducing transmission in healthcare settings, blood safety, reducing community transmission, preventing international spread, surveillance and response coordination, effective communication, and preparedness. Background materials for the conference, slide presentations from the plenary sessions (including the breakout group reports), and the text of speeches by the Director General of WHO and other dignitaries are available on the Web (URL: www.who.int/csr/sars/conference).
Beginning in March 2003, after WHO recognized, through its Global Outbreak Alert and Response Network (GOARN), an outbreak of severe respiratory illness with high transmissibility in healthcare settings and international spread through airline travel, WHO issued a series of global alerts, travel advisories, and recommendations for diagnosis, clinical management, and prevention of transmission. Evolving information was discussed by virtual networks of experts, including virologists, clinicians, and epidemiologists. Field teams composed of staff from GOARN partners were quickly mobilized to assist affected countries in enhancing surveillance and containment measures, which included isolating cases, implementing strict infection control measures, identifying and following-up with contacts, and making recommendations to travelers to prevent international spread.
From a global perspective, the SARS epidemic demonstrated the importance of a worldwide surveillance and response capacity to address emerging microbial threats through timely reporting, rapid communication, and evidence-based action. The importance of international collaboration coordinated by WHO and the need for partnerships among clinical, laboratory, public health, and veterinary communities were emphasized. From the national perspective, lessons learned included the need for the following: strong political leadership at the highest levels to mobilize the entire society; speed of action; improved coordination between national and district levels in countries with federal systems; increased investment in public health; updated legislation regarding surveillance, isolation, and quarantine measures; and improved infection control in healthcare and long-term-care facilities and at borders.
Can SARS Be Eradicated?
The breakout groups concluded that it is too soon to tell if SARS can be eradicated, but substantial reasons for concern exist. Chains of person-to-person transmission can likely be terminated, provided no reservoir of asymptomatic carriers, chronic infection, or seeding of new settings (e.g., Africa) exists. But if an animal reservoir of the SARS coronavirus exists, as suggested by some studies, eradication would be very difficult. Fecal shedding of virus by infected persons and apparent virus stability in the environment could pose additional barriers to eradication, although these circumstances were not major modes of transmission in the recent epidemic. Research priorities include better understanding of the epidemiologic and virologic parameters of infection and transmission, including “superspreading events”; the possible role of animals, including host range and factors leading to emergence; the environment; and analysis of the effectiveness of specific interventions in controlling the epidemic. Additional priorities include standardization of diagnostic assays and reagents, development of a reliable front-line diagnostic test for use early in illness; facilitating the ability to ship diagnostic specimens; and development of animal models to improve understanding of pathogenesis and evolution of clinical disease and to use in vaccine development and antiviral drug testing.
Are Current Control Measures Effective?
Currently recommended measures to prevent transmission in healthcare settings are generally effective when applied, but require proper infrastructure, training, and consistent practice. Infection control capacity and practice in many healthcare settings need improvement. A minimum global level of safe practice (standard precautions, supplemented by risk-based precautions) should be established. Studies are needed to determine optimal protective measures (e.g., type of mask) and when they should be used. Appropriate protective measures (e.g., isolation facilities and masks fit-tested for individual workers) should be more widely available.
Measures to control community transmission (i.e., outside of healthcare settings) and prevent international spread require further evaluation. Such measures include public information campaigns, contact tracing and sometimes quarantine, hotlines to report fever, temperature screening in public places, recommendations to travelers, and entry and exit screening at borders with questionnaires and temperature checks. Control measures in the community would likely have the greatest yield if focused on links between healthcare settings and the wider community, with contact tracing prioritized according to the nature of exposure, but further evaluation is needed. Home or institutional quarantines, when used, should ensure that the financial, psychosocial, and daily needs of the affected persons are met. Stigmatization of affected persons and groups was identified as an important issue. In an attempt to reduce stigmatization, one country's president reportedly proclaimed quarantined persons to be "heroes in the nation's battle against SARS." Some participants stated that visible measures to control community and international spread were important in restoring public and business confidence and as deterrents, regardless of the yield of SARS cases detected.
Are Current Alert and Response Systems Robust Enough?
Current systems are robust in that SARS is being controlled, but many processes are not sustainable because of limited capacity. Surveillance priorities include developing a sensitive “alert” case definition in areas at greatest risk for recurrence, developing a front-line laboratory diagnostic test to identify patients with SARS coronavirus infection during periods of high incidence of other respiratory illnesses, improving laboratory diagnostic capacity and laboratory-based surveillance, and developing integrated information tools that allow real time analysis of clinical, epidemiologic, and laboratory data.
Response coordination priorities include development of contingency plans, including ensuring coordination and surge capacity at global, regional, and national levels; development of laboratory and information technology systems; and the ongoing revision of the international health regulations to focus on containing emerging infectious diseases.
Underlying any response is the need to communicate information in a transparent, accurate, and timely manner. Effective communication requires training, understanding, and the use of a range of different media. Further developing current communication systems and our understanding of risk communication is vital if future outbreaks are to be controlled quickly and effectively and if the health, economic, and psychosocial effects of major health events are to be minimized.

Emerg Infect Dis. 2003 Sep;9(9):1191–1192.
PMC3016767 (PMID 14519251)

Methods
Data Source
From February 2, 1998, through February 15, 1999, the Emerging Infections Program’s Foodborne Diseases Active Surveillance Network (FoodNet) conducted a telephone-based population survey in Connecticut, Minnesota, and Oregon, and selected counties in California, Georgia, Maryland, and New York (total population 29 million). Each month, approximately 150 residents in each state were interviewed. After screening to remove business and nonworking telephone numbers, an outside contractor contacted respondents by telephone using a random-digit-dialing, single-stage sampling method ( 31 ).
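As a toy illustration of single-stage random-digit dialing (not the contractor's actual procedure; the exchange bank and screening rate below are invented for the sketch), the sampling step can be expressed as follows:

```python
import random

random.seed(42)

# Hypothetical frame of residential area-code/prefix banks.
RESIDENTIAL_EXCHANGES = [("860", "555"), ("503", "555"), ("404", "555")]

def draw_rdd_number():
    """Draw one random-digit-dialed number from the exchange frame."""
    area, prefix = random.choice(RESIDENTIAL_EXCHANGES)
    return f"{area}-{prefix}-{random.randint(0, 9999):04d}"

def is_working_residential(number):
    # Placeholder screen for business/nonworking numbers; in practice
    # this is resolved by dialing, not simulated.
    return random.random() < 0.5

sample = [n for n in (draw_rdd_number() for _ in range(1000))
          if is_working_residential(n)]
print(f"{len(sample)} working residential numbers, e.g., {sample[:3]}")
```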
The contractor conducted the interviews by using methods similar to those used in the Behavioral Risk Factor Surveillance System (32). All interviews were conducted in English. Using a standardized questionnaire, interviewers asked one respondent per household about his or her knowledge, attitudes, and recent practices regarding antibiotic use. All members of the household were eligible for selection. Institutional review boards at the Centers for Disease Control and Prevention and all participating states approved the study.
Interviewers obtained verbal consent from all study participants before administering the questionnaire. They told participants why the information was being collected and how it would be used and, before the start of the interview, read them a statement that their participation was voluntary. No personal identifiers were included in this dataset.
Survey Questionnaire
Five items (two questions and three statements) addressing participants’ knowledge, attitudes, and practices regarding antibiotic use were included in the survey. Recent antibiotic use referred to antibiotic use in the past 4 weeks. Respondents who took an antibiotic were asked whether the antibiotic was prescribed by their physician for a current illness or for a previous illness or if the antibiotic was prescribed for someone else. For the question, “Are you aware of any health dangers to yourself or other people associated with taking antibiotics?” respondents’ knowledge of health dangers associated with taking antibiotics was classified into the following categories: emerging drug resistance, allergies/reactions, antibiotics may kill “friendly”/“good” microbes, it is unhealthy to take drugs/chemicals in general, misuse/overuse of antibiotics, multiple reasons, other, don't know, or refused. Answers to survey items 1 and 5 were yes/no. For statements 2, 3, and 4, participants were asked to respond according to the following 5-point Likert scale: 1=strongly agree, 2=agree somewhat, 3=unsure, 4=disagree somewhat, and 5=strongly disagree. We classified those who answered “strongly agree” or “agree somewhat” to the antibiotic knowledge questions as having agreed and those who answered “strongly disagree” or “disagree somewhat” as having disagreed. Those who refused to answer a question were not included in the analysis.
In addition to eliciting participants’ responses to these questions, the survey also recorded demographic characteristics of the participants, including their sex, age, income level, education, race, state, and place of residence. Respondents’ place of residence was categorized as urban if they reported living in a city or town of > 50,000 residents. Presence of children in the household (yes/no), month of interview, and medical insurance status were also recorded. Respondents were classified as being “with insurance” if they reported any of the following as their type of insurance: health maintenance organization, preferred provider organization, traditional indemnity insurance, Medicaid, Medicare, or other. If respondents reported their type of insurance as “don’t know” or if they refused to answer the question, they were not included in the analysis.
To simplify our analysis, we coded persons indicating Hispanic ethnicity as Hispanic, even if they also identified themselves by race (e.g., a white-Hispanic male would be coded for race as Hispanic). For our multivariable analysis, we grouped persons identified as Asian, Pacific Islander, American Indian, or Alaskan Native into the category called “other.” We also added those who responded “don’t know” or “unsure” to the attitude questions to the “agree” group to divide respondents into two groups: those who responded correctly (disagree) and those who did not (agree or don’t know). For our multivariable logistic regression, we grouped respondents who answered “don’t know” to the question, “Are you aware of dangers associated with antibiotics?” with those who answered “no.” Persons responding “don’t know” to the question, “In the past 4 weeks, have you taken antibiotics?” were not included in the analysis. We evaluated respondents’ education and income levels as continuous variables.
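The recoding described in this section is straightforward to express in code. The sketch below is a hypothetical reconstruction in pandas (the column names and sample values are invented for illustration):

```python
import pandas as pd

# Invented example records; 1=strongly agree ... 5=strongly disagree.
df = pd.DataFrame({
    "attitude_item2": [1, 2, 3, 4, 5, 2],
    "aware_dangers":  ["yes", "no", "don't know", "yes", "no", "yes"],
})

# Collapse the 5-point Likert scale: 1-2 -> agree, 4-5 -> disagree.
likert_map = {1: "agree", 2: "agree", 3: "unsure",
              4: "disagree", 5: "disagree"}
df["item2_collapsed"] = df["attitude_item2"].map(likert_map)

# For the regression models, "unsure" is grouped with "agree", so that
# "disagree" marks the correct response; likewise, "don't know" on the
# awareness question is grouped with "no".
df["item2_incorrect"] = (df["item2_collapsed"] != "disagree").astype(int)
df["aware"] = (df["aware_dangers"] == "yes").astype(int)
print(df)
```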
Statistical Analysis
To compensate for respondents’ unequal probability of selection and allow population estimates to be made, we weighted the data following procedures from the Behavioral Risk Factor Surveillance System ( 33 ) and based our weighting on the number of residential phone numbers, the number of people per household, and the 1998 postcensus estimates for the age- and sex-specific population of the FoodNet sites (B. Imhoff, pers. comm.). We did not include race in the poststratification weight since some site-sex-age-race groups contained <10 survey participants.
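The weighting logic described above can be sketched roughly as follows (the actual Behavioral Risk Factor Surveillance System formulas are more involved; all numbers and column names here are illustrative only):

```python
import pandas as pd

df = pd.DataFrame({
    "n_phone_lines": [1, 2, 1],      # residential phone numbers in household
    "n_adults":      [2, 1, 4],      # eligible people per household
    "site_age_sex":  ["CT_25-39_F", "CT_25-39_F", "OR_60+_M"],
})

# Base weight: inverse probability of selection. A household with more
# phone lines is easier to reach; selecting one adult per household makes
# each adult's chance inversely proportional to household size.
df["base_weight"] = df["n_adults"] / df["n_phone_lines"]

# Poststratification: scale weights within each site-age-sex stratum so
# they sum to the 1998 postcensus population estimate for that stratum.
population = {"CT_25-39_F": 350_000, "OR_60+_M": 210_000}  # illustrative
stratum_totals = df.groupby("site_age_sex")["base_weight"].transform("sum")
df["weight"] = df["base_weight"] * df["site_age_sex"].map(population) / stratum_totals
print(df[["base_weight", "weight"]])
```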
We analyzed the data using SUDAAN (SUrvey DAta ANalysis, v7.5.2, Research Triangle Institute, Research Triangle Park, NC), a specialized statistical procedure for analyzing complex sample survey data, and ran the analysis using SAS (Statistical Analysis Software, v6.12; SAS Institute, Inc., Cary, NC). This software adjusts for the complexity of the sampling design (unequal weighting and clustering) and uses Taylor series linearization methods to estimate variances. Because the ratio of sample size to population size was small, we approximated the sample design by a "with-replacement" design for purposes of variance estimation in SUDAAN. Any bias resulting from such replacement sampling will be in the conservative direction.
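For readers without access to SUDAAN, the with-replacement Taylor linearization variance of a weighted proportion can be reproduced in a few lines. This is a simplified sketch, assuming a single-stage design in which each respondent is treated as a separate sampling unit (no clustering); the example data are made up:

```python
import numpy as np

def weighted_prop_and_se(y, w):
    """Weighted proportion with a Taylor-linearized, with-replacement SE."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    n = len(y)
    p_hat = np.sum(w * y) / np.sum(w)
    # Linearized contribution of each respondent to the ratio estimator;
    # these contributions sum to zero by construction.
    z = w * (y - p_hat) / np.sum(w)
    var = n / (n - 1) * np.sum(z ** 2)
    return p_hat, np.sqrt(var)

# Made-up data: 1 = took antibiotics in the past 4 weeks.
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.12, size=500)
w = rng.uniform(0.5, 3.0, size=500)
print(weighted_prop_and_se(y, w))
```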
We examined respondents’ attitudes toward, and awareness of, antibiotic use by their age, sex, race, income level, education, state, place of residence, medical insurance status, presence of children in household, and month of the interview. We then tested the relationships between respondents’ demographic characteristics and their responses to the questions and statements about antibiotics using chi-square tests for independence. We used the results of the bivariate analyses to develop two multivariable logistic regression models: 1) a model assessing the effects of respondents’ awareness of antibiotic dangers on their attitudes toward and expectations of antibiotics; and 2) a model assessing the influence of respondents’ attitudes on their recent antibiotic use.
Because of the complexity of the analyses, we used only second-degree product terms to assess interaction effects. Results of the logistic regression models are reported as odds ratios (ORs) with 95% confidence intervals (CIs). The level of significance was set at p=0.05.
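A rough open-source analogue of these models can be fit with statsmodels, using the survey weights as frequency-style weights. This sketch reproduces weighted point estimates but, unlike SUDAAN, does not produce design-based standard errors; all variable names and data below are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative data only; the outcome here is random, so ORs will be near 1.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "agrees_item2": rng.binomial(1, 0.3, n),   # outcome: agrees with statement
    "not_aware":    rng.binomial(1, 0.58, n),  # not aware of antibiotic dangers
    "age_group":    rng.choice(["18-24", "25-39", "40-59", "60+"], n),
    "female":       rng.binomial(1, 0.5, n),
    "weight":       rng.uniform(0.5, 3.0, n),  # survey weight
})

model = smf.glm(
    "agrees_item2 ~ not_aware + C(age_group) + female",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=df["weight"],   # weighted point estimates; SEs not design-based
).fit()

# Odds ratios with 95% confidence intervals from the fitted coefficients.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```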
Results

The sample consisted of 12,755 respondents: 7,254 females and 5,501 males. Of these 12,755, a total of 1,975 were <18 years old or of an unknown age and thus were excluded from the analysis (Table 1). Of the remaining 10,780 respondents, 12% reported taking antibiotics within the 4 weeks before the interview (Table 2). Those who took antibiotics within the prior 4 weeks were more likely to be female (13.9% overall, 65% of all who took antibiotics), have medical insurance (12.6%, p<0.01), and live in rural or farm areas (12.9% and 17.6%, respectively, p=0.02). In addition, antibiotic use varied by age group, with the highest use among persons 25–39 years old (13.2%) and those >60 (13.7%) (Figure). We found no significant differences in antibiotic use among groups defined by race, education level, income, state, month of interview, or having children in the household. Of those who took antibiotics (n=1,253), 91% reported using an antibiotic prescribed for a current infection, while 9% reported using an old prescription or someone else's. No demographic variable was significantly associated with whether respondents used antibiotics obtained to treat their own current illness.
Of the 10,780 respondents, 27% believed taking antibiotics when they had a cold prevented more serious illness (survey item 2, Table 2), 32% believed taking antibiotics when they had a cold made them recover more quickly (survey item 3), and 48% expected a prescription for antibiotics when they were ill enough from a cold to seek medical attention (survey item 4). Respondents agreeing with any one of these statements were significantly more likely (p<0.01) to be male, younger (18–24 years), nonwhite, not college educated, and earning <$30,000 per year (Figure). We also found significant differences by place of residence, with respondents living in rural or farm areas being more likely to agree with the statements. Respondents with children were more likely to agree with survey item 2 (28% vs. 26%), item 3 (34% vs. 31%), and item 4 (50% vs. 46%); all differences had p values <0.01. Responses varied among states (p<0.01), with residents of Maryland and Georgia consistently having higher levels of agreement than residents of the other study areas (item 2: 27% and 38% vs. 22%–26% in the other states; item 3: 35% and 41% vs. 26%–31%; item 4: 50% and 56% vs. 40%–48%). Agreeing with the statement, "By the time I am sick enough to see a doctor because of a cold, I usually expect a prescription for antibiotics," did not vary significantly by month of interview or health insurance status. However, not having insurance was significantly associated with agreement with the statements, "When I get a cold, antibiotics help me to get better more quickly" (42% vs. 27%, p<0.01), and "When I have a cold, I should take antibiotics to prevent getting a more serious illness" (40% vs. 25%, p<0.01). Being interviewed from September through January was also associated with agreeing with these statements (p<0.05 and p<0.02, respectively).
Fifty-eight percent of respondents were not aware of health dangers associated with taking antibiotics ( Table 2 ). Persons not aware of dangers associated with antibiotic use were significantly (p<0.01) more likely to be male and younger and to live in rural or farm areas. They were also significantly more likely to have less education, lower income, and no insurance ( Figure ). We found no association between awareness of the dangers of antibiotic use and the month of the interview or having children in the household. Of those aware of health dangers, 58% mentioned factors related to the emergence of drug resistance as a consequence of antibiotic use, 27% mentioned allergies/reactions, 9% recognized that antibiotics kill “good” microbes, and 5% agreed that “it is generally unhealthy to take antibiotics.”
Multivariable Analysis
Associations between Attitude Statements and Awareness of Dangers
We constructed three independent models to assess the relationship between participants’ knowledge of the dangers of antibiotics (and demographic characteristics) and each of the three different attitude statements as the outcome. Each of these relationships was significant in the univariate and multivariable analyses ( Table 3 ).
Participants not aware of adverse effects of antibiotic use were 2.5 times more likely to agree with the statement, “When I have a cold, I should take antibiotics to prevent getting a more serious illness” (95% CI 2.14 to 2.92). In addition, the demographic variables of age, sex, race, income level, education level, and state were all significant predictors of agreement. We also found significant interactions between the awareness variable and race and education, as well as interactions between age and gender.
We also found a significant association between participants' agreeing with the statement, "When I have a cold, antibiotics help me to get better more quickly," and their not being aware of health dangers associated with indiscriminate use of antibiotics (OR 2.29, 95% CI 1.99 to 2.65). Those agreeing with this statement were also more likely to be older (40–59 years old: OR 2.20, 95% CI 1.32 to 3.66; and >60 years old: OR 2.08, 95% CI 1.22 to 3.25).
Participants not aware of dangers were 1.96 times more likely to agree with the statement, “By the time I am sick enough to talk to or visit a doctor because of a cold, I usually expect a prescription for antibiotics” (95% CI 1.72 to 2.23). The other demographic variables in the model significantly associated with participants’ responses to this statement were age, sex, income level, education level, insurance, state, and place of residence.
Association between Antibiotic Use and Attitude Statements and Awareness of Dangers
Using another multivariable model, we examined the association between respondents’ taking antibiotics in the prior 4 weeks and their attitudes toward and knowledge of the adverse effects of antibiotic use ( Table 4 ). The overall model was adjusted for participants’ sex, age, education, race, household income, state, place of residence, child in the house, and insurance. After adjusting for these demographic variables, we found that only one attitude statement remained a predictor of recent antibiotic use. Participants agreeing with the statement, “When I have a cold, antibiotics help me to get better more quickly,” were 1.50 times more likely to have recently taken an antibiotic.
Paradoxically, participants aware of dangers related to antibiotic use were 1.37 times more likely to have taken antibiotics in the previous 4 weeks (95% CI 1.11 to 1.69) even though awareness of these dangers was not a univariate predictor of antibiotic use (OR 0.99, 95% CI 0.49 to 1.98). Of note, only one attitude statement was significant in predicting antibiotic use, suggesting that all of the statements are measuring similar things (Table 4).

Discussion
The results of this FoodNet survey showed that 12% of adult respondents had used antibiotics during the prior month, most (91%) of which were prescribed for a current infection. Extrapolating from the survey data (i.e., scaling the 12% 4-week prevalence to a full year), we estimate that every adult in the United States in 1998 used antibiotics an average of 1.4 times and that approximately 1 in 10 adults who used antibiotics did so without seeing a physician.
The results also suggest that people's knowledge and attitudes regarding antibiotic use can be substantially improved and that improved knowledge may be important for efforts to reduce the misconceptions and misguided expectations contributing to inappropriate antibiotic use. Overall, 53% of respondents to this population-based survey reported at least one misconception that may put them at unnecessary risk for infection with resistant bacterial pathogens, and 58% were not aware of the health dangers associated with antibiotic use. Nearly half (48%) of the respondents indicated that they expected an antibiotic when they visited a doctor.
This survey identified persons in demographic groups who had both higher levels of misconceptions and lower levels of knowledge about the potential adverse impact of antibiotics. These groups included persons of lower socioeconomic status, lower educational status, males, those in younger age groups, and the elderly. Efforts to reach these groups must be a part of any educational efforts to change patient expectations and to reduce the corresponding pressure on providers to prescribe antibiotics inappropriately.
The results of this study did not show a consistent direct link between misguided expectations and higher levels of recent antibiotic use. In part, this lack of association may have been due to the design of the survey, which focused on collecting frequency data and did not aim to define the reasons for antibiotic use. In addition, in our analysis, we found that the three attitude statements were similar measures of a person's opinions on antibiotic use: the statements have the same demographic predictors and the same association with the knowledge variable and, in reality, appear to measure the same thing (Table 3).
We did not find an association between recent antibiotic use and lower knowledge levels. Before the analysis, we assumed that persons lacking knowledge about the dangers associated with antibiotic use would be more likely to take antibiotics. However, we found that study participants aware of these health dangers were actually more likely to have taken antibiotics in the prior 4 weeks. Persons of higher socioeconomic status (higher education and income) have better access to health care and are more likely to use antibiotics in general; we did find that people who took an antibiotic recently were more likely to have medical insurance. Another possible explanation is that those who recently took antibiotics may have learned about the adverse effects of antibiotic use from their physician or pharmacist or from their personal experience with antibiotic side effects. Future epidemiologic studies of antibiotic use in diverse populations should be designed to collect information on why participants use antibiotics to distinguish between appropriate and inappropriate antibiotic use.
This type of study has several other important limitations. A telephone survey creates the possibility of selection bias because it may not reflect the population being surveyed ( 32 ). In addition, the survey catchment population did not include persons who refused to participate, did not have a telephone, did not speak English, or could not respond because of physical or mental impairment. However, the weighting process adjusted for age- and sex-based differences in rates.
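The weighting referred to here is, in general terms, a post-stratification adjustment in which each demographic cell is weighted by its population share divided by its sample share. A minimal sketch with hypothetical age-sex cells (the survey's actual strata and shares are not reproduced here):

```python
# Post-stratification weighting sketch: weight each respondent cell by
# population share / sample share. All shares below are hypothetical.
population_share = {("F", "18-34"): 0.15, ("F", "35+"): 0.36,
                    ("M", "18-34"): 0.14, ("M", "35+"): 0.35}
sample_share     = {("F", "18-34"): 0.12, ("F", "35+"): 0.43,
                    ("M", "18-34"): 0.10, ("M", "35+"): 0.35}

weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}
for cell, w in weights.items():
    print(cell, round(w, 2))   # >1 means the cell was underrepresented
```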
Another limitation is the cross-sectional nature of this study. Each participant was assessed only once, and the study was not designed to detect recent changes in opinion. Furthermore, the indicators used measured self-reported behavior not actual behavior. We did not attempt to validate responses on the basis of actual observation, and the survey did not determine whether the antibiotic use was appropriate.
Additionally, respondents may have misunderstood the statements about colds and antibiotics. For example, if they had previous experience with what they thought was a cold, and a physician diagnosed a bacterial ear infection, they may have responded that antibiotics help them get better more quickly when they have a cold ( 17 ). In addition, several studies have shown that patients often do not have accurate knowledge of antibiotics ( 15 , 34 ). Hong et al., for example, found that patients often could not identify whether a medication was an antibiotic or not and that many patients considered “antibiotics” to be any prescription medication ( 34 ).
This study focused only on antibiotic use among adults. Antibiotic use is, however, highest among children, as is the potential for its misuse. In fact, we found that respondents with children in the household were more likely to agree with the attitude statements, suggesting that it is often parents who influence their children’s perceptions of antibiotic use.
The results of this analysis demonstrate that population-based surveys can contribute to efforts to monitor and reduce inappropriate antibiotic use. The magnitude of recent antibiotic use among adults, together with the widespread lack of awareness of and inappropriate attitudes toward such use, indicates that continued population-based surveys could be useful in efforts to monitor trends in antibiotic use. Furthermore, such surveys have the potential to effectively monitor antibiotic knowledge, attitudes, and practices among demographic subgroups of concern. Knowing the magnitude of the problem and the groups who misuse antibiotics most frequently will help public health officials develop and fund intervention efforts, including public information campaigns.
However, our findings also point out some important issues that need to be addressed if this surveillance tool is to be used to full effect. First, additional population-based studies are needed not only to measure antibiotic use but also to determine the reasons that people use antibiotics. Such studies should explore the motivations, expectations, and incentives that lead persons to use or not use antibiotics. Second, future studies should include more clearly defined measures of patients’ knowledge. Better measures of knowledge may involve asking respondents to differentiate between antibiotics and other types of prescription medicine and to identify types of infections requiring antibiotics. A more thorough evaluation of respondents’ attitudes may also be useful. To this end, focus groups may help develop questions that better monitor the general population’s attitudes toward antibiotics. Finally, longitudinal tracking of these types of studies will provide important information for the assessment of public health programs.

Recent antibiotic use is a risk factor for infection or colonization with resistant bacterial pathogens. Demand for antibiotics can be affected by consumers’ knowledge, attitudes, and practices. In 1998–1999, the Foodborne Diseases Active Surveillance Network (FoodNet) conducted a population-based, random-digit dialing telephone survey, including questions regarding respondents’ knowledge, attitudes, and practices of antibiotic use. Twelve percent had recently taken antibiotics; 27% believed that taking antibiotics when they had a cold made them better more quickly, 32% believed that taking antibiotics when they had a cold prevented more serious illness, and 48% expected a prescription for antibiotics when they were ill enough from a cold to seek medical attention. These misguided beliefs and expectations were associated with a lack of awareness of the dangers of antibiotic use; 58% of patients were not aware of the possible health dangers. National educational efforts are needed to address these issues if patient demand for antibiotics is to be reduced.
Antimicrobial resistance is a rapidly increasing problem in the United States and worldwide. A well-documented risk factor for infection or colonization with resistant bacterial pathogens is recent antibiotic use, particularly within 4 weeks or 1 month before exposure ( 1 – 6 ). As a result, one of the primary strategies to prevent and control the emergence and spread of resistant organisms is to reduce the selective pressure of overuse and misuse of antibiotics in human medicine ( 7 ).
Several studies have identified and examined specific causes of the misuse of antibiotics, including unnecessary prescribing ( 8 – 14 ) and patient demand ( 15 – 17 ). Factors contributing to inappropriate prescribing practices have been elucidated. In particular, numerous studies of adults have shown that patients’ expectations or physicians’ perceptions of those expectations affect the physicians’ prescribing behavior ( 10 , 13 , 16 – 24 ).
To solve the problem of antibiotic misuse, a more thorough understanding of what influences the development and expression of patients’ expectations must be gained. Understanding patients’ knowledge, attitude, and practices may facilitate more effective communication between the clinician and patient, as well as aid in the development of strategies to educate patients and the public ( 25 ). Several lines of evidence suggest educational interventions directed at patients and clinicians can increase patients’ knowledge and awareness, as well as reduce the frequency with which clinicians prescribe antibiotics inappropriately ( 26 – 30 ).
Our investigation, an analysis of data from a national population-based cross-sectional survey, provides a glimpse of the current knowledge, attitudes, and practices regarding antibiotic use among patients. We also attempt to identify demographic characteristics associated with particular knowledge, attitudes, and practices and to determine whether a person’s attitudes toward and knowledge of risks associated with taking antibiotics are associated with recent antibiotic use. Identifying subgroups of the population with high levels of antibiotic use and with misconceptions about antibiotic use will help public health officials target and track the impact of interventions. Other information obtained from this population-based survey will provide further insight for the development and evaluation of health education and prevention strategies.

Acknowledgments
We thank the Emerging Infections Program’s Foodborne Diseases Active Surveillance Network (FoodNet) for providing the data we used in this analysis and the many persons who contributed to the analysis.
Ms. Vanden Eng is a master’s degree candidate in biostatistics at the University of Michigan, Ann Arbor, Michigan. She conducted this study while working on a master of public health degree in infectious disease epidemiology at Yale University School of Public Health, New Haven, Connecticut.

Emerg Infect Dis. 2003 Sep;9(9):1128–1135.
Materials and Methods
Vaccine
The plasmid DNA, pCBWN, codes for the prM and E glycoproteins of WNV. The plasmid was purified from Escherichia coli XL-1 blue cells with EndoFree Plasmid Giga Kits (QIAGEN, Inc., Santa Clarita, CA) and suspended in 10 mM Tris buffer, pH 8.5, at a concentration of 10.0 mg/mL. For IM vaccination, the DNA vaccine was formulated in phosphate-buffered saline (PBS), pH 7.5, at a concentration of 1.0 mg/mL. For oral exposure, the dry-microencapsulated DNA was suspended in PBS, pH 7.5, at a concentration of 2.0 mg/mL.
Microencapsulation
The method for microencapsulating DNA was adapted from procedures previously described for virus and subunit vaccines and isolated proteins ( 14 – 16 ). We performed all steps with sterile reagents and aseptic technique. Two 10-mg aliquots of WNV cDNA were transferred to separate test tubes with enough water to make 9-mL volumes. Resulting suspensions were mixed on a clinical rotator until solution was complete; 1 mL of 0.6% w/v aqueous sodium alginate (Fluka Chemical Co., Ronkonkoma, NY) solution was added to each tube, and the contents of each were gently inverted 20 times. Each DNA/alginate solution was pumped at 1.2 mL/min through a 76-μm orifice in a 1-mm internal diameter glass tube against the side of which a 20-kHz laboratory sonicator probe was firmly pressed. The emerging train of droplets was directed into a modified T-tube, through which a recirculated 40 mL of 0.25% w/v neutral aqueous spermine hydrochloride (Sigma-Aldrich Corp., St. Louis, MO) solution was pumped at 10 mL/min. A placebo microcapsule formulation was prepared by using alginate reagent without DNA. Resulting microcapsule suspensions were allowed to equilibrate for 30 min, pelleted at 500 × g for 20 min, and washed three times by decanting, suspending, and repelleting. Wash liquids were reserved for measuring the DNA that escaped encapsulation. Placebo and vaccine formulations and washes were frozen at –20°C and lyophilized overnight, then suspended in 5 mL of PBS to produce a final concentration of 2 mg/mL of the encapsulated DNA.
Crows
Fish crows ( C. ossifragus ) were captured with a rocket-propelled net at various locations in Maryland. Birds were transported to a biosafety level 3 laboratory at the U.S. Army Medical Research Institute of Infectious Diseases, allocated into four groups, and placed in stainless steel cages (3–4 birds/cage); blood was collected and screened for antibodies against flaviviruses. Birds were provided a mixture of cat and dog food and water ad libitum. This diet was supplemented with hardboiled eggs and vitamins.
Plaque Assay
Serial 10-fold dilutions of the blood samples from each crow were made in standard diluent (10% heat-inactivated fetal bovine serum in medium 199 with Earle’s salts, NaHCO 3 , and antibiotics). These samples were tested for infectious virus by plaque assay on Vero cells in 6-well plates (Costar, Inc., Cambridge, MA) as previously described ( 17 ), except that the second overlay, containing neutral red stain, was added 2 or 3 days after the first overlay.
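As background on how such titrations are read, the sketch below converts a plaque count from a single countable well into PFU/mL; the function and the example values are illustrative, not data from this study.

```python
def titer_pfu_per_ml(plaque_count, dilution, volume_ml=0.1):
    """Infectious titer implied by one countable well: plaques divided by
    the product of the final dilution factor and the volume plated."""
    return plaque_count / (dilution * volume_ml)

# Hypothetical example: 42 plaques in a well inoculated with 0.1 mL of
# the 10^-3 dilution implies 4.2 x 10^5 PFU/mL in the original sample.
print(f"{titer_pfu_per_ml(42, 1e-3):.1e} PFU/mL")
```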
Plaque-Reduction Neutralization Assay
Serum samples were assayed for WNV-specific antibodies by using the plaque-reduction neutralization test (PRNT), as previously described ( 18 ). Briefly, each serum sample was diluted 1:10 in standard diluent (as above) and mixed with an equal volume of BA1 (composed of Hanks’ M-199 salts, 1% bovine serum albumin, 350 mg/L of sodium bicarbonate, 100 U/mL of penicillin, 100 mg/L of streptomycin, and 1 mg/L of fungizone in 0.05 M Tris, pH 7.6) containing a suspension of WNV (NY99-4132 strain) at a concentration of approximately 200 plaque-forming units (PFU)/0.1 mL, such that the final serum dilution was 1:20 and the final concentration of WNV (the challenge dose) was approximately 100 PFU/0.1 mL. After a 1-h incubation at 37°C, we added the serum/virus mixtures onto Vero monolayers in 6-well plates, 0.1 mL per well in duplicate. We determined the mean percentage of neutralization for each specimen by comparing the number of plaques that developed (see Plaque Assay section) relative to the number of plaques in the challenge dose, as determined by back titration. Preliminary samples were screened for antibodies to WNV in the same manner, as well as for neutralizing antibodies to St. Louis encephalitis virus, a closely related flavivirus that may cross-react serologically with WNV ( 19 ) and may partially protect against WNV infection ( 20 ).
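The percent-neutralization value is a plaque-reduction ratio against the back-titrated challenge dose. A minimal sketch with hypothetical well counts:

```python
def percent_neutralization(serum_well_counts, challenge_plaques):
    """Mean percent plaque reduction across replicate wells, relative to
    the back-titrated challenge dose (plaques per well)."""
    mean_plaques = sum(serum_well_counts) / len(serum_well_counts)
    return 100.0 * (1.0 - mean_plaques / challenge_plaques)

# Hypothetical duplicate wells of 18 and 22 plaques against a back-titrated
# challenge dose of 100 plaques per well gives 80% neutralization:
print(percent_neutralization([18, 22], 100))  # 80.0
```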
Experimental Design
The crows were placed in four groups: 1) those inoculated IM with vaccine, 2) those that received oral vaccine, 3) positive controls (i.e., those that received placebo inoculation and viral challenge), and 4) room controls (i.e., those that received placebo inoculation and placebo challenge). After an acclimatization period of approximately 1 month, the 10 crows in group 1 (9 fish crows and 1 American crow [ C. brachyrhynchos ]) were inoculated IM with 0.5 mg of the DNA vaccine in a total volume of 0.5 mL (0.25 mL in each breast). The 9 crows in group 2 (8 fish crows and 1 American crow) were given 0.5 mg of the encapsulated DNA vaccine orally in 0.25 mL of PBS, and 20 fish crows (groups 3 and 4) were each inoculated and orally exposed as above except that a placebo was used in place of the vaccine. Blood was collected weekly from the jugular vein and the serum tested for neutralizing antibodies to WNV. Six weeks after vaccination, all birds in groups 1, 2, and 3 were inoculated subcutaneously with 0.1 mL of a suspension containing 10^5 PFU (10^6 PFU/mL) of the 397-99 strain of WNV, which had been isolated from the brain of an American crow that died in New York City during the fall of 1999 and passaged once in Vero cells before use in this study. The crows in group 4 were inoculated with 0.1 mL of diluent. Three or four crows in each group were bled (0.1 mL) from the jugular vein each day; each bird was bled every third day. Blood samples were added to 0.9 mL of diluent + 10 U of heparin/mL. Blood samples were frozen at –70°C until tested for infectious virus by plaque assay.

Results
Serologic Response
Neutralizing antibodies to WNV at the 80% neutralization level developed in 5 of the 9 fish crows that received the vaccine by the IM route by 14 days after vaccination, whereas neutralizing antibodies to WNV did not develop in any of the remaining fish crows (8 orally exposed to vaccine and 20 placebo-exposed) in the same time period ( Table 1 ). An antibody response at the 78% level developed in one additional IM-vaccinated fish crow. Thus, a serologic response developed in six (67%) of the nine fish crows that received the vaccine by the IM route. However, by day 42 after vaccination, none of these crows retained a response at the 80% neutralization level.
Viremia Profiles and Survival
All of the mock-challenged crows survived. Similarly, all nine fish crows that received the IM vaccine survived ( Table 1 ). However, 5 of 10 fish crows that received the placebo vaccine and 4 of 8 fish crows that received the oral vaccine died when challenged with virulent WNV. The difference in survival rates between the fish crows that received the IM vaccine and either of the other two groups was significant (Fisher exact test, p < 0.03). A veterinary pathologist examined all crows that died during these studies, and signs typical of WNV infection in avian hosts (i.e., heart necrosis) were observed in all of these birds. (These data will be described in a separate article on WNV viral pathogenesis in fish crows.) Viremias were detected in all 10 crows that received the placebo inoculation, 7 of 8 fish crows that received the oral vaccine, and 6 of 9 fish crows that received the vaccine by the IM route ( Table 1 ). Virus was not detected in any of the crows that received the placebo challenge. Mean log10 peak viremia titers were significantly lower (T > 2.75, df > 15, p < 0.017) in the fish crows that received the vaccine by the IM route (mean ± SE = 2.9 ± 0.4) than in fish crows that received the placebo vaccine (mean ± SE = 4.3 ± 0.3) or fish crows that received vaccine by the oral route (mean ± SE = 5.2 ± 0.8). The mean peak viremia titers for fish crows that received the placebo vaccine or the DNA vaccine by the oral route were not significantly different (T=1.1, df=16, p=0.287). In both the oral vaccine and placebo groups, fish crows that died had higher viremia than those that survived their infection with WNV ( Table 2 ). Because birds were bled only every third day, accurately determining the duration of viremia in individual fish crows was not possible. Viremias were detected on days 1 to 6 after infection, and 9 of 10 birds that were viremic on day 1 were still viremic on day 4. However, only five of eight birds that were viremic on day 2 were still viremic on day 5, and only three of six birds that were viremic on day 3 were still viremic on day 6. No birds were viremic 7 days after infection. Thus, most viremias apparently lasted approximately 5 days during this study.
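These group comparisons can be reproduced with standard tests. The sketch below applies scipy's Fisher exact test to the reported survival counts and illustrates the t-test on log10 peak titers with placeholder values, since individual titers are not listed here; exact p-values may differ slightly from the authors' depending on the two-sided convention used.

```python
from scipy.stats import fisher_exact, ttest_ind

# Survival after challenge, (died, survived), as reported above:
im_vaccine = (0, 9)
placebo = (5, 5)
oral = (4, 4)

for label, group in [("IM vs placebo", placebo), ("IM vs oral", oral)]:
    _, p = fisher_exact([im_vaccine, group])
    print(f"{label}: two-sided p = {p:.3f}")   # both near 0.03

# Two-sample t-test on log10 peak viremia titers; these arrays are
# hypothetical placeholders consistent with the reported group means.
im_log_titers = [2.1, 2.7, 3.0, 3.3, 2.9, 3.4]
placebo_log_titers = [3.9, 4.1, 4.6, 4.0, 4.7, 4.3, 4.2, 4.5, 4.1, 4.6]
t, p = ttest_ind(im_log_titers, placebo_log_titers)
print(f"t = {t:.2f}, two-sided p = {p:.4f}")
```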
Discussion

Although the DNA vaccine failed to induce a long-lasting immune response, fish crows vaccinated with this vaccine by the IM route all survived challenge with virulent WNV. In contrast, oral administration of this vaccine failed to elicit an immune response and did not protect fish crows from a lethal challenge with WNV. The death rate in these crows (4 [50%] of 8) was identical to that observed in the placebo-vaccinated group (5 [50%] of 10) and in a second group of unvaccinated fish crows (4 [50%] of 8) tested later (M.J. Turell and M. Bunning, unpub. data). Although no deaths occurred in the IM-vaccinated fish crows, low-level viremia, consistent with that observed in the birds that survived their WNV infection in the other groups, did develop in six of the nine crows. Therefore, a single dose of the DNA vaccine did not elicit complete protection and sterile immunity to WNV infection. Additional studies need to be conducted with multiple doses administered by the IM as well as the oral route to determine whether multiple doses might provide greater protection against WNV infection.
During the course of these studies, we determined that we had two American crows mixed in with the fish crows, one in the oral and one in the IM-vaccinated groups. High viremias (>10^6 PFU/mL of blood) developed in both of these crows, and they died after challenge with virulent WNV. These data, based on a single bird in each group, were not included in the data presented in this report. Both hooded crows ( 8 ) and American crows ( 10 ) are highly susceptible to infection with WNV, with nearly 100% case-fatality rates. In contrast, fish crows appear to be less likely to succumb to the infection.
The continued spread of WNV infection across the United States and reported deaths in raptors and rare captive birds in zoologic parks indicate the need to develop an effective avian vaccine for WNV. To break the transmission cycle, the vaccine must be able to substantially reduce the level of viremia below the level needed to infect a feeding mosquito, which is about 10^5 PFU/mL of blood ( 21 ). By this standard, the vaccine performed reasonably well, with no vaccinated fish crow having a recorded viremia > 10^4.7. In contrast, 3 of 10 placebo-vaccinated fish crows had viremias > 10^5 PFU/mL of blood, and 5 of 10 had a peak viremia > 10^4.8 PFU/mL of blood. However, because the crows were bled only every third day, determining the actual peak viremias in these birds was not possible. If the goal of the vaccine is to protect rare and endangered avian species from death, rather than to prevent transmission, then the occurrence of a low-level viremia is not critical.

A DNA vaccine for West Nile virus (WNV) was evaluated to determine whether its use could protect fish crows ( Corvus ossifragus ) from fatal WNV infection. Captured adult crows were given 0.5 mg of the DNA vaccine either orally or by intramuscular (IM) inoculation; control crows were inoculated or orally exposed to a placebo. After 6 weeks, crows were challenged subcutaneously with 10^5 plaque-forming units of WNV (New York 1999 strain). None of the placebo inoculated–placebo challenged birds died. While none of the 9 IM vaccine–inoculated birds died, 5 of 10 placebo-inoculated and 4 of 8 orally vaccinated birds died within 15 days after challenge. Peak viremia titers in birds with fatal WNV infection were substantially higher than those in birds that survived infection. Although oral administration of a single DNA vaccine dose failed to elicit an immune response or protect crows from WNV infection, IM administration of a single dose prevented death and was associated with reduced viremia.
West Nile virus (WNV), a mosquito-borne flavivirus, was recognized for the first time in the Western Hemisphere during summer 1999 in New York City and was associated with human, equine, and avian deaths ( 1 – 4 ). This virus is transmitted by a variety of mosquito species, mostly in the genus Culex ( 5 – 7 ). The New York 1999 strain of WNV differed genetically from other known strains of WNV except for an Israeli strain isolated from a dead goose in Israel in 1998 ( 1 ). With the exception of a laboratory study in Egypt involving hooded crows ( Corvus corone ) and house sparrows ( Passer domesticus ) ( 8 ), only these two nearly identical strains are known to kill birds ( 9 , 10 ). In 2000, WNV was detected in >4,000 bird carcasses in the United States ( 11 ), and the overall mortality rate was considered much greater. Several deaths attributed to WNV in the United States have occurred in valuable captive birds in zoologic collections ( 12 ). Currently, no treatment or vaccine is available for susceptible birds.
Vaccination may protect birds from lethal WNV infections. Accordingly, we examined a DNA vaccine developed for use in horses ( 13 ) for its ability to protect crows, a species known to be highly susceptible to lethal infection with this virus ( 8 , 10 ).

Acknowledgments
We thank R. Lind, P. Rico, and S. Duniho for their excellent assistance in caring for the crows; R. Schoepp for his assistance in bleeding the crows; D. Dohm and J. Velez for technical assistance; J. Blow, R. Schoepp, C. Mores, P. Schneider, and K. Kenyon for editorial assistance; E. Peterson for administrative support; D. Rohrback, S. Bittner, K. Musser, B. Riechard, M. Castle, C. Dade, G. Timko, and B. Davenport for their work in capturing the crows; and the staff at Umberger Farm for allowing us to use their property for the fieldwork.
This work was supported by a grant from The American Bird Conservancy, Pesticides and Birds Campaign, Washington, D.C.
Research was conducted in compliance with the Animal Welfare Act and other Federal statutes and regulations relating to animals and experiments involving animals and adheres to principles stated in the Guide for the Care and Use of Laboratory Animals, National Research Council, 1996. The facility where this research was conducted is fully accredited by the Association for Assessment and Accreditation of Laboratory Animal Care, International.
Dr. Turell is a research entomologist at the United States Army Medical Research Institute of Infectious Diseases, Fort Detrick, Maryland. His research interests focus on factors affecting the ability of mosquitoes and other arthropods to transmit viruses.

Emerg Infect Dis. 2003 Sep;9(9):1077–1081.
Walls (1896)

Without consideration, without pity, without shame
they have built big and high walls around me.
And now I sit here despairing.
I think of nothing else: this fate gnaws at my mind;
for I had many things to do outside.
Ah why didn’t I observe them when they were building the walls?
But I never heard the noise or the sound of the builders.
Imperceptibly they shut me out of the world.
From The Complete Poems of Cavafy, copyright 1961, renewed 1989 by Rae Dalven, reprinted by permission of Harcourt, Inc.
Van Gogh painted The Prison Courtyard while “imprisoned” himself, in the Saint-Paul-de-Mausole asylum in Saint Rémy. He died 5 months later of a self-inflicted gunshot wound, the culmination of his long struggle with physical and mental illness ( 1 ).
The first Dutch master since the 17th century, van Gogh did not become an artist until 10 years before the end of his life. His early interests were literature and theology. The son of a protestant minister, he had a strong sense of mission, which was reinforced by his dislike of industrial society and his work as lay preacher among poor coal miners in Belgium. That experience with human misery influenced his art, which had an air of mysticism and austerity and, like that of his contemporary Honoré Daumier, often featured the oppressed and the downtrodden (e.g., The Potato Eaters, 1885) ( 2 ).
Van Gogh was influenced by Degas, Seurat, and other leading French artists. His use of pure color to define content places him ahead of his contemporaries as a forerunner of expressionism. “I will paint with red and with green the terrible passions of humanity,” he wrote. “...Instead of trying to reproduce exactly what I see before my eyes, I use color more arbitrarily to express myself forcibly.” As if he knew that his artistic career would be very brief, van Gogh painted with a sense of urgency, almost scarring the canvas with thick overlaid brush strokes that were distinct, deliberate, and intense. He wanted “to exaggerate the essential and to leave the obvious vague” ( 3 , 2 ). His paintings became vehicles of his emotional condition, shifting like his moods from the brightest landscapes (Sunflowers, Irises) to the darkest manifestations of personal symbolism (Starry Night, Wheat Field with Crows).
Van Gogh created his best work between 1888 and 1890, when he went to Arles in the south of France ( 3 ). There, in the Mediterranean countryside, he painted landscapes of almost pulsating light, as he tried to convey the spiritual meaning he believed animated all things. Exaggerating his figures with vibrant hues, he set them against thick, rough circles like halos, freeing them from static background and hurling them into infinity. Wishing to form an artists’ colony in the region, he invited Paul Gauguin to join him “in this kingdom of light.” The brief collaboration (less than 2 months) produced many noted works but ended abruptly when van Gogh became violent, attacked his friend with a straight razor, then in remorse cut off his own ear and offered it to a local prostitute.
Gauguin departed for Tahiti, while van Gogh, his health in a downward spiral, entered the asylum in Saint Rémy, where with the approval of his doctors he continued to paint. No one knows the cause of van Gogh’s angst. His seizures, hallucinations, violent mood swings, and increasing anxiety have been variously diagnosed as neurologic disorder, depression, alcoholism, venereal disease, chemical or metabolic imbalance, and behavioral disorder possibly caused by a virus ( 4 ). While illness largely defined what the artist could and could not do (he only painted when he was lucid), art gave him reason to continue living. Even in confinement, his work extended far beyond his personal circumstance.
The Prison Courtyard on this month’s cover of Emerging Infectious Diseases expresses the artist’s hopelessness and despair. In the lower part of the painting, thirty-three inmates form a human corona, pacing heads down, in defeated rote and joyless resignation. In spite of the shared misery and monochrome prison garb, they are not uniformly anonymous; some faces can be deciphered, particularly the one in the center, whose blond hair is lighted by an imperceptible sun’s ray. That is van Gogh himself in what has been interpreted as a “metaphoric self-portrait” ( 1 , 5 ).
Merged with the pavement, the prison walls loom high above the inmates’ heads, overpowering the canvas with finality and forcefulness. The harsh, impenetrable structure, so devoid of beauty, encages the inmates outside the common web of human interaction. These literal walls painted by van Gogh “in captivity” allude to the harsher metaphorical walls of his unknown illness and his spiritual isolation.
The causes of aberrant behavior that leads to imprisonment are largely unknown, as are the causes of many diseases and their consequent spiritual isolation. When an old microbe, a coronavirus, causes a new disease, severe acute respiratory syndrome, the unknown nature of the disease and the risk for contagion require containment to arrest spread of infection. To prevent and control physical illness, the exposed and the infected (in the case of SARS many healthcare workers) may also have to face spiritual isolation.

Emerg Infect Dis. 2003 Sep;9(9):1194–1195.
Methods
We compared the proportions of drug-resistant S. pneumoniae isolates reported by participating clinical laboratories from the Active Bacterial Core Surveillance (ABCs) sites to proportions obtained by aggregation of existing antibiograms produced by the same ABCs laboratories.
Active Laboratory-Based Surveillance
ABCs, a laboratory-based active surveillance system in CDC’s Emerging Infections Program, tracks invasive disease caused by S. pneumoniae and other bacterial pathogens of public health importance ( 12 ). Surveillance areas included in this analysis were: California (CA) (San Francisco County), Connecticut (CT) (entire state), Georgia (GA) (20-county area, including Atlanta), Maryland (MD) (6-county area including Baltimore), Minnesota (MN) (7 counties), New York (NY) (7 counties), Oregon (OR) (3-county area including Portland), and Tennessee (TN) (5 counties). The total population under surveillance was 17 million. A case of invasive pneumococcal disease was defined as the isolation of S. pneumoniae from a normally sterile site (e.g., blood, cerebrospinal fluid) from a surveillance area resident. Surveillance personnel routinely contacted all clinical microbiology laboratories in their site to identify cases and conducted audits every 6 months to ensure complete reporting.
Pneumococcal isolates collected through ABCs were sent to reference laboratories for susceptibility testing by broth microdilution according to the methods of the National Committee for Clinical Laboratory Standards (NCCLS) ( 13 ). Isolates were defined as susceptible, having intermediate resistance, or resistant to agents tested according to NCCLS definitions ( 14 ).
Antibiograms
We requested existing antibiograms from all clinical laboratories participating in ABCs. The antibiograms were to cover the most recent 12-month period for which completed data were available at the time of the inquiry (1997 for GA, TN, CA, MN, OR, MD, and CT; 1998 for NY). Any identifying information (e.g., hospital name) obtained during collection of antibiogram data was removed before the data were forwarded to study investigators at CDC. Surveillance personnel also used a standardized questionnaire to query each hospital’s infection control practitioner or microbiology supervisor regarding the production and distribution of local antibiograms and whether antibiogram data included sterile site isolates, nonsterile site isolates, or duplicate isolates from a single patient.
We compiled total numbers of S. pneumoniae isolates identified from the ABCs sites along with the percent of intermediate and resistant isolates, focusing on nonsusceptibility to penicillin, macrolides, and extended-spectrum cephalosporins (e.g., cefotaxime, ceftriaxone). We defined nonsusceptible isolates as those that were of intermediate and high-level resistance or that were simply described as not susceptible to the antibiotic tested. We aggregated data obtained from the participating hospitals within each ABCs site to produce summary antimicrobial susceptibility percentages. When generating tables comparing percent of nonsusceptible pneumococcal isolates estimated by antibiograms and by ABCs, we used only antibiogram data for the year in question (1997 for all sites excluding New York [1998]); antibiograms covering other periods were excluded from this portion of the analysis. Also, we used only antibiograms that included both the total number of isolates tested and the percent nonsusceptible for each of the antibiotics evaluated; this system allowed for aggregation of the laboratory’s data with those from other laboratories. If only a subset of isolates were tested against erythromycin and extended-spectrum cephalosporins, we excluded these results from the aggregated total for erythromycin, cephalosporins, or both. We also calculated the percent of laboratories that included S. pneumoniae susceptibility testing to a variety of other antimicrobial agents and the percent of laboratories generating antibiograms that included susceptibility testing of gram-negative bacteria.
To compare the proportions of resistant and susceptible S. pneumoniae isolates detected by the two surveillance methods, we examined the proportion of hospitals whose aggregated antibiogram data fell within ±5 and ±10 percentage points of the proportions detected through active surveillance.
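Operationally, the aggregation amounts to pooling each laboratory's percentage weighted by the number of isolates it tested, then checking the pooled figure against the active-surveillance estimate. A minimal sketch with hypothetical laboratory data:

```python
def aggregate_antibiograms(labs):
    """Pooled percent nonsusceptible across laboratories; each lab is a
    (number of isolates tested, percent nonsusceptible) pair, weighted
    by its number of isolates."""
    total_tested = sum(n for n, _ in labs)
    total_nonsusceptible = sum(n * pct / 100.0 for n, pct in labs)
    return 100.0 * total_nonsusceptible / total_tested

# Hypothetical site: four laboratories' penicillin antibiograms.
site_labs = [(120, 25.0), (80, 31.3), (310, 22.6), (55, 36.4)]
antibiogram_pct = aggregate_antibiograms(site_labs)

abcs_pct = 24.0  # hypothetical active-surveillance estimate for the site
diff = abs(antibiogram_pct - abcs_pct)
print(f"antibiogram {antibiogram_pct:.1f}% vs ABCs {abcs_pct:.1f}%: "
      f"within 5 points: {diff <= 5}; within 10 points: {diff <= 10}")
```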
Results

Generation of Antibiograms
One hundred forty-five ABCs laboratories completed the surveys; these laboratories conducted antibiotic susceptibility and other testing for a total of 170 (85%) of the 199 hospital laboratories participating in ABCs at the time the study was undertaken. Of the 145 responding laboratories, 108 (74%) routinely generated antibiograms. The 108 antibiograms created include pneumococcal susceptibility testing results for 140 (70%) of the 199 ABCs hospital laboratories. In-house microbiologists typically generated the antibiograms (83%), while infection control practitioners (7%) and pharmacists (10%) created the remainder. Nearly all laboratories included both sterile site (98%) and nonsterile site (92%) isolates in the antibiograms. Ninety-five percent included inpatient, and 79% included outpatient isolates. Forty-six laboratories (43%) included duplicate isolates from individual patients in their antibiograms.
When asked how pneumococcal isolates with intermediate susceptibility were categorized, survey responders stated that their laboratory characterized these isolates as intermediate (37%), resistant (32%), susceptible (5%), and nonsusceptible (22%). This question did not specify the antibiotic tested. Only 25 (23%) laboratories generated antibiograms that included data distinguishing isolates intermediate and resistant to penicillin; 77% only indicated whether the isolates were susceptible or nonsusceptible.
The average number of isolates included in the summary antibiograms was nearly double the number collected through active surveillance; the mean number of pneumococcal isolates (per site) tested for penicillin susceptibility was 417 (range 69–850) for ABCs and 826 (range 383–1,291) for summary antibiograms. Hospitals (n=40) that excluded duplicate isolates from antibiograms averaged similar numbers of isolates (mean 89 isolates) tested for penicillin susceptibility as did hospitals (n=34) whose antibiograms included multiple isolates from a single patient (mean number of isolates tested 88).
Of the 140 hospital laboratories whose pneumococcal antibiotic susceptibility testing results were summarized in antibiograms, 96 (70%) created antibiograms with penicillin-susceptibility results in a format that could be aggregated for the year in question. The proportion of laboratories in each site that generated usable penicillin susceptibility data ranged from 70% (MD) to 100% (NY and MN). Antibiograms included susceptibility-testing results for macrolides (63%) and third-generation cephalosporins (57%). The proportion of laboratories for which this susceptibility information was in a format that could be aggregated, however, was smaller for macrolides (44%) and third-generation cephalosporins (39%). For the eight sites, the proportion of penicillin-nonsusceptible isolates from ABCs ranged from 14.5% (NY) to 38.4% (TN), whereas antibiograms yielded a range of 18.5% (CA) to 41.7% (TN) ( Table 1 ). For all sites the overall proportion of isolates nonsusceptible to penicillin according to antibiograms was within 10 percentage points of the population- and laboratory-based surveillance (ABCs); for six sites it was within 5 percentage points. The proportion of penicillin-nonsusceptible isolates for each site identified by antibiograms was higher than that generated by ABCs (median difference 3.65 percentage points; range 1.8 to 8.6). No correlation existed between site-specific levels of penicillin resistance and the magnitude of difference between site-specific penicillin resistance identified by the two methods.
The proportions of pneumococcal isolates nonsusceptible to a third-generation cephalosporin and to erythromycin were lower than the proportion of penicillin-nonsusceptible isolates, regardless of the method used ( Tables 2 and 3 ). Similar to the results for penicillin, the percent of strains nonsusceptible to third-generation cephalosporins or erythromycin as detected by antibiograms tended to be greater than the percent nonsusceptible detected by ABCs. In contrast, the range of the differences for third-generation cephalosporins and erythromycin detected by the two surveillance methods was larger than the range of differences for penicillin as measured for each ABCs site. The magnitude of the difference in overall susceptibility to third-generation cephalosporins determined by the two surveillance methods was <10% for seven of eight sites and <5% for five sites. The magnitude of the difference in susceptibility to erythromycin as determined by the two surveillance methods was <10% for all sites and <5% for only four sites.
In addition to penicillin, cephalosporins, and macrolides, submitted antibiograms included susceptibility testing results for a variety of other antibiotics, including trimethoprim/sulfamethoxazole (35%), vancomycin (59%), clindamycin (47%), gentamicin (3.9%), and one or more fluoroquinolones (14%). Thirty-eight percent of antibiograms returned for analysis also included antimicrobial susceptibility testing results for various gram-negative bacteria.

Discussion
The results of our study suggest that antibiograms may be an adequate method for conducting drug-resistant S. pneumoniae surveillance for many health departments, illustrating the comparability of aggregated antibiograms that include both sterile and nonsterile site isolates to active, laboratory- and population-based surveillance for invasive isolates. For more than half the comparisons between the two methods, the difference in antibiotic resistance detected was <5 percentage points, and for 23 (96%) of the 24 comparisons the difference was <10 percentage points. No significant differences in comparability of the two methods were noted between high- and low-resistance areas. This study indicates that antibiograms may be an alternative tool for evaluating penicillin nonsusceptibility in a region and validates the earlier findings of the Oregon study, conducted in an area of relatively low antibiotic resistance ( 11 ).
Although the estimates of level of resistance obtained from antibiograms approximated that from ABCs, aggregated antibiogram data tended to show a higher proportion of nonsusceptible isolates within each site and for each antibiotic evaluated. This trend is likely due to the inclusion of nonsterile (noninvasive) site isolates. In studies from centers that include both sterile and nonsterile isolates, nonsterile site isolates have been found to be equally or more resistant ( 15 – 17 ). The reason for this difference is unclear but may reflect differences in serotype distribution between strains causing invasive and noninvasive disease. Disparity in results between clinical and reference laboratories could also contribute to this trend; use of the E test (AB Diodisk, Solna, Sweden) by clinical laboratories might vary from the referent method (broth microdilution) by one half or one dilution ( 18 ). In this study, we were unable to examine the role of laboratory error or differences in susceptibility-testing methods as a reason for differences in results from antibiograms compared with those from active surveillance.
Compared to penicillin, differences between the two surveillance methods were greater for extended-spectrum cephalosporins and erythromycin. This finding may be because of smaller numbers of isolates included in the antibiograms, fewer laboratories that included susceptibility testing of S. pneumoniae to these antibiotics, or greater disagreement between clinical and reference laboratory results. We could not include antibiogram-susceptibility testing results for some hospital laboratories because only a subset of the pneumococcal isolates that were tested for penicillin nonsusceptibility were also tested for susceptibility against third-generation cephalosporins (20 laboratories) and erythromycin (13 laboratories). The potential explanations for why these laboratories tested only a subset of pneumococcal isolates (i.e., only penicillin-nonsusceptible isolates were tested) against the same antibiotics were not indicated on the antibiograms.
We chose to evaluate the comparability of the two surveillance methods by observing how often the percent nonsusceptible isolates estimated by aggregated antibiograms differed by <5 and 10 percentage points from that estimated by ABCs active surveillance. As there exists no standardized or absolute level of antimicrobial drug resistance that would dictate a change in empiric treatment of pneumococcal infections, we chose a priori two conservative thresholds of difference. A healthcare provider may not modify empiric therapy based on the differences found in our study, and the magnitude of differences reported here is likely not relevant from a public health perspective. Trends of pneumococcal antibiotic resistance over time may be of more clinical and epidemiologic relevance than an absolute level. Knowledge of local trends may help communities assess regional antibiotic use and evaluate the effects of local educational measures promoting the judicious use of antibiotics. As this study did not span multiple years, we could not document the ability of antibiograms to detect trends. However, given that sentinel surveillance conducted in ABCs sites has been shown to detect pneumococcal resistance trends over time ( 19 ) and that in our study antibiograms provided site-specific point estimates of antibiotic resistance similar to those measured by active surveillance, antibiograms may be able to follow trends in pneumococcal antimicrobial resistance at the local level.
Drawbacks to this surveillance method include the inability to evaluate resistance to multiple drugs. Relatively few drugs can be evaluated because of laboratory variations in antibiotics selected for susceptibility testing by antibiograms. Health departments that wish to monitor emerging resistance patterns to antibiotics, such as vancomycin or fluoroquinolones, might consider a method other than aggregated susceptibility tables, or they might encourage hospital laboratories within a defined community to standardize their susceptibility panels to facilitate aggregation of results. Another limitation of antibiograms is the inability to distinguish between intermediate- and high-level resistance to penicillin; 77% of antibiograms in our study expressed resistance as percent nonsusceptible rather than distinguishing between intermediate and resistant isolates. This distinction has become relevant for treatment of some infections. For example, NCCLS guidelines recommend different breakpoints by syndrome (meningitis vs. nonmeningitis) for some agents ( 20 ). Aggregating antibiograms is useful for infections that are generally community-acquired, but antimicrobial resistance in hospital-acquired infections should be evaluated based on the knowledge of the particular institution’s flora. Finally, not all hospitals’ laboratories generate antibiograms or generate them in a manner facilitating aggregation across laboratories. However, we demonstrated the comparability of the two surveillance methods despite the fact that the penicillin-nonsusceptibility results, as measured by antibiograms, was known for only 96 (48%) of the 199 ABCs hospital laboratories.
This study should help clinicians and public health personnel in state or local health departments determine which surveillance tool for obtaining estimates of antibiotic-resistant S. pneumoniae is best suited to their specific region or community by providing background information on two alternative systems; the benefits and limitations of each system may be reviewed to determine the most useful and practical surveillance tool for a particular region. Antibiograms are relatively inexpensive and easy to use. Although not measured in our study, epidemiologists in Oregon found that the cost of active surveillance was approximately 70 times that of aggregating antibiograms for the three-county study area ( 11 ); the high cost of this type of surveillance, however, is partially due to the fact that ABCs is an integrated system that accomplishes multiple objectives in addition to susceptibility testing of pneumococcal isolates ( 12 ). Most hospitals and laboratories routinely generate antibiograms; therefore, obtaining this information is relatively easy and within the capacity of local health departments. Active surveillance, on the other hand, excludes duplicate isolates for a single patient or infection and is able to provide extensive additional information such as risk factors for resistant infections, outcome data, and other laboratory testing such as serotype determination. Active surveillance also limits case and isolate collection to persons who are residents of the defined surveillance area, allowing for calculation of rates of disease. Furthermore, active surveillance provides individual patient-level data, allowing assessment of the impact of specific interventions such as pneumococcal conjugate vaccination of infants and young children. Attainment of patient-level data through active surveillance also permits detection of possible changes in the incidence of resistant pneumococcal infections (e.g., because of a general decrease in cases of pneumococcal infection among children receiving pneumococcal conjugate vaccine) that might go unnoticed if only the proportion of resistant isolates were tracked (i.e., as done by antibiograms).
Increasing antibiotic drug resistance is a problem that is global in scale and that has practical implications for the treatment and outcome of invasive infections from S. pneumoniae and other bacteria of public health importance. Clinicians and researchers are now acknowledging the importance of preventing resistant infections through appropriate use of antibiotics and vaccines. Surveillance data are needed to monitor the success of these campaigns and to raise awareness of the problem. Because most local laboratories generate antibiograms routinely, collecting and aggregating antibiogram data is an inexpensive and readily available method of measuring local antibiotic resistance levels. Although providing less information than active surveillance, aggregated antibiogram data are a generally accurate way for health departments to generate needed community-specific estimates of pneumococcal resistance.

Community-specific antimicrobial susceptibility data may help monitor trends among drug-resistant Streptococcus pneumoniae and guide empiric therapy. Because active, population-based surveillance for invasive pneumococcal disease is accurate but resource intensive, we compared the proportion of penicillin-nonsusceptible isolates obtained from existing antibiograms, a less expensive system, to that obtained from 1 year of active surveillance for Georgia, Tennessee, California, Minnesota, Oregon, Maryland, Connecticut, and New York. For all sites, proportions of penicillin-nonsusceptible isolates from antibiograms were within 10 percentage points (median 3.65) of those from invasive-only isolates obtained through active surveillance. Only 23% of antibiograms distinguished between isolates intermediate and resistant to penicillin; 63% and 57% included susceptibility results for erythromycin and extended-spectrum cephalosporins, respectively. Aggregating existing hospital antibiograms is a simple and relatively accurate way to estimate local prevalence of penicillin-nonsusceptible pneumococcus; however, antibiograms offer limited data on isolates with intermediate and high-level penicillin resistance and isolates resistant to other agents.
Infections from Streptococcus pneumoniae tax the healthcare system in the United States and other countries. Scientific advances have been made in the treatment and prevention of pneumococcal infections through antibiotics and licensure of vaccines for both adults and children; however, the last few decades have witnessed the emergence of S. pneumoniae resistance to antibiotics ( 1 ). In a multistate, population-based surveillance system that follows invasive disease from pneumococcus and other bacterial pathogens, the percent of isolates resistant to penicillin reached 24% in 1998; concurrent increases in resistance to other antimicrobial drugs have also been noted among penicillin-resistant pneumococci ( 1 , 2 ). Implications of drug resistance extend beyond the laboratory and into clinical practice, as treatment failures from drug resistance have been reported with meningitis ( 3 – 5 ) and otitis media ( 6 , 7 ). In some studies, increased death and disease in patients hospitalized with pneumonia caused by high-level β-lactam-resistant pneumococci have been reported ( 8 , 9 ).
Measuring pneumococcal resistance to penicillin and other antibiotics enables epidemiologists and healthcare providers to monitor trends, develop guidelines for optimal empiric therapy, and provide impetus for and ascertain the success of educational efforts promoting the judicious use of antibiotics. Antimicrobial resistance is not uniform across the United States ( 10 ). Nonsusceptibility to penicillin among invasive pneumococcal isolates has been shown to range from 15% to 35% among populations in the Centers for Disease Control and Prevention’s (CDC) national surveillance system ( 1 ).
The ideal method for accurate tracking of antimicrobial-resistance patterns in a community may be active, laboratory-based surveillance systems that collect strains for susceptibility testing in a reference laboratory. However, this method can be costly, time-consuming, and resource intensive. Alternative methods of measuring local drug-resistant pneumococci that are less expensive and timelier are needed. One alternative is to use aggregated antibiograms. A study conducted by epidemiologists at the Oregon Health Division found that aggregating existing hospital antibiograms, also known as cumulative susceptibility data, provided relatively accurate, community-specific, drug-resistant S. pneumoniae data when compared with active, laboratory-based surveillance, the criterion standard for invasive disease surveillance. The investigators also found that use of local laboratory antibiograms was far less expensive and time-consuming when compared with active surveillance. Whether Oregon’s results can be generalized is unknown; however, only 12 hospitals in one city (Portland) were surveyed, and the percent of S. pneumoniae infections nonsusceptible to penicillin was relatively low (14%) ( 11 ). We compared the two methods of surveillance in a larger study that involved sites in geographically disparate areas and represented a larger fraction of the national population and varying degrees of drug resistance across the United States. Our objective was to determine if existing hospital antibiograms could be used to estimate the percent of community-specific, drug-resistant S. pneumoniae in multiple sites.

Acknowledgments
We thank the surveillance officers, clinical microbiologists, laboratory directors, and managers working in ABCs sites for contributing data for this study and Paul Cieslak, Richard Besser, and Anne Schuchat for their helpful and insightful comments on the manuscript.
Dr. Van Beneden is a medical epidemiologist in the Respiratory Diseases Branch, Division of Bacterial and Mycotic Diseases, Centers for Disease Control and Prevention. Her research interests include public health surveillance systems for community-acquired bacterial infections, antimicrobial resistance among streptococci, study of vaccines against pneumococcal disease, and group A streptococcal disease.

Emerg Infect Dis. 2003 Sep;9(9):1089–1095.
Materials and Methods
Selection of Patients
Patients who arrived at the outpatient Gastroenterology Clinic of the Rift Valley Hospital in Nakuru, Kenya, with uninvestigated symptoms of dyspepsia for at least the previous 3 months were included in the study. Inclusion criteria were 1) presence of at least two of the following symptoms; upper abdominal pain or discomfort, bloating, nausea, vomiting, or early satiety; 2) persistent or recurrent symptoms occurring at least three times per week during >6 months in the year or years preceding the study; 3) absence of nocturnal or postprandial symptoms of gastroesophageal reflux; 4) no previous abdominal surgery except for uncomplicated appendectomy, cholecystectomy, or hernia repair.
For every dyspeptic patient, a sex- and age-matched control was recruited from a convenience sample of asymptomatic persons from the local community of Nakuru by public advertisement. Dyspepsia in the control group was excluded by clinical interview and a structured screening questionnaire.
Gastrointestinal Symptom Questionnaire
All participants (patients and asymptomatic participants) were interviewed by one of the authors (S.O.), a local Kenyan physician, to assess symptoms. A bowel disease questionnaire, formulated on the basis of a previously validated instrument (the Bowel Disease Questionnaire) and modified and shortened to accommodate local Kenyan needs, was used ( 9 ).
Demographic and Socioeconomic Status
Participants were questioned about demographic data and current and childhood socioeconomic status. Age was coded into five categories (0–20, 21–30, 31–40, 41–50, and >50 years of age); among adults > 21 years of age, occupation was classified as manual versus nonmanual (clerical, professional, homemaker); educational attainment as less than or at least eighth grade; number of siblings as less than or at least 7; residence as urban or rural; tobacco use as ever or never smoked cigarettes; alcohol use as less than or at least 1 L of beer or 0.5 L of wine (average 50 g ethanol) per week.
Determination of H. pylori Status
Whole blood was obtained from all participants. Anti- H. pylori immunoglobulin (Ig) G was determined with the Helisal Rapid Blood Test kit (Cortecs Diagnostics, UK). This test achieved 89% sensitivity and 91% specificity versus histologic examination and urease testing in Australian adults ( 10 ).
Kits were stored at 4°C and equilibrated to room temperature before use. The tests were performed according to the manufacturer’s instructions. All results were read by one of the authors (S.O.). Our laboratory recently evaluated the Helisal test in 20 Israeli adults (20–70 years of age, median 42 years of age), and demonstrated the test to be 100% sensitive and 90% specific ( 11 ).
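Sensitivity and specificity follow directly from the 2 × 2 validation table. The counts below are one hypothetical split of the 20 subjects that is consistent with the published figures, not the actual validation data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split of the 20 validation subjects: 10 infected, all
# testing positive; 10 uninfected, 9 of whom test negative.
sens, spec = sensitivity_specificity(tp=10, fn=0, tn=9, fp=1)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # 100%, 90%
```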
Statistical Analysis
Bivariate analyses were performed by using the Fisher exact test for categorical variables and the Student t test or Kruskal-Wallis two-sample test for integer and continuous variables. Multivariate analyses were performed by applying backwards-elimination logistic regression to all demographic and socioeconomic variables evaluated in the bivariate analyses; parsimonious models were developed, which included only age and those variables associated with a mutually adjusted p value of <0.10. Only participants >21 years of age were included in the models investigating the role of education, occupation, and family size on the H. pylori –dyspepsia relationship. All p values were two-tailed.
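A backwards-elimination procedure of the kind described can be sketched as follows; the data frame, column names, and random data are placeholders, and the retention rule (always keep age, drop the weakest remaining candidate while its p-value exceeds 0.10) mirrors the description above rather than reproducing the authors' software.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(df, outcome, keep=("age",), threshold=0.10):
    """Backwards-elimination logistic regression: repeatedly drop the
    candidate variable with the largest p-value above the threshold,
    always retaining the variables in `keep`."""
    predictors = [c for c in df.columns if c != outcome]
    while True:
        X = sm.add_constant(df[predictors])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        droppable = fit.pvalues.drop(labels=["const", *keep], errors="ignore")
        if len(droppable) == 0 or droppable.max() <= threshold:
            return fit
        predictors.remove(droppable.idxmax())

# Hypothetical data; column names are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dyspepsia": rng.integers(0, 2, 200),
    "age": rng.integers(21, 70, 200),
    "female": rng.integers(0, 2, 200),
    "manual_labor": rng.integers(0, 2, 200),
    "urban": rng.integers(0, 2, 200),
})
model = backward_eliminate(df, outcome="dyspepsia")
print(model.summary2().tables[1])
```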
Results

Seropositivity for H. pylori was found in 98 (71%) of 138 symptomatic patients and 70 (51%) of 138 asymptomatic participants (odds ratio [OR], 2.4; 95% confidence interval [CI], 1.4 to 4.0; p<0.001). In the asymptomatic participants, the prevalence of H. pylori infection increased with age, from 18% in the 0- to 10-year age group to 48% in the 11- to 20-year age group, peaking (68%) in the 31- to 40-year age group. In the dyspeptic patients, the prevalence of H. pylori infection was 60% to 73% in all age groups ( Table 1 ). Among persons < 21 years old, H. pylori infection was more prevalent in those with symptoms than those without (17 [71%] of 24 vs. 12 [38%] of 31; OR, 4.1; 95% CI, 1.1 to 14.9; p=0.02). Similarly, H. pylori seropositivity showed a significant association with dyspepsia among persons 21–30 years of age (35 [73%] of 48 vs. 36 [48%] of 74; OR, 2.6; 95% CI 1.2 to 6.7; p=0.01), but not among persons >30 years of age (46 [70%] of 66 vs. 20 [63%] of 32; OR, 1.4; 95% CI, 0.4 to 2.8; p=0.8).
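The headline odds ratio can be recovered from the reported counts; the function below is a generic sketch using Woolf's log-based confidence interval, which closely reproduces the published interval.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (a/b = cases positive/negative,
    c/d = controls positive/negative) with a Woolf 95% CI."""
    or_ = (a * d) / (b * c)
    half_width = z * sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - half_width), exp(log(or_) + half_width)

# 98 of 138 dyspeptic patients vs. 70 of 138 asymptomatic participants:
or_, lo, hi = odds_ratio_ci(98, 138 - 98, 70, 138 - 70)
print(f"OR {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")  # ~2.4 (1.4 to 3.9)
```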
On bivariate analysis, infection with H. pylori , older age, female sex, working as a manual laborer (>21 years of age), less education, and larger family size (>7 siblings) were associated with dyspepsia in adults ( Table 2 ). H. pylori infection was associated with dyspepsia after adjusting for age, sex, and urban residence (OR, 2.0; 95% CI, 1.1 to 3.3; p=0.02), and among adults, after adjusting for these factors and for education, family size, and occupation (OR, 2.4; 95% CI, 1.1 to 4.9; p=0.02) ( Table 3 ).
This H. pylori serologic study in residents of the Nakuru District of Kenya found the expected relationship between H. pylori prevalence and age among asymptomatic participants. However, among persons with dyspepsia, the prevalence was consistently high for all ages, which yielded an unequivocal association between H. pylori infection and dyspepsia among persons < 30 years of age.
Other studies of Africans with dyspepsia have yielded a mean prevalence of 65% (range 60% to 71%), which is consistent with our results ( 12 , 13 ). Our recruitment strategy was very similar to the strategy of a study conducted in Cape Town, South Africa ( 13 ). In that 1993 study, H. pylori prevalence among a subset of Africans of non-Caucasian descent with nonulcer dyspepsia attending a gastroenterology clinic was 71%, the same as in our study. However, since the South African study did not include healthy controls, no generalizations can be made about the association between H. pylori and dyspepsia in different parts of the continent. Recently, healthy Nigerian adults and dyspeptic patients were found to have similar prevalence of H. pylori infection (80% vs. 88%), but the sample size (50 persons) may have been too small to detect the moderate effects found in our and others’ studies, particularly in the subgroup of persons <30 years of age ( 14 ).
The role of H. pylori in dyspepsia is poorly understood ( 15 , 16 ). Dyspeptic symptoms are common in sub-Saharan Africa ( 17 ); in some regions, they may account for up to 10% of hospital admissions ( 18 ). Because healthcare resources in Kenya are limited, physicians direct diagnostic tests for patients in whom a definitive diagnosis is important for treatment (e.g., those with peptic ulcer or gastric cancer). Since a large fraction of the dyspepsia in younger Africans is attributable to H. pylori, and since dyspepsia in this age group is likely to represent a benign process, a test-and-treat strategy may be appropriate in this age group. This approach involves H. pylori testing of uninvestigated dyspeptic patients without severe symptoms or signs suggestive of underlying malignancy (unexplained recent weight loss, dysphagia, hematemesis or melena, anemia, previous gastric surgery, and palpable mass). Those with positive results would undergo H. pylori eradication therapy before endoscopy is considered. Those testing negative would undergo endoscopy only if dietary and behavioral maneuvers do not ameliorate the complaints.
Our study has several limitations. We did not investigate the underlying causes of dyspepsia, so we may have included an unknown number of participants with peptic ulcer disease or other organic pathology. Additional limitations include the use of convenience (self-selected) controls as a proxy for population controls and the use of prevalence ORs (to make the crude and adjusted risks commensurate) instead of risk ratios. These factors may have led us to overestimate the association between H. pylori and dyspepsia. On the other hand, the use of a test with imperfect sensitivity and specificity may have led us to underestimate this association.
In a recent, randomized, placebo-controlled trial in a developed country, eradication therapy proved successful in a subset of patients with nonulcer dyspepsia ( 19 ). However, these findings were not confirmed in another trial of similar design ( 20 ). This disparity suggests either that the relationship between H. pylori and nonulcer dyspepsia is weak or that dyspepsia is a heterogeneous disorder. Thus, the effectiveness of a test-and-treat strategy in the developing world may vary by the population studied or by biological and cultural differences in the definition of dyspepsia.
This study demonstrates that in Nakuru, Kenya, H. pylori infection is associated with dyspepsia, particularly in persons <30 years of age. Since solid evidence exists that H. pylori eradication prevents the development ( 21 ) and recurrence ( 22 ) of gastric carcinoma and promotes regression of B-cell lymphoma of the mucosa-associated lymphoid tissue (MALT) of the stomach ( 23 ), the proposed test-and-treat strategy may be an efficient use of health resources in Kenya and perhaps other African countries.
Many patients with upper gastrointestinal symptoms who seek health care do not receive follow-up treatment. In 60% of the investigated patients, results of tests to rule out peptic ulcer disease, gastro-esophageal reflux disease, and gastric cancer are normal, and the diagnosis is functional dyspepsia ( 2 ). The benefit of treatment to eradicate H. pylori in functional dyspepsia remains controversial ( 3 , 4 ). To manage uninvestigated dyspepsia in developed countries, some authors recommend screening patients <50 years of age without severe symptoms with a noninvasive test for H. pylori, and then treating those with positive results with H. pylori –eradicating drugs ( 5 ). However, in Africa, a disparity exists between the high prevalence of H. pylori infection (>90% in many areas) ( 6 ) and the occurrence of clinically important disease (“the African enigma”). This finding has led researchers to postulate that H. pylori does not play a major role in the etiology of upper gastrointestinal pathology apart from gastritis ( 7 , 8 ). Thus, a noninvasive H. pylori test-and-treat strategy in a primary care setting in an economically depressed area, such as Africa, should be based on data that show an association between dyspepsia and H. pylori infection. The aim of our case-control study was to investigate the association between H. pylori infection and dyspepsia in Nakuru, Kenya.

Acknowledgment
We thank Phyllis Curchack Kornspan for her editorial and secretarial services.
Dr. Shmuely is deputy chief of the Department of Internal Medicine C, Rabin Medical Center, Beilinson Campus, affiliated with the Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel. His research interests are epidemiology and clinical aspects of Helicobacter pylori infection. In 1997–98, he worked as a visiting scholar at Stanford Medical Center, California, on the transmission of H. pylori . Together with his colleagues, he is associated with the H. pylori Research Institute, Department of Gastroenterology, Rabin Medical Center, Beilinson Campus, Petah Tikvah, Israel. | CC BY | no | 2022-01-27 23:42:11 | Emerg Infect Dis. 2003 Sep; 9(9):1103-1107 | oa_package/b7/78/PMC3016771.tar.gz |
Materials and Methods
The network is a collaboration among 12 laboratories in 9 countries in Europe to allow more rapid and internationally harmonized assessment of the spread of foodborne viral pathogens. The project is coordinated by the National Institute of Public Health and the Environment in Bilthoven, the Netherlands. Participants are virologists and epidemiologists with active research programs in (foodborne) enteric viruses from Spain (Barcelona, Valencia, Madrid), Italy (Rome), France (Nantes, Dijon), Germany (Berlin), the Netherlands (Bilthoven), the United Kingdom (London), Denmark (Copenhagen), Sweden (Solna), and Finland (Helsinki). In addition, groups from Slovenia and Hungary participate.
The overall objectives for the complete study are as follows: 1) to develop novel, standardized, rapid methods for detection and typing of enteric viruses, particularly NV, to be used in all participating laboratories; 2) to establish the framework for a rapid, prepublication exchange of epidemiologic, virologic, and molecular diagnostic data; 3) to study the importance of enteric viruses as causes of illness across Europe, with a special focus on multinational outbreaks of infection with NV and hepatitis A virus; 4) to provide better estimates for the proportion of NV infections that can be attributed to foodborne infection; 5) to determine high-risk foods and transmission routes of foodborne viral infections in the different countries and between countries; 6) to describe the pattern of diversity of NV within and between countries and identify potential pandemic strains at the onset; and 7) to investigate the mechanisms of emergence of these strains, including the possibility of spillover from animal reservoirs.
The central research goal is to better understand the mechanisms of emergence of variant NV strains. We hypothesized that the observed epidemic shifts might be caused by displacement of endemic variants attributable to a large seeding event with a variant that subsequently spread through the population by secondary and tertiary waves of transmission, or possibly by a smaller seeding event of a highly transmissible new variant, generated by genetic mutation or recombination. To address these questions, we built a European surveillance structure for outbreaks of viral gastroenteritis, including food- or waterborne outbreaks. The first phase of the project, described in this report, was designed to review existing surveillance systems for viral gastroenteritis; to design and agree on a minimum dataset for collection during the second phase of the project; to review and evaluate currently used methods for detection and genotyping of NV, with the aim of standardizing methods for virus detection in gastroenteritis outbreaks; and to build a database of combined epidemiologic and virologic data for use by all participants. The overriding aim was to facilitate the early detection of potentially emerging variant strains. Upon completion of this phase, we will begin “enhanced surveillance,” i.e., standardized surveillance for viral gastroenteritis outbreaks, to study objectives 4–7.

Results
Review of Current Methods in Europe
From the outset, it was recognized that the best approach in developing an international surveillance scheme for foodborne viruses would not be the standardization of practice, but rather the harmonization of existing practices. To achieve this, a number of surveys were undertaken to determine diagnostic capabilities, genotyping techniques, and the status of surveillance of viral gastroenteritis outbreaks among project participants. The results of these surveys are highlighted below.
Virus Detection and Genotyping
The scale of diagnostic capability in laboratories varies widely, and a range of diagnostic tests (electron microscopy, reverse transcription–polymerase chain reaction [RT-PCR], and enzyme-linked immunosorbent assay) and characterization methods is used, including heteroduplex mobility assay, reverse line blot, microplate hybridization, and sequencing ( 7 – 9 ). Laboratories in all countries now use molecular techniques (RT-PCR) for NV detection ( 10 ).
A comparative evaluation of RT-PCR assays was done by analysis of a coded panel of stool samples that had tested positive (81 samples) or negative (9 samples) for NV. Samples provided by four laboratories were included, as well as samples representing the currently known diversity of NV genotypes. Full details of this study have been published ( 11 ). This evaluation showed that no single assay is best, although sensitivities range from 55% to 100%. Most differences were seen when comparing assay sensitivities by genogroup. Based on pre-set scoring criteria (sensitivity, specificity, assay format, length of sequence), one primer combination was ranked as the assay of first choice for laboratories starting surveillance, and protocols and reagents have been made available to all participants on request.
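One way such a multi-criterion ranking could be encoded is sketched below (Python; the weights and the candidate assay values are invented for illustration, and the published evaluation ( 11 ) defines the actual scoring):

```python
# Hypothetical assay descriptions; values and weights are illustrative only.
ASSAYS = {
    "primer_set_A": {"sensitivity": 0.95, "specificity": 1.00,
                     "one_tube_format": True, "amplicon_len": 320},
    "primer_set_B": {"sensitivity": 0.55, "specificity": 0.95,
                     "one_tube_format": False, "amplicon_len": 170},
}

def score(assay):
    """Weighted sum over the four pre-set criteria (weights are invented)."""
    s = 4.0 * assay["sensitivity"] + 2.0 * assay["specificity"]
    s += 1.0 if assay["one_tube_format"] else 0.0   # simpler format scores higher
    s += min(assay["amplicon_len"], 300) / 300.0    # longer sequence aids typing
    return s

ranking = sorted(ASSAYS, key=lambda name: score(ASSAYS[name]), reverse=True)
print(ranking[0], "is the assay of first choice under these weights")
```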
On the basis of the aggregated data from the sequence database, alignments were made of the regions in the viral RNA that contain the primer-binding sites for the highest-ranked primer set from the diagnostic evaluation, to generate improved primer designs ( 12 ). These primers, protocols, and reference reagents have been made available to several groups in the field.
Outbreak Investigations
While all countries in the network now have the diagnostic capability to recognize outbreaks of NV, the structure of their national surveillance differs and therefore, so do the epidemiologic data collected on viral gastroenteritis ( 10 , 13 ). Some countries investigate outbreaks of gastroenteritis irrespective of the size or possible mode of transmission (United Kingdom, the Netherlands); others primarily investigate outbreaks that appear to be foodborne from the onset (Denmark, France) ( 10 ). Similarly, coverage of the laboratories involved ranges from regional (Italy) to national, although different levels of underreporting are likely to exist ( 10 ). These differences, as well as differences in the laboratory test protocols, will be taken into consideration when interpreting aggregated data in the later stages of the project. For the purposes of comparing data across Europe, however, the key finding was that most countries maintain a national database of NV outbreaks (as opposed to individual cases). Although the proportion of the population that these databases effectively survey and the completeness of clinical information collected vary, we recognized that we could network national outbreak surveillance by agreeing on a minimum dataset. That dataset would include the causative organism, mode and place of transmission, diagnostic results, case details, food vehicles, and viral typing information.
Also agreed upon were clinical definitions of a case and an outbreak of viral gastroenteritis based on Kaplan’s criteria ( 14 ), as follows. A case of gastroenteritis was defined as a person seen with 1) vomiting (two or more episodes of vomiting in a 12-hour period, lasting >12 hours), 2) diarrhea (two or more loose stools in a 12-hour period, lasting >12 hours), or 3) vomiting as defined in 1) and diarrhea as defined in 2). An outbreak was defined as follows: 1) patients living in more than one private residence or residing or working in an institution at the time of exposure; 2) cases linked by time and place; 3) vomiting in >50% of total cases ( 14 ); 4) mean or median duration of illness of total cases from 12 to 60 hours; 5) incubation period (if available) of total cases between 15 and 77 hours, usually 24–48 hours ( 14 , 15 ); and 6) testing of stool specimens for bacterial pathogens (this step is not mandatory; however, if tested, all specimens should be negative for bacterial pathogens).
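Because these criteria are explicit, they can be screened automatically; the sketch below (Python; the per-case field names are hypothetical) encodes the quantitative parts of the outbreak definition, leaving the locational criteria 1) and 2) to the investigator:

```python
def meets_outbreak_definition(cases, stool_cultures=None):
    """Screen aggregated case data against criteria 3)-6) above.
    `cases` is a list of dicts with hypothetical field names;
    `stool_cultures` is an optional list of bacteriology results
    (True = bacterial pathogen found)."""
    if not cases:
        return False
    vomiting_frac = sum(bool(c["vomiting"]) for c in cases) / len(cases)
    mean_duration = sum(c["duration_h"] for c in cases) / len(cases)
    incubations = [c["incubation_h"] for c in cases if c.get("incubation_h")]
    incubation_ok = (not incubations  # criterion applies only if available
                     or 15 <= sum(incubations) / len(incubations) <= 77)
    bacteriology_ok = not stool_cultures or not any(stool_cultures)
    return (vomiting_frac > 0.5            # 3) vomiting in >50% of cases
            and 12 <= mean_duration <= 60  # 4) mean duration 12-60 hours
            and incubation_ok              # 5) incubation 15-77 hours
            and bacteriology_ok)           # 6) no bacterial pathogen, if tested
```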
Development of Database
A major goal of the first year was to build a database into which historic information present in the participating institutes would be collected. The rationale behind this was that by combining this existing information, new observations (on seasonality of outbreaks or patterns of emergence of new variants, for example) might be possible. Without harmonization of data collection, the comparative analysis would clearly be limited. The historic database, however, also served as a pilot phase because the definitive format of the database is used in the enhanced surveillance program. Participants who had historic collections of sequences were asked to submit these, along with additional epidemiologic information, as described in Table 1 . Data were entered by using the Bionumerics (BN) package (Applied Maths, Ghent, Belgium), which allows storage, comparative analysis, and clustering of combined epidemiologic and biological experimental data (e.g., sequences, reverse line blot results, enzyme immunoassay data). The entries were either uploaded from the public domain or submitted as unpublished sequences from participating laboratories. Publicly available sequences were included to provide a customized report for database searches, e.g., genotype to which the sequence belongs.
Since September 2001, participants have been able to access the database directly through a password-controlled Internet connection. At present, the database contains >2,500 entries, mostly on NV, but including some hepatitis A virus, astrovirus, and Sapovirus ( Tables 2 and 3 ). Upcoming variants will first be subjected to a search of the historic database to determine if the viruses have been seen before in Europe. An automated search tool is available and has been made accessible through the Internet to participants. Partners interested in analyzing the data can obtain the complete dataset, provided they adhere to the confidentiality agreements signed by every partner. Interested parties outside the project group can access the database under certain conditions through the coordinator or one of the participants. This access is not restricted to groups in the participating countries. The limiting factor is the target region used for virus characterization, which has not been standardized globally. A database search will be performed upon request (for groups outside the network). Results are then communicated to them and to the person who submitted any matching sequences. After that initial linking, follow-up discussions and investigations of possible common-source events can be done by the groups involved.
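The matching step itself can be as simple as a similarity search over the stored diagnostic-region sequences. The sketch below is purely illustrative (the network database actually runs on the Bionumerics package) and assumes the query and stored sequences cover the same pre-aligned polymerase region:

```python
def search_database(query, entries, min_identity=0.98):
    """Return entries whose sequence matches `query` at or above
    `min_identity` over the shared length (illustrative only)."""
    hits = []
    for entry in entries:  # e.g., {"id": ..., "sequence": ..., "country": ...}
        seq = entry["sequence"]
        n = min(len(query), len(seq))
        if n == 0:
            continue
        matches = sum(q == s for q, s in zip(query, seq))
        if matches / n >= min_identity:
            hits.append(entry)
    return hits
```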
Prospective Enhanced Surveillance
Comparative Evaluation of Diagnostic/Genotyping Methods
The different PCR primers used among the European group all target a highly conserved region within the viral polymerase gene. Sequences of the amplicons from the various diagnostic PCRs overlap and therefore, can be compared to gain inferences on the molecular epidemiology and the spread of NV variants ( 11 ). Rapid characterization techniques, notably the reverse line blot ( 9 ) and heteroduplex mobility assay ( 7 ), are also used within the network; the typing data generated by these techniques can also be accommodated by the database.
Comparative Evaluation of Data
After agreement on a minimum epidemiologic and virologic dataset, we made a standard Web-based questionnaire available to all participants behind a password-protected site (available from: www.eufoodborneviruses.net ). Using Web-based Active Server Pages (ASP) technology, investigators have full access to the secure outbreak database ( Figure ). Investigators are asked to enter the information that is available as soon as an investigation begins on an event that meets the outbreak definition. A unique reference number is assigned to each outbreak, which is the key used to access records and to update diagnostic or typing data, for example, as an investigation continues.
The database also collects information on the level of evidence (i.e., microbiologic, epidemiologic, circumstantial) implicating food or water as a mode of transmission. Pop-up windows are used to define these criteria, since a range of public health scientists use the system. Other features of the ASP technology, including drop-down menus, are used to standardize the data collected. Descriptive information from outbreaks (number of people exposed, number of people ill, number of controls infected, symptoms) is collected when possible, to allow comparisons of the clinical characteristics of different NV genotypes. Preliminary data suggest that such differences exist.
One of our main scientific objectives is to explain the mechanism behind the emergence of new variant strains. Essential for the early detection of such emerging variants is a rapid reporting network. The initial suspicion of “something strange” may be from clinicians who investigate outbreaks (e.g., a sudden increase in the number of reports), or from one of the laboratories (e.g., finding the same variant in several outbreaks). The central database is used to facilitate both types of reports. The real power in this format of data exchange is that immediately after entry or update of information, the data are in the database and can be accessed by other collaborators. The database can be searched for common virologic (sequence) or epidemiologic (e.g., a food vehicle) characteristics that would trigger further investigation of links between outbreaks.
Recognition of International Outbreaks
This model has proved successful in recognizing a number of internationally linked events. Clusters of cases in Denmark, Finland, and the Netherlands were all linked to oysters imported from France. Another foodborne outbreak traced in part through the network followed the concluding dinner of an international conference in Finland. Symptoms began the day after the conference, when many attendees had returned to their home countries. Approximately 40 persons were affected, and the same NV variant (Melksham) was detected in cases in Finland, Sweden, and the Netherlands. A dessert item was implicated by a cohort study. Importations of hepatitis A from Peru into Spain and from Ibiza, Spain, to Germany have also been recognized through the network. Full details of these outbreaks will be published elsewhere.

Discussion
Microbial food safety is considered an important public health issue but historically has focused on control of bacterial contamination. Several recent publications, however, show that outbreaks of foodborne infection attributable to viruses are common and may in fact be an important public health concern for several reasons: most clinical laboratories involved in outbreak investigations do not have access to routine diagnostic methods for detecting NV, user-friendly methods for use in these laboratories are only now becoming available and need to be validated, foodborne transmission of NV is quite common, and food microbial quality control largely relies on indicators for the presence of fecal bacteria, which may not correlate with the presence of enteric viruses ( 2 , 16 ). Although foodborne viruses are increasingly studied, no validated methods yet exist for reliably detecting them in food items. In all, these facts indicate that through foodborne transmission an enteric viral pathogen (NV) can escape detection, possibly resulting in large epidemics.
In the United States, molecular detection techniques are being implemented in state public health laboratories under the guidance of the Centers for Disease Control and Prevention (CDC), which is building an infrastructure for reporting of outbreaks of food-related illness attributable to enteric viruses (Calicinet). In Europe, no central institute yet exists with the authority to do this, so the best efforts to date are the voluntary disease-specific surveillance networks, such as Enternet (which monitors trends in foodborne bacterial pathogens), and the European Influenza Surveillance Scheme (designed to monitor influenza virus activity across Europe) ( 17 , 18 ). We have built such a surveillance network for enteric viruses, using NV as a target organism. NV was an obvious choice: an increasing number of publications illustrate that it is one of the most important causes of outbreaks of gastroenteritis, including food- and water-related outbreaks (reviewed in 2). CDC estimates that up to 66% of all food-related illness in the United States may be due to NV ( 19 ). From a community-based case-control study in the Netherlands, risk-factor analysis for NV, based on information collected throughout a 1-year cohort study, suggested an association between NV and a complex score that was used as a proxy for food-handling hygiene. On the basis of this approach, an estimated 12% to 15% of community-acquired illness may be due to food- or waterborne modes of transmission (with 85% attributed to contact with a symptomatic person in or outside the household) (de Wit et al., unpub. data). The proportion of foodborne outbreaks reported in the countries participating in our network ranges from 7% to 100%, but that range merely reflects the differences in the selections used in the different surveillance systems and cannot be used to estimate the true impact of foodborne illness caused by NV ( 10 ). While definitive data still need to be collected, the consensus is that NV is an important cause of food-related infection and disease.
Foodborne transmission of viral gastroenteritis has not historically been acknowledged as a public health priority, which means that our surveillance system is inevitably restricted to groups that already have an active program in the field. Ideally, we would like to build a network of national institutes represented by both epidemiologists and microbiologists involved in outbreaks of viral gastroenteritis; however, at present this ideal is not possible for all of Europe. By networking the existing information, assessing comparability of data through studies of primers and protocols used, and examining data from current surveillance, we are able to paint a bigger picture from the fragmented information that is available.
The standardized outbreak questionnaire, accessible through the Internet, is designed to collect a minimum dataset about all outbreaks. However, participants who perform more detailed epidemiologic or virologic investigations can also submit additional data. The minimum dataset will suffice to answer the basic questions for the surveillance, i.e., what is the reported incidence of NV outbreaks across Europe, and which proportion is considered to be due to food- or waterborne transmission.
A key feature of any disease surveillance system is its use as an early-warning tool, in this case for international common-source outbreaks. To facilitate this, several features were included in our database setup. Information on outbreaks can be updated with new information as it comes in, to avoid piling up information until the outbreak investigation has been completed (which may be months later). Both the epidemiologic data and laboratory data (mostly sequences) can be searched easily. Thus, participants can be alerted to similarities in disease profiles (e.g., outbreaks with imported fruits) or in sequences. Either signal can lead to contacts between participants to discuss possible indications for a joint investigation. Crucial in this discussion was the issue of confidentiality, both for patient and product information, and for data from investigations. The present modus operandi is that each participant signs a confidentiality agreement, which states that data submitted to the database are owned by the person submitting them (subject to each participant’s national regulations on patient and laboratory data); specific patient and product information is not entered into the database. If necessary for outbreak investigations, the groups involved will decide on a case-by-case basis what information may or may not be used by the consortium. Participants can obtain the complete information from the database for their own analysis, or choose to use it as a search tool and rely on the analysis done by a scientist employed on this aspect of the database, who is stationed in Bilthoven, the Netherlands. So far, five international outbreaks have been detected because of the network.
The food distribution chain in Europe is complex, and therefore the transmission of viruses across borders can occur by means of contaminated food. The surveillance network described here allows early detection of international common-source outbreaks of foodborne viruses. Most of the work to date has involved harmonization of methods for investigating outbreaks and detecting the viruses causing these outbreaks, as well as the development of a database system that facilitates the exchange of information between laboratories and institutes involved in viral gastroenteritis research and surveillance. The system’s strength is that it combines basic epidemiologic and laboratory data into a searchable repository. This network has demonstrated its potential to recognize transnational outbreaks. However, the network is inherently limited by the quality of data available at the national level, which is a reflection of the priority given to foodborne viruses. At present, we are undertaking a 2-year enhanced surveillance project to study the frequency and modes of transmission of viral gastroenteritis outbreaks across Europe. | The importance of foodborne viral infections is increasingly recognized. Food handlers can transmit infection during preparation or serving; fruit and vegetables may be contaminated by fecally contaminated water used for growing or washing. And modern practices of the food industry mean that a contaminated food item is not limited to national distribution. International outbreaks do occur, but little data are available about the incidence of such events and the food items associated with the highest risks. We developed a combined research and surveillance program for enteric viruses involving 12 laboratories in 9 European countries. This project aims to gain insight into the epidemiology of enteric viruses in Europe and the role of food in transmission by harmonizing (i.e., assessing the comparability of data through studies of molecular detection techniques) and enhancing epidemiologic surveillance. We describe the setup and preliminary results of our system, which uses a Web-accessible central database to track viruses and provides the foundation for an early warning system of foodborne and other common-source outbreaks.
While it is clear that enteric viral infections are common, far less established is how common the foodborne mode of transmission is and how important it is in the epidemiology of these viruses. The challenge lies not so much in detecting outbreaks related to foodborne contamination at the end of the chain (the food handler in the nursing home or restaurant), because those are likely to be detected by routine outbreak investigation, with or without molecular typing. Linking NV outbreaks to common-source introductions nationally or internationally may be more difficult because of the high secondary attack rate that results from rapid person-to-person transmission. Thus, an initial seeding event will rapidly be masked by the occurrence of new cases or outbreaks, suggesting that person-to-person transmission is the primary mode of spread. The likelihood of detecting such seeding events relies on effective surveillance, which combines epidemiologic assessment of the outbreak and molecular typing to discover and track potential links between outbreaks. Such molecular tracing, however, requires knowledge on diversity of “resident viruses” in the region under study to be able to recognize unusual increases. Therefore, we established a combined research and surveillance network for foodborne viruses that was granted by the European Commission. This project group combines complementary expertise from the fields of diagnostic virology, molecular virology, epidemiology, and food microbiology to study modes of transmission of NV across Europe. Mapping these pathways allows better founded estimates of the proportion of illness that may be attributed to foodborne transmission and identification of high-risk foods, processing methods, or import and transport routes, which subsequently can be a focus of prevention programs. The data are important for assessing the risks associated with consumption of certain food items. Essential to the success of this project is the establishment of a common, central database, which is now used by all partners to compare data across Europe as soon as they are available. We describe this project and results from its first 18 months of operation. | Core funding for this project was obtained from the European Union under the 5th framework program, contract no QLK1-1999-C - 00594.
Dr. Koopmans is a veterinarian with a Ph.D. in virology. Since 2001, she has chaired the virology section of the Diagnostic Laboratory for Infectious Diseases of the National Institute of Public Health in the Netherlands, which focuses on reference diagnostics, molecular epidemiology, and outbreak management of a range of emerging diseases. She also is coordinator of the European Union–funded Foodborne Viruses in Europe Network.
Results
Growth of the isolate on PFA produced a buff-colored to slightly lavender, somewhat granular colony after 7 days’ incubation at 25°C. Repeat subcultures with extended incubation (up to 2 weeks) yielded colonies that were more definitely mauve-colored, consistent with those typically seen with P. lilacinus ( Figure 1 ). The isolate failed to produce sporodochia on carnation leaf agar (CLA, prepared in-house), a feature seen with Fusarium species (many of which are lavender), and failed to produce a diffusing yellow pigment on malt agar, a characteristic seen with the closely related P. marquandii. The conidiogenous cells from the initial slide culture, held 7 days and prepared from a PFA block, consisted predominantly of single, long, tapering phialides, somewhat atypical for the species. Repeat PFA slide cultures from subcultures displaying a more typical macroscopic structure yielded complex fruiting heads with verticillate conidiophores and divergent phialides, typical for P. lilacinus ( Figure 2 ). Conidiophore roughness, a feature described for P. lilacinus , was not observed, however, even after repeated subculturing and examinations. Smooth-walled, elliptical conidia occurred in long, tangled chains and measured approximately 2.0 × 2.5 μm. No chlamydospores were observed on any of the media examined. Temperature studies performed on two separate occasions (PFA) indicated 4+ growth at 25°C and 30°C, 2+ growth at 35°C, and no growth at 40°C.
Species of Paecilomyces known to produce pinkish to purplish colonies include P. javanicus, P. fimetarius, P. fumosoroseus, P. lilacinus, and P. marquandii. The first three species were excluded from consideration on the basis of the size of the conidia, as well as the lack of synnematal production for P. fumosoroseus ( 5 ). P. marquandii differs from P. lilacinus by the production of an intense yellow diffusible pigment, smooth-walled hyaline conidiophores, and the production of chlamydospores. Although the isolate in this case did display smooth conidiophores, no yellow pigment or chlamydospores were observed. The existence of intergrading forms between P. lilacinus and P. marquandii has been described ( 6 ). In such strains, characteristics of both species may be observed. On the basis of the characteristics above, the isolate was identified as P. lilacinus .
In vitro 48-hour to 72-hour MIC data in μg/mL for the isolate were as follows: amphotericin B, >16; 5-flucytosine (5-FC), >64; ketoconazole, 0.5/0.5; fluconazole, 32/64; itraconazole, 0.5/0.5; clotrimazole, 0.06/0.25; voriconazole, 0.25/0.25; terconazole, 4/8; and posaconazole, 0.125/0.125.

Conclusions
P. lilacinus rarely causes human infection. A MEDLINE review of English-language literature from 1966 to 2003 yielded approximately 60 reports of P. lilacinus infections in patients who were immunocompromised, had undergone ophthalmologic surgery, or had indwelling foreign devices ( 2 , 3 ). A MEDLINE review in the same time period indicated only six cases of P. lilacinus infections among patients who lacked a readily identifiable risk factor. A review of the bibliographies of relevant articles yielded three additional reports, for a total of nine cases in apparently immunocompetent hosts. The salient features of these cases, as well as ours, are summarized in the Table .
The source of infection in most cases, including ours, is not easily identifiable. P. lilacinus has been isolated as a benign commensal organism on the toenails of immunocompetent hosts ( 15 ). In some cases, however, P. lilacinus has been pathogenic and has been implicated as a cause of onychomycosis in an immunocompetent adult ( 14 ). The low pathogenicity of this fungus in normal hosts is demonstrated by the indolent nature of two of the cutaneous infections listed in the Table ( 8 , 9 ), which were characterized by many years of chronic infection.
All isolates of the genus Paecilomyces should be tested for fungal susceptibility since clinical isolates of P. lilacinus frequently display considerable resistance. Isolates of P. lilacinus , for example, are usually resistant to amphotericin B and 5-flucytosine and susceptible to miconazole and ketoconazole, whereas isolates of the species P. variotii are usually susceptible to amphotericin B and 5-flucytosine ( 16 ). On the basis of breakpoints established for other fungi, the case isolate appeared resistant to amphotericin B, 5-flucytosine, fluconazole, and possibly terconazole, but susceptible to the approved azoles itraconazole, ketoconazole, and clotrimazole, as well as to the investigational triazoles voriconazole and posaconazole.
Although the source isolate was susceptible to clotrimazole, the patient’s symptoms did not resolve after clotrimazole treatment. However, the duration of therapy with this agent and the degree of the patient’s adherence to the treatment regimen are unknown; one or both of these factors may have contributed to treatment failure.
Our review demonstrates that reports of P. lilacinus infections in immunocompetent hosts appear to be increasing. The four earliest cases occurred from 1972 to 1984, with one case reported every 3–5 years. The eight subsequent cases occurred between 1996 and 2002, for an average of slightly more than one new case per year.
We report the first case of P. lilacinus isolated from a vaginal culture in a patient with vaginitis, whose symptoms failed to improve after treatment with fluconazole. Her symptoms resolved after treatment with itraconazole, to which the case isolate was susceptible. P. lilacinus has been described as an emerging opportunistic pathogen in humans ( 17 ). In May 2002, the first case of disseminated P. lilacinus infection in an HIV-infected patient was reported ( 18 ). Our review suggests that P. lilacinus may be an emerging pathogen in immunocompetent adults as well. | Paecilomyces lilacinus, an environmental mold found in soil and vegetation, rarely causes human infection. We report the first case of P. lilacinus isolated from a vaginal culture in a patient with vaginitis.
We describe the first case of P. lilacinus isolated from a vaginal culture in a patient with vaginitis and review the published literature addressing P. lilacinus infections in immunocompetent patients. Our review demonstrates that the reports of P. lilacinus infections in immunocompetent hosts have become more frequent in the last several years. This trend indicates that P. lilacinus may be an emerging pathogen.
Case Report
A 48-year-old woman reported vaginal itching and discharge of 5 months’ duration. Her symptoms had been recalcitrant to several courses of therapy for a presumptive diagnosis of candidal vaginitis. She had been treated initially with fluconazole, then sequentially with topical clotrimazole, tioconazole ointment, and intravaginal boric acid gel. Her medical history was notable for mild gastritis (treated with omeprazole) and irregular uterine bleeding, controlled with hormone replacement therapy (a transdermal estrogen/progesterone combination). The patient was in a monogamous relationship with her husband but reported abstinence for several months because of the severity of her vaginal symptoms.
On physical examination, vaginal erythema with a white liquid vaginal discharge was observed. Although a potassium hydroxide (KOH) preparation was not obtained at baseline, the discharge grew P. lilacinus in pure culture.
The patient was treated empirically with itraconazole, 200 mg orally twice a day for 3 weeks. At the end of therapy, she reported complete resolution of her vaginal discharge and a significant decrease in her vaginal pruritus. A repeat vaginal culture was not obtained at her first follow-up appointment after completion of itraconazole therapy because the vaginal vault contained a large amount of blood. At an appointment 6 months later, she remained free of vaginal discharge; a vaginal fungal culture and KOH preparation performed at that time were negative.
The results of laboratory studies, including serum protein electrophoresis (with immunoglobulin [Ig] G, IgA, IgM), C3, C4, erythrocyte sedimentation rate, a complete blood count, CD4 cell count, and CD8 cell count were all within normal limits. Results of a test for antibodies to HIV were negative. An anergy panel (with Candida and Trichophytin used as controls) was reactive. A purified protein derivative (PPD) skin test was not placed because the patient had a history of a positive test result.
The patient’s isolate was forwarded to the Fungus Testing Laboratory, Department of Pathology, University of Texas Health Science Center at San Antonio, Texas, for confirmation of the identity and antifungal susceptibility testing, and accessioned into the stock collection as UTHSC 01-872. The isolate was initially subcultured onto potato flakes agar (PFA, prepared in-house) at 25°C, 30°C, 35°C, and 40°C (ambient air with alternating daylight and darkness). The isolate was subsequently plated onto carnation leaf agar (CLA, prepared in-house) and malt agar (Remel, Lenexa, KS) at 25°C. Temperature studies were repeated after initial observations.
The case isolate was evaluated for susceptibility to antifungal agents by using the National Committee for Clinical Laboratory Standards broth macrodilution method M38-P ( 4 ). Briefly, the case isolate and the P. variotii control organism, UTHSC 90-450, were grown on PFA for 7 to 10 days at 25°C to induce conidial formation. The mature PFA isolate and control slants were overlaid with sterile distilled water, and suspensions were made by gently scraping the colonies with the tip of a Pasteur pipette. Heavy hyphal fragments were allowed to settle, and the upper, homogeneous conidial suspensions were removed. Conidia were counted with a hemacytometer, and the inoculum was standardized to 1.0 × 10^5 CFU/mL. Conidial suspensions were further diluted 1:10 in medium for a final inoculum concentration of 1.0 × 10^4 CFU/mL. Final drug concentrations were 0.03–16 μg/mL for amphotericin B (Bristol-Myers Squibb, Princeton, NJ), ketoconazole (Janssen Pharmaceutica, Titusville, NJ) and clotrimazole (Schering-Plough, Kenilworth, NJ), 0.125–64 μg/mL for 5-flucytosine (Roche Laboratories, Nutley, NJ), fluconazole, voriconazole (Pfizer, Inc, New York, NY), and terconazole (Ortho-McNeil Pharmaceuticals, Inc., Raritan, NJ), and 0.015–8 μg/mL for itraconazole (Janssen Pharmaceutica) and posaconazole (Schering-Plough).
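The drug concentration ranges above are standard two-fold dilution series, and the inoculum arithmetic is simple to check; a minimal sketch (Python, for illustration only):

```python
def dilution_series(low, high):
    """Two-fold dilution series from `high` down to `low` (ug/mL),
    returned in ascending order."""
    conc, series = float(high), []
    while conc >= low * 0.999:          # tolerate floating-point drift
        series.append(round(conc, 3))
        conc /= 2
    return series[::-1]

print(dilution_series(0.03, 16))
# [0.031, 0.062, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
# (the reported range labels 0.03125 ug/mL as 0.03)

# Inoculum check: a 1:10 dilution of the standardized suspension
final_inoculum = 1.0e5 / 10             # = 1.0e4 CFU/mL, as used in the assay
```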
Acknowledgments

The authors thank David C. Perlman for his helpful comments and critical review of the manuscript and Carmen Yampierre for performing the initial microbiologic processing of the Paecilomyces lilacinus isolate.
Dr. Carey is an attending physician in the Division of Infectious Diseases at Beth Israel Medical Center in New York. Her research interests include tuberculosis and hepatitis among drug users. | CC BY | no | 2022-01-31 23:38:47 | Emerg Infect Dis. 2003 Sep; 9(9):1155-1158 | oa_package/cf/5f/PMC3016773.tar.gz |
Methods
Selection of Patients
Participants were selected among HIV-positive patients who receive their medical care at the Comprehensive Care Center, an adult HIV-oriented primary care clinic located in Nashville that serves middle Tennessee and surrounding regions. Center records were retrospectively analyzed to identify patients seen at the clinic for any reason between March 1, 1999, and October 31, 1999 (the typical period of tick activity in middle Tennessee).
Symptomatic Patient Subset
Those patients discharged with diagnoses indicative of potential ehrlichial infection, according to the International Classification of Diseases, 9th Edition (ICD-9), were identified by means of a blinded database review. Specifically, patients who were assigned the following ICD-9 codes were selected for the study cohort: fever or fever of unknown origin (780.6), viral infection (079.9), upper respiratory infection (465.9) or respiratory disease (478.9) not otherwise specified, Lyme disease (088.81), rickettsial disease (specified, 083.8, or unspecified, 083.9), Rocky Mountain spotted fever (082.0), tick bite (088.89), and myalgias (729.1). To find potential participants who may have been missed in the original search, a second database search identified patients during the study period who were prescribed doxycycline, the therapy of choice in the empiric treatment of febrile illness during the tick season.
Asymptomatic/Other Patient Subgroup
The rest of the study cohort comprised patients who visited the center during the study period and who had plasma banked for serologic investigation (see “Plasma Collection”). To investigate the incidence of asymptomatic Ehrlichia infection, patients were selected from a blinded review of the center’s plasma sample log. Patients who had banked plasma samples from the pretick season (between September 15, 1998, and March 31, 1999) as well as from the posttick season (after October 31, 1999) and who had visited the center for routine follow-up during the study period were selected for study. Records of patients identified as asymptomatic were reviewed for symptoms suggestive of HME during the study period that were not encoded with an ehrlichiosis-compatible ICD-9 diagnosis.
Chart Review
The center’s data charts were analyzed for demographic data (age at start of study period, race, sex, number of clinic visits during the study period), past medical history, medication history (the administration of highly active antiretroviral therapy [HAART] and medication used as prophylaxis against opportunistic infections), and HIV status based on the most recent CD4 count and viral load drawn before the study period began (March 1, 1999). Charts from the symptomatic patients were further analyzed for symptoms suggestive of Ehrlichia infection, including the presence or absence of fever, headache, rash, fatigue, malaise, upper respiratory infection symptoms, nausea, vomiting, myalgias, abdominal pain, and mental status changes. Symptom history, laboratory parameters (peripheral leukocyte count, platelet count, aspartate aminotransferase, and alanine aminotransferase) at baseline and during the acute illness, and illness outcomes (including antibiotics prescribed, hospitalization, and death) were also collected. Insufficient data on tick exposure, tick bites, or outdoor activity were available to evaluate exposure risk factors for Ehrlichia infection.
Plasma Collection
Since 1998, the center has maintained a repository of plasma samples frozen at –70°C by retaining specimens obtained from patients during routine phlebotomy. All patients who choose to participate in the plasma banking provide written informed consent based on a protocol approved by the Vanderbilt University Institutional Review Board. The plasma log was cross-checked with the study participant list identified from the database review as outlined above. Participants with no banked plasma sample from before the onset of the study period (preseason sample) or at the onset of clinical symptoms (acute sample) were excluded. Samples from at least 4 weeks after the acute clinical illness or after the study period (postseason sample) were also identified for most persons. Persons with no further samples banked after their acute illness or the study period were included only in determination of seroprevalence.
Serologic Testing
All preseason and postseason samples were tested in a blinded fashion by indirect immunofluorescence assay for antibody reactive with E. chaffeensis with an assay previously described for human granulocytic ehrlichiosis, which has been widely employed for HME using different antigen substrates ( 13 ). A reciprocal antibody titer of > 64 was considered elevated and indicative of infection with E. chaffeensis . Seroconversion to E. chaffeensis was defined as a fourfold or greater increase in antibody titer between acute-phase or preseason and convalescent-phase samples.
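These two serologic criteria reduce to simple comparisons on reciprocal titers; a minimal sketch (Python, for illustration; titers are reciprocal dilutions such as 64, 128, 256):

```python
def interpret_titers(pre, post, cutoff=64):
    """Classify a pair of reciprocal IFA titers against E. chaffeensis:
    a titer at or above the study cutoff (64) is elevated, and a
    fourfold or greater rise defines seroconversion."""
    return {
        "pre_elevated": pre >= cutoff,
        "post_elevated": post >= cutoff,
        "seroconversion": pre > 0 and post >= 4 * pre,
    }

# The hospitalized case described below: titer rose from 64 to 1,024
print(interpret_titers(64, 1024))   # 16-fold rise -> seroconversion: True
```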
Statistical Analysis
Incidence rates were described as the number of cases of seroconversion divided by the total population of interest. The 95% confidence intervals were determined by using Stata statistical software version 7.0 (Stata Corporation, College Station, TX).
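For illustration, the same point estimate and an exact (Clopper-Pearson) binomial interval can be computed as follows (a Python/scipy sketch, not the Stata code used in the study; the example counts are those reported in the Results below):

```python
from scipy.stats import beta

def incidence_with_ci(cases, n, alpha=0.05):
    """Seroconversion rate with an exact (Clopper-Pearson) 95% CI."""
    p = cases / n
    lower = beta.ppf(alpha / 2, cases, n - cases + 1) if cases > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, cases + 1, n - cases) if cases < n else 1.0
    return p, lower, upper

# 2 seroconversions among the 30 symptomatic patients with paired samples
p, lower, upper = incidence_with_ci(2, 30)
print(f"{100 * p:.2f}% (95% CI {100 * lower:.2f} to {100 * upper:.1f})")
# -> 6.67% (95% CI 0.82 to 22.1), matching the interval reported below
```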
Results

We initially identified a total of 176 patients from the center’s records; 43 were excluded because specimens for testing were unavailable, leaving 133 in our study cohort. Thirty-six (27.1%) had symptoms compatible with HME (29 found by screening of ICD-9 codes and for doxycycline use, 7 found after chart review of initial asymptomatic candidates), and 97 (72.9%) had no symptoms suggestive of this diagnosis. Characteristics of the cohort are shown in the Table . The median CD4 count was 370 cells/mm^3. Symptomatic participants had significantly more visits (p<0.001) to the clinic during the study period and were significantly (p=0.035) more likely to have received antibiotic therapy (excluding doxycycline; data not shown). As doxycycline was used to select for symptomatic participants, doxycycline therapy was not included in the analysis of antibiotic use. Other characteristics (specifically age, gender, baseline CD4 count, baseline viral load, use of HAART, use of prophylaxis for opportunistic infection [ Pneumocystis carinii pneumonia and Mycobacterium avium complex], and average number of visits during the study period) between the symptomatic and asymptomatic subgroups did not differ significantly.
None of the patient specimens obtained before the 1999 tick season had serologic evidence of prior Ehrlichia infection, resulting in a baseline seroprevalence of 0% for our cohort. Of the 122 patients with paired samples available (92 asymptomatic, 30 symptomatic), 1 patient had a clinical syndrome compatible with HME and had a significant rise in antibody titer to E. chaffeensis during the study period (initial titer 64; postseason titer 1,024). Clinically notable disease characterized by fever, myalgias/arthralgias, leukopenia, and thrombocytopenia developed in this patient after tick exposure and required a 4-day hospitalization. During this hospitalization, his diagnosis was confirmed by conducting a polymerase chain reaction assay on his serum, which was positive for E. chaffeensis . His symptoms resolved after a course of doxycycline. A second patient with a 10-day history of symptoms compatible with HME (fatigue, cough, and overall malaise), but no documentation of tick exposure or tick bite, had an initial acute-phase titer of 512, drawn when first seen by a clinician (10 days after symptom onset), fulfilling case criteria for probable Ehrlichia infection ( 14 ). This patient did not have an earlier preseason sample available for analysis but did have a postseason titer of 512 obtained 4 months after clinical illness, suggesting a prolonged elevation in antibody titer. His primary caregiver thought he had an upper respiratory tract infection and prescribed doxycycline for his illness. His symptoms resolved without hospitalization.
These two cases resulted in a seroincidence among symptomatic patients of 6.67% (95% confidence interval [CI] 0.82, 22.1) and an overall incidence of 1.64% (95% CI 0.2, 5.8). No asymptomatic cases were identified in our cohort (upper 95% CI for seroconversion in the asymptomatic population, 3.2%).

Discussion
Researchers have conducted various serologic studies to ascertain the epidemiology of E. chaffeensis infection in specific populations. Carpenter et al. ( 10 ) found a seroincidence of 25.7% in febrile patients in North Carolina with a history of a recent tick bite. In a prospective seroepidemiologic study of residents living in a rural community in California, prevalence rates of 4.6% were reported, and most of the infected participants recalled no recent compatible illness ( 11 ). In a comparison of two golf-oriented retirement communities in middle Tennessee, one abutting a wildlife-management area and one 20 miles away from the area used as a control population, Standaert et al. found seroprevalence rates of 12.5% and 3.3%, respectively ( 12 ). A study on the seroprevalence in children residing in HME-endemic areas, including Tennessee, found a seroprevalence rate (as defined by E. chaffeensis antibody titer >1:80) of nearly 15% among children undergoing phlebotomy in Nashville ( 15 ). None of these studies, however, investigated the incidence rates for immunosuppressed persons, such as persons infected with HIV, who may be at increased risk for symptomatic disease after ehrlichial infection.
Our findings indicate that the prevalence and incidence of HME attributable to E. chaffeensis infection in an HIV-positive population are quite low in a cohort of HIV-positive patients receiving care at an urban HIV clinic within an HME-endemic region. The incidence rate in our study was similar to those previously reported in a cohort of healthy military recruits in an area endemic for E. chaffeensis (1.3%) ( 3 ). However, only 33.3% of seropositive persons in that study had a compatible febrile illness, and none of these symptomatic seroconverters were sufficiently ill to require medical care ( 3 ). In contrast, both of our case-patients had symptomatic disease of sufficient severity to require medical care, and one required hospitalization. Furthermore, none of our patients had serologic evidence of asymptomatic infection during the study period. Therefore, while the overall incidence of Ehrlichia infection was not increased in our cohort, these results are in agreement with other studies that indicate that HME can cause severe infection in HIV-positive persons.
Our patient with a diagnosis of probable HME had evidence of a sustained antibody response. The acute-phase serum sample and a convalescent-phase sample obtained 4 months later both had titers of 512. This finding suggests that the immune response mounted by HIV-positive persons against E. chaffeensis is durable and may persist for several months, similar to the response seen in HIV-negative persons. Because of the low rate of seroconversion in our cohort, we were unable to analyze data on specific risk factors (e.g., CD4 count or use of HAART) that might predispose persons with HIV infection to ehrlichiosis.
Our study had several limitations. The retrospective design placed constraints on the data that could be abstracted, thus introducing possible reporting bias. A prospective study, in contrast, would allow investigators to collect further information on exposure risks, such as level of outdoor activity, and could reduce the variability in symptom reporting found with our study. Our study population could also lead to bias and, as a result, limit generalizability of our results to the HIV-positive population as a whole. The Comprehensive Care Center draws patients from both metropolitan areas (Nashville) and rural communities; however, our cohort may have been more metropolitan and less likely to come into contact with wooded environments. Also, HIV-infected patients who regularly attended the clinic may have had more contact with the healthcare delivery system and thus been more likely to take regular antiretroviral medications that could reduce their viral burden and concomitant immunodeficiency. As a result, those patients who are noncompliant with follow-up (and, by extension, antiviral therapy) may be at greater risk for symptomatic infection and may have been missed in our analysis.
The use of serologic methods to determine actual prevalence and incidence rates for the HIV-infected population may also be problematic. A reduced antibody response to various antigens, including those contained in tetanus and pneumococcal vaccines, in HIV-infected patients has been described in previous studies ( 16 ). A potentially decreased ability to mount an immune response to E. chaffeensis may have led to false-negative antibody titers and an underestimate of the incidence of ehrlichiosis in this population. Such a finding was highlighted in two previous reports of HIV-positive persons with fatal E. chaffeensis infection who did not mount an antibody response during their illnesses ( 5 , 7 ). The serologic response to ehrlichial infection may also be blunted by tetracycline therapy, which, when given early in the course of Ehrlichia infection, can prevent a detectable serologic response from developing ( 10 , 12 ). Empiric treatment of febrile patients with a clinical picture resembling ehrlichiosis in our population thus could have blunted the antibody response and led to a falsely low seroincidence and prevalence.
In conclusion, we found that levels of HME infection in our HIV-positive cohort were similar to those in normal, healthy persons who received intense exposure to the outdoors in an HME-endemic area. However, both of our case-patients had clinical infections, one requiring hospitalization. Caregivers of HIV-positive patients in regions endemic for E. chaffeensis should consider ehrlichiosis as part of the growing list of potential opportunistic infections and maintain a high level of clinical suspicion for this disease. Prospective studies in HIV-positive populations are needed to fully understand the extent of infection with E. chaffeensis in these patients. | Manifestations of human monocytic ehrlichiosis (HME), a tick-borne infection caused by Ehrlichia chaffeensis , range from asymptomatic disease to fulminant infection and may be particularly severe in persons infected with HIV. We conducted a serologic study to determine the epidemiology of HME in HIV-positive patients residing in an HME-endemic area. We reviewed charts from a cohort of 133 HIV-positive patients who were seen during the 1999 tick season with symptoms compatible with HME (n=36) or who were asymptomatic (n=97). When available, paired plasma samples obtained before and after the tick season were tested by using an indirect immunofluorescence assay (IFA) to detect antibodies reactive to E. chaffeensis. Two symptomatic incident cases were identified by IFA, resulting in a seroincidence of 6.67% among symptomatic HIV-positive participants with paired samples available for testing and 1.64% overall. The baseline seroprevalence of HME was 0%. In contrast to infection in immunocompetent patients, E. chaffeensis infection in HIV-positive persons typically causes symptomatic disease.
During the last 25 years, the discovery of a number of newly identified infectious agents, such as Borrelia burgdorferi, Legionella pneumophila, and HIV, has raised concern in both the medical and lay communities about novel infectious threats to human populations. Among these emerging pathogens are several species of Ehrlichia, small, gram-negative bacteria transmitted by arthropod vectors that can cause human disease, such as human monocytic ehrlichiosis (HME). First described in 1986 ( 1 ), HME is caused by Ehrlichia chaffeensis, an organism transmitted primarily by the lone star tick ( Amblyomma americanum ) ( 2 ). Infection with E. chaffeensis can range from being clinically asymptomatic to causing a severe life-threatening illness. HME typically causes systemic symptoms (including fever, headache, malaise, rash, abdominal pain, nausea, and cough) and laboratory signs (leukopenia, thrombocytopenia, and elevated transaminase levels). Rarely, patients have fulminant infection with disseminated intravascular coagulation, sepsis, and adult respiratory distress syndrome, leading to death ( 2 ). Asymptomatic infection with E. chaffeensis may occur frequently, as suggested in a recent seroepidemiologic study in which 67% of military recruits in an E. chaffeensis–endemic area seroconverted without symptoms ( 3 ).
The risk for HME in immunocompromised patients is unknown; however, numerous case reports and reviews have described severe ehrlichial infection in immunosuppressed patients ( 4 – 6 ), including several reports of rapidly fatal infection with E. chaffeensis in AIDS patients ( 7 – 9 ). Diagnosis of HME in HIV-positive patients is often confounded by the fact that the signs and symptoms of ehrlichial infection mimic findings commonly associated with HIV infection, its complications, and the medications used to treat such patients. Delayed consideration and diagnosis of ehrlichial infection may result in additional illness if antibiotic therapy is not instituted promptly.
Studies investigating the epidemiology of E. chaffeensis infection have focused on healthy persons living in regions endemic for E. chaffeensis or on clinical findings among hospitalized case-patients ( 3 , 10 – 12 ). A systematic evaluation of the seroepidemiology of ehrlichial disease in HIV-infected persons has not been performed. We therefore conducted a descriptive seroepidemiologic study to ascertain the prevalence and incidence of E. chaffeensis infections in HIV-infected persons located in an area endemic for HME. | Acknowledgments
We are indebted to Ashgar Kheshi for his assistance with the database analysis, the attending physicians and care providers at the Comprehensive Care Center, John O’Connor and Richard D’Aquila for their careful review of the manuscript, and Steve Standaert and Christopher Paddock for their assistance with this study.
Dr. Talbot is an instructor of medicine in the Division of Infectious Diseases at Vanderbilt University School of Medicine. His research interests include hospital epidemiology and preventive medicine. | CC BY | no | 2022-01-25 23:40:19 | Emerg Infect Dis. 2003 Sep; 9(9):1123-1127 | oa_package/f6/be/PMC3016774.tar.gz |
PMC3016775 | 14519257 | Conclusions
First, serologic study indicated that the antibody to SARS-CoV appeared as early as 9 days after disease onset and that a high level of antibody could last for 1–2 months after disease onset. Previous reports indicated that the mean time for IgG seroconversion was 20 days and that seroconversion may start as early as 9–10 days. Our finding supports the results of Peiris et al. ( 7 , 12 ). Second, levels and appearance of antibody to SARS-CoV did not seem to be influenced by the use of ribavirin and immunosuppressive or immunomodulatory agents (corticosteroid or IVIG, a blood product prepared from the serum of 1,000 to 15,000 donors per batch) ( 13 ).
Third, the long-term persistence (19–29 days after illness onset) of viral RNA in the serum and sputum specimens of the SARS-CoV–specific IgG seroconverters is an important finding. Prolonged shedding of viral RNA in respiratory secretions (11 days after illness onset), plasma (up to 9 days), and stool specimens (25 days) was documented previously ( 6 ). Further studies are needed to determine whether viable viral particles exist in body fluids in the presence of high levels of antibody to the virus. Finally, one SARS patient, who did not have an underlying coexisting condition and did not receive any immunosuppressive agents during hospitalization, did not have detectable antibody to SARS-CoV 24 days (>21 days) after illness onset. The serum and sputum RT-PCR for SARS-CoV were positive in this patient, and the sequence was confirmed. Whether the patient was anergic to SARS-CoV infection is unknown. A later serum sample taken in the convalescent stage should be tested to determine whether this patient subsequently seroconverts ( 7 ).
The upsurge of IgG antibody to SARS-CoV correlated with the progression of ARDS, which necessitated ventilator support in four of the seven patients. A previous study suggested that an overexuberant host response, rather than uncontrolled viral replication, contributed to severe clinical symptoms and progressive lung damage ( 12 ). Whether the addition of SARS-CoV–specific antibody in SARS patients further aggravated the preexisting overactive immune-mediated deterioration was unclear.
High concentrations of viral RNA, up to 100 million molecules per milliliter, were detected in a sputum sample from an index patient on day 9 ( 6 ). In the present series, a physician contracted the infection from a patient (patient 2) 12 days after the onset of the patient's symptoms, indicating that shedding of the virus from the respiratory tract of symptomatic SARS patients may last for >12 days. The detection of viral RNA in the sputum samples of patient 2 collected 12 days after the onset of symptoms supports this clinical finding.
Dual infection caused by M. pneumoniae and SARS-CoV was found in patient 5. No evidence of M. pneumoniae infection existed in patient 6 from the same cluster. This finding is similar to a previous report ( 6 ). Four of our patients had elevated IgG antibody titers for C. pneumoniae, and five had elevated antibody titers against parainfluenzavirus 1, 2, or 3 in acute-phase serum samples without a fourfold rise of titers in convalescent-phase serum samples. Whether the antibody responses of these patients reflected past infections with C. pneumoniae, parainfluenzavirus, or both, or merely a cross-reaction with antibody against SARS-CoV, remains unclear.
As of May 16, 2003, complete genomic sequences for 13 SARS-CoV strains isolated from Hong Kong, Singapore, China, Canada, Vietnam, and Taiwan were available in GenBank. The number of nucleotides ranged from 29,705 (SIN2677 strain) to 29,751 (TOR2) ( 14 , 15 ). Since February 2003, at least three different clusters of SARS outbreaks occurred in different parts of Taiwan, and five strains were identified from patients in these clusters. The availability of sequence data for different strains in a given geographic area will have an immediate impact on efforts to trace the origins and transmission of SARS-CoV and to develop novel rapid diagnostic tests and a vaccine.
In summary, analysis of these seven patients with virologically or serologically documented infections caused by SARS-CoV in Taiwan not only extended the knowledge of this emerging novel disease but also provided microbiologic and immunologic clues for the physicians caring for patients suspected of having this disorder. Viral RNA may persist for some time in patients who seroconvert, and some patients may lack an antibody response to SARS-CoV >21 days after illness onset. An upsurge of antibody response was associated with the aggravation of respiratory failure requiring ventilator support. | The genome of one Taiwanese severe acute respiratory syndrome-associated coronavirus (SARS-CoV) strain (TW1) was 29,729 nt in length. Viral RNA may persist for some time in patients who seroconvert, and some patients may lack an antibody response (immunoglobulin G) to SARS-CoV >21 days after illness onset. An upsurge of antibody response was associated with the aggravation of respiratory failure. | In November 2002, cases of a life-threatening and highly contagious febrile respiratory illness of unknown cause were reported from Guangdong Province in southern China, followed by reports from Vietnam, Hong Kong, Singapore, Canada, the United States, and other countries ( 1 – 4 ). This illness was identified as a new clinical entity and designated as severe acute respiratory syndrome (SARS) in late February 2003. This disease has a high propensity to spread to healthcare workers and household members and may cause outbreaks in the community ( 1 – 4 ). Recent reports have demonstrated that a novel coronavirus, SARS-associated coronavirus (SARS-CoV), is associated with the pathogenesis of SARS ( 5 – 7 ). Laboratory diagnostic tests to analyze clinical specimens for SARS-CoV include reverse-transcriptase polymerase chain reaction (RT-PCR) specific for RNA and detection of specific antibody by using indirect fluorescence antibody and enzyme-linked immunosorbent assays ( 8 , 9 ). However, data on the timing and intensity of serologic responses after illness onset and the association of these responses with clinical manifestations of the disease are lacking.
In Taiwan, the first case of SARS occurred in a businessman working in Guangdong who was admitted to National Taiwan University Hospital (NTUH) on March 8, 2003. As of May 18, 2003, a total of 308 probable cases of SARS were reported by the Center for Disease Control, Department of Health, Taiwan ( 10 ).
The Study
This study included seven Taiwanese patients, treated at the National Taiwan University Hospital from March 8 to May 3, 2003, whose illness met the recent Centers for Disease Control and Prevention (CDC) and World Health Organization (WHO) case definition for probable cases of SARS ( 11 , 12 ). The patients were 26–53 years of age, and six were men. The incubation period ranged from 2 to 12 days. Of the seven patients, four had recently returned from China: two patients (patients 1 and 7) from Guangdong Province and two (patients 5 and 6) from Beijing. In addition, two family members (patients 2 and 3) and one healthcare worker (patient 4) were from one cluster (cluster A), which had household or healthcare contact with a SARS patient, and two patients (patients 5 and 6) were from another cluster, which had close contact with a SARS patient in an airplane.
All patients had fever (body temperature >38°C) and dry cough. Other symptoms included malaise (five patients), myalgia (five patients), and rigor (four patients). All but one patient (patient 7) had loose stools or diarrhea 2–10 days after febrile episodes, and five, including the four cluster A patients, had worsening diarrhea 9–14 days after febrile episodes. The mean interval between onset of symptoms and hospitalization was 7.3 days (range 4–12 days).
Pneumonia developed in all seven patients, acute respiratory distress syndrome (ARDS) developed in four (patients 1, 2, 3, and 6), and ventilator support was given 10–12 days after the onset of illness. Pancytopenia compatible with hemophagocytosis syndrome developed in patient 2. Five patients (patients 2, 3, 4, 5, and 6) received ribavirin, intravenous corticosteroid (methylprednisolone, 2 mg/kg/d), and intravenous immunoglobulin (IVIG, 1 g/kg/d for 2 days). Interstitial pneumonia developed in patient 7, who responded well to ribavirin and antibiotic treatment. All patients survived.
Urinary antigen detection for Streptococcus pneumoniae and Legionella pneumophila serogroup I was negative in all seven patients. Serum from patient 5 was positive for Mycoplasma pneumoniae immunoglobulin (Ig) M (enzyme-linked immunosorbent assay [ELISA]) antibody, with a fourfold increase in complement fixation (CF) antibody titer between acute- (<1:40) and convalescent-phase sera (1:160). An elevated Chlamydia pneumoniae CF antibody titer (1:32) but a negative reaction for C. pneumoniae IgM (ELISA) antibody was found in the acute-phase serum samples from patients 1 and 6 and in the acute- (1:32) and convalescent-phase serum (1:32) samples from patients 5 and 7. The antibody titers of acute- and convalescent-phase serum samples for C. pneumoniae, C. trachomatis, C. psittaci, and L. pneumophila in the other patients showed no significant increase. Five patients (patients 1, 2, 4, 5, and 6) had elevated CF antibody levels (≥1:16) against parainfluenzavirus 1, 2, or 3. Cultures for influenza virus, parainfluenzavirus, mumps, respiratory syncytial virus, adenovirus, enterovirus, herpes simplex virus, varicella-zoster virus, and cytomegalovirus were negative from various clinical samples of these patients.
Nucleic acid was extracted from the sputum and serum samples and the infected Vero E6 cells by using a viral RNA kit (QIAamp, Qiagen Inc., Valencia, CA). RT-PCR for SARS-CoV was performed with three sets of primers (IN-6 and IN-7; Cor-p-F1 and Cor-p-R2; and BNIinS and BNIAs) developed by the CDC and the WHO Network Laboratory. The RT-PCR products were analyzed, and the unique fragment was cloned and sequenced ( 6 , 11 ). RT-PCR test results for SARS-CoV were positive in oropharyngeal swabs from patients 6 and 7; sputum from patients 1, 2, 3, 4, and 5; and serum specimens from patients 1, 2, 3, 4, 5, and 7. Cultures of the other oropharyngeal swabs and serum specimens were negative.
Cytopathic effects in the Vero E6 cells were first found between day 3 and day 4 after inoculation of serum specimens from patients 3 and 4. The initial cytopathic effect was focal, with cell rounding, and was followed by cell detachment. Similar cytopathic effects developed rapidly (between day 2 and day 3) after subculture.
Ultra-thin sections were prepared for electron microscopy by fixing a washed infected Vero E6 cell pellet with 2.5% glutaraldehyde and embedding in Spurr's resin. SARS-CoV particles (range 60–80 nm in diameter) were identified by electron microscopy ( Figure 1 A and B ). RT-PCR from the infected Vero E6 cells identified the same amplicon. Sequences of the amplicons from all patients were identical and were also identical to those from infected Vero E6 cells.
The genome of the SARS-CoV (TW1) strain (GenBank accession no. AY291451) from patient 3 was 29,729 nt in length. A comparison of TW1 sequences to the sequences described previously is summarized in the Table . The number of nucleotide differences between this TW1 isolate and the Urbani (AY278741), Tor-2 (AY274119), HKU-39848 (AY278491), and CUHK-W1 (AY278554) strains was 6, 3, 12, and 10, respectively.
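Once the genomes have been pairwise-aligned, strain-to-strain comparisons of this kind reduce to counting mismatched alignment columns. The short Python sketch below is illustrative only; it is not the analysis code used in the study, and it assumes the sequences have already been fetched from GenBank and aligned (the genome lengths differ slightly, so alignment is required) into equal-length strings with "-" marking gaps:

    # Count substitution differences between two pre-aligned genome sequences.
    # Gap columns ("-") are skipped so that only substitutions are counted.
    def count_differences(aln1: str, aln2: str) -> int:
        assert len(aln1) == len(aln2), "sequences must be aligned first"
        return sum(1 for a, b in zip(aln1.upper(), aln2.upper())
                   if a != b and a != "-" and b != "-")

    # Toy example: two 12-nt fragments that differ at two positions.
    print(count_differences("ACGTTGCAACGT", "ACGTAGCAACTT"))  # -> 2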
IgG antibody to SARS-CoV was detected by a standard indirect fluorescence antibody assay (IFA) with serial serum specimens from the seven patients. Spot slides for IFA were prepared by applying a suspension of SARS-CoV–infected Vero E6 cells (virus from one patient, patient 4) mixed with uninfected cells. Slides were dried and fixed in acetone. The conjugate used was goat antihuman IgG conjugated to fluorescein isothiocyanate (Organon Teknika-Cappel, Turnhout, Belgium). The starting dilution of serum specimens was 1:25 ( 5 ). Ten serum samples obtained from 10 pregnant women during routine prelabor check-ups were used as control sera. Two IVIG products, one domestic (from Taiwanese donors) and one imported (Bayer, Leverkusen, Germany), were also tested for the presence of antibody.
All serum samples from the 10 pregnant women and the two IVIG products were negative for IgG antibody (<1:25) to SARS-CoV. Six patients had detectable IgG antibody to SARS-CoV during the course of illness, and all of them had at least fourfold elevation of antibody levels between acute- and convalescent-phase serum samples (peak levels range 1:400–≥1:1600) ( Figure 2 ). Antibody (titer >1:25) could be detected in these six patients 9–18 days (mean 12.3 days) after the onset of illness. The antibody titer increased to a plateau 4–10 days after the appearance of antibody. The high antibody levels might last for 1 to >2 months after onset of illness ( Figure 2 ). One previously healthy patient (patient 7) with positive SARS-CoV RNA by RT-PCR from both sputum and serum specimens had no detectable antibody to SARS-CoV in serum specimens obtained 7, 10, 14, and 24 days after illness onset. Although the antibody levels reached a plateau in all patients, viral RNA persisted in the serum samples from patients 1 and 2 and sputum from patients 1 and 4 for 19 to 29 days after onset of their illness.
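Because IFA titers come from a doubling-dilution series starting at 1:25 (25, 50, 100, ...), a "fourfold elevation" corresponds to a rise of at least two dilution steps. A minimal Python sketch, using hypothetical titer values within the range reported above, makes the criterion explicit:

    import math

    def fold_rise(acute_titer: int, convalescent_titer: int) -> float:
        """Fold rise between reciprocal acute- and convalescent-phase titers."""
        return convalescent_titer / acute_titer

    # Hypothetical example: a titer climbing from 1:100 to 1:1600.
    rise = fold_rise(100, 1600)
    print(rise)             # 16.0-fold
    print(math.log2(rise))  # 4.0 doubling-dilution steps
    print(rise >= 4)        # True -> meets the fourfold-rise criterion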
Although four patients had received ribavirin, corticosteroid, and IVIG treatment in the early stage of the disease, antibody was detected as early as 10–12 days after the onset of illness. The peak level of antibody was 1:400 in patients 2 and 6, 1:800 in patient 3, and >1:1600 in patient 1. | Acknowledgments
We are indebted to many members of the frontline medical and nursing staff and laboratory personnel of the National Taiwan University Hospital for their management of these patients. We thank Professor Ding-Shinn Chen for his critical review and constructive comments on this manuscript.
Dr. Hsueh is an associate professor in the Departments of Laboratory Medicine and Internal Medicine, National Taiwan University Hospital, National Taiwan University College of Medicine. His research interests include mechanisms of antimicrobial resistance and molecular epidemiology of emerging pathogens. He is actively involved in a national research program for antimicrobial drug resistance (Surveillance from Multicenter Antimicrobial Resistance in Taiwan-SMART) and is a member of the SARS Research Group of National Taiwan University College of Medicine and National Taiwan University Hospital. | CC BY | no | 2022-01-24 23:39:45 | Emerg Infect Dis. 2003 Sep; 9(9):1163-1167 | oa_package/bb/b1/PMC3016775.tar.gz |
PMC3016776 | 14519258 | Conclusions
Our study shows that 92% of nosocomial MRSA strains were sensitive to co-trimoxazole in 1997 as compared with 31% in 1988. Several factors may have influenced the emergence of co-trimoxazole–sensitive MRSA, including the reduced usage of this drug in our institution. According to the pharmacy records, usage of co-trimoxazole in our institution decreased progressively from 28 daily doses per 1,000 hospital days in 1990 to 17 daily doses per 1,000 hospital days in 1997 ( 3 ). A recent multicenter report from several Belgian hospitals showed an increase in co-trimoxazole susceptibility among MRSA isolates ( 4 ). These findings contrast with trends of increasing resistance of S. aureus to a variety of anti-staphylococcal drugs other than co-trimoxazole since the beginning of the antibiotic era. These trends culminated recently in the appearance of glycopeptide resistance in hospitals and methicillin resistance in the community ( 5 ). Whether our findings reflect an increase of co-trimoxazole–sensitive MRSA clone(s) in our institution needs further exploration. In settings where co-trimoxazole is extensively used, a substantial increase of MRSA resistance to co-trimoxazole has been observed. For example, Martin et al. described a serial cross-sectional study of resistance to co-trimoxazole among all clinical isolates of S. aureus and Enterobacteriaceae during a 16-year period at San Francisco General Hospital ( 6 ). In this study, resistance to co-trimoxazole increased from 0% to 48% in S. aureus isolates obtained from HIV-infected patients. The authors explained this increase of resistance to co-trimoxazole by the extensive use of this drug as prophylaxis against Pneumocystis carinii pneumonia.
Ultimately, our data may favor the use of co-trimoxazole as a potentially cost-effective antimicrobial drug for treating MRSA infections. Co-trimoxazole has been shown to be effective against MRSA both in vitro and in vivo in mice ( 7 ), as well as in clinical reports on meningitis, septicemia, and endocarditis ( 8 , 9 ). A controlled comparative trial of intravenous co-trimoxazole versus intravenous vancomycin in 101 cases of severe S. aureus infections in intravenous drug users was conducted by Markowitz et al. ( 10 ) in 1992. The authors reported 100% cure rates for either drug in MRSA infections, including bacteremia. More recently, Stein et al. showed varying degrees of success in treating orthopedic implant infections caused by S. aureus with co-trimoxazole ( 11 ). Unfortunately, this study did not distinguish MRSA from methicillin-sensitive S. aureus strains.
Recent in vitro data have shown good activity of co-trimoxazole against clinical isolates of vancomycin-intermediate S. aureus ( 12 , 13 ) and vancomycin-resistant S. aureus ( 14 ). In some of these cases, co-trimoxazole in combination with surgical debridement and other anti-staphylococcal drugs has been used successfully ( 12 , 14 ). In clinical practice, cyclical usage of co-trimoxazole and vancomycin, and possibly of newer anti-MRSA drugs such as oxazolidinones and streptogramins, may prove valuable in slowing the development of antibiotic resistance in MRSA. The in vitro results presented here, if confirmed in other institutions, in conjunction with anecdotal clinical data, should encourage the performance of a clinical trial of sufficient size to compare co-trimoxazole to vancomycin in treating MRSA infections. | Among bloodstream methicillin-resistant Staphylococcus aureus (MRSA) isolates from adult patients in a single hospital, susceptibility to co-trimoxazole increased progressively from 31% in 1988 to 92% in 1997 (p<0.0001). If also observed in other institutions, these findings should encourage the performance of a clinical trial of sufficient size to compare co-trimoxazole to vancomycin in treating MRSA infections.
Methicillin-resistant Staphylococcus aureus (MRSA) is a growing medical concern. During the last 2 decades, the rates of infections caused by MRSA increased among hospitalized patients in most developed countries ( 1 ). The aim of this study was to examine trends in antibiotic resistance of hospital-acquired bloodstream MRSA isolates from 1988 to 1997 in our institution.
The Study
Included in the analysis were all patients >18 years of age who had hospital-acquired bacteremia caused by S. aureus. The study took place at Rabin Medical Center, Beilinson Campus, Petach-Tikva, Israel, a 900-bed university hospital. Our center serves an urban population of approximately 1 million persons as both a first-line and tertiary facility. A prospective surveillance of all bacteremic episodes occurring at our medical center is performed continuously and, since 1988, has been incorporated into a computerized database for bacteremia. Episodes of bacteremia are detected by daily surveillance of the microbiology laboratory records, with an annual range of 700 to 900 episodes.
Antibiotic susceptibility was tested by using the disk diffusion technique on Mueller-Hinton agar, according to the procedures established by the National Committee for Clinical Laboratory Standards (NCCLS) ( 2 ). Plates were incubated at 30°C for 18 h and 40 h for methicillin (5 μg/disk) and at 37°C for 18 h for other antibiotics. Bacteremia was considered to be hospital-acquired if it appeared >48 h after admission.
During the study period, a total of 944 episodes of S. aureus bacteremia were documented. We found 598 (63%) hospital-acquired episodes, with an annual number of episodes ranging from 35 to 121. Among the hospital-acquired episodes, 270 (45%) were due to MRSA strains. During the recent decade, rates of resistance to methicillin were high but stable among the hospital-acquired isolates, ranging from 25% to 57%. Rates of susceptibility to co-trimoxazole among hospital-acquired MRSA isolates increased significantly, from 31% in 1988 to 92% in 1997 (p=0.0001) ( Figure ).
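A year-by-year increase of this kind is commonly assessed with a test for linear trend in proportions, such as the Cochran-Armitage test. The Python sketch below implements the textbook formula (scipy assumed available for the normal tail probability); the per-year counts are hypothetical placeholders chosen only to match the reported endpoint percentages, since the article gives percentages rather than annual denominators:

    from math import sqrt
    from scipy.stats import norm

    def cochran_armitage(successes, totals, scores):
        """Cochran-Armitage test for a linear trend in proportions."""
        N, R = sum(totals), sum(successes)
        p_bar = R / N
        t = sum(s * (r - n * p_bar)
                for s, r, n in zip(scores, successes, totals))
        var = p_bar * (1 - p_bar) * (
            sum(s * s * n for s, n in zip(scores, totals))
            - sum(s * n for s, n in zip(scores, totals)) ** 2 / N)
        z = t / sqrt(var)
        return z, 2 * norm.sf(abs(z))  # two-sided p value

    years = [1988, 1991, 1994, 1997]   # ordered group scores
    sensitive = [8, 15, 24, 33]        # HYPOTHETICAL susceptible counts
    isolates = [26, 30, 32, 36]        # HYPOTHETICAL annual denominators
    z, p = cochran_armitage(sensitive, isolates, years)
    print(f"z = {z:.2f}, two-sided p = {p:.2g}")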
The hospital-acquired MRSA isolates were persistently highly resistant to chloramphenicol (69% in 1988 and 100% in 1997; p=NS), gentamicin (89% in 1988 to 94% in 1997; p=NS), and ciprofloxacin (87% in 1988 to 96% in 1997; p=NS). The resistance to clindamycin (62% in 1988 to 92% in 1997; p=0.04), fusidic acid (6% in 1988 to 14% in 1997; p=0.03), and rifampicin (21% in 1988 to 76% in 1997; p=0.02) increased significantly. All isolates were sensitive to vancomycin. | Dr. Bishara is a specialist in internal medicine and infectious diseases, and he serves as a senior physician and infectious disease consultant at the Rabin Medical Center, Israel. His major research interests are infective endocarditis and cardiovascular and nosocomial infections. | CC BY | no | 2022-01-31 23:44:34 | Emerg Infect Dis. 2003 Sep; 9(9):1168-1169 | oa_package/71/d1/PMC3016776.tar.gz |
PMC3016777 | 14531384 |
The incidence of this disease is largely unknown, especially in Asia. Q fever has been reported from Japan and China ( 1 ). Seroepidemiologic surveys have shown that subclinical infection is common worldwide. Large outbreaks of Q fever have also been reported in many countries in Europe ( 4 ). A case series of acute Q fever was diagnosed in a prospective study in patients with acute febrile illness who were admitted to four hospitals in northeastern Thailand: Udornthani Hospital, Udornthani Province; Maharat Nakornrachasema Hospital, Nakornrachasema Province; Loei Hospital, Loei Province; and Banmai Chaiyapod Hospital, Bureerum Province. Two serum samples were taken from these patients, on admission and at a 2- to 4-week outpatient follow-up visit, and stored at –20°C until serologic tests were performed at the Faculty of Medicine Siriraj Hospital, Mahidol University, and the National Research Institute of Health, Public Health Ministry of Thailand. All serum samples were tested for the serologic diagnosis of leptospirosis, scrub typhus, murine typhus, and dengue infection as previously described ( 5 , 6 ). After these serologic tests were performed, serum samples from patients with unknown diagnosis were sent for the serologic test for Q fever at Unité des Rickettsies, Faculté de Médecine, Marseille, France. The microimmunofluorescent antibody test, using a panel antigen of C. burnetii, Rickettsial honei, R. helvetica, R. japonica, R. felis, R. typhi, Bartonella henselae, B. quintana, Anaplasma phagocytophila, and Orientia tsutsugamushi , was used as described previously ( 6 ).
A total of 1,171 serum specimens from 678 patients were tested for Q fever. Nine case-patients (1.3%, eight male and one female) fulfilled the diagnosis of acute Q fever. The median age was 42 (range 15–62) years. All patients were rice farmers, and their farm animals were chicken and cattle. The median duration of fever was 3 (range 1–7) days before admission into the hospital. When initially seen, all patients had acute febrile illness, headache, and generalized myalgia (i.e., a flulike syndrome). Clinical manifestations of acute Q fever in these patients ranged from this flulike syndrome (three patients), pneumonitis (one patient), hepatitis (two patients), pneumonitis and renal dysfunction (one patient), hepatitis and renal dysfunction (one patient), to severe myocarditis and acute renal failure (one patient). An epidemic of leptospirosis has been occurring in Thailand since 1996 ( 7 ). All patients in this study received a diagnosis of either leptospirosis or acute fever of undefined cause; therefore, empirical therapy, including penicillin G sodium, doxycycline, and cefotaxime or ceftriaxone, was administered. The patient with hepatic and renal dysfunction was treated with co-trimoxazole. The patient who had severe myocarditis and acute renal failure was treated with a penicillin G sodium and doxycycline combination. He also received a dopamine infusion and hemodialysis. The median duration between admission and a reduction of fever was 3 days (range 1–7) in this case series.
Results of several seroprevalence studies, using the complement fixation test, conducted in both humans and animals suggest that C. burnetii infection has been widespread in Thailand since 1966 ( 8 ). The prevalence in asymptomatic persons varies from 0.4% to 2.6% ( 9 ), and studies in domestic animals show that the highest prevalence of this infection occurs in dogs (28.1%). The prevalence in goats, sheep, and cattle varies from 2.3% to 6.1% ( 9 ). However, this clinical case series of acute Q fever is the first diagnosed in this country. The disease was diagnosed in patients in four hospitals, situated in various parts of the northeastern region of Thailand. These data confirmed that Q fever is widespread in this country. The disease had been unrecognized previously because the specific serologic test was not widely available in Thailand.
A self-limited course was suspected in four cases in this series. However, severe cases, especially those with myocarditis, could be fatal. Therefore, doxycycline should be an empirical therapy for patients with acute febrile illness in areas where leptospirosis, scrub typhus, and acute Q fever are suspected, such as in rural Thailand. Further studies to investigate the epidemiology of Q fever in this country are needed. | CC BY | no | 2022-01-31 23:46:21 | Emerg Infect Dis. 2003 Sep; 9(9):1186-1188 | oa_package/c6/9c/PMC3016777.tar.gz |
PMC3016778 | 14519261 | Discussion
Before 2000, no cases of P. falciparum had occurred in Chinese immigrants living in northern and central Italy, despite a large immigrant population. An initial cluster of 22 cases was described during summer 2000 in the Lombardy Region ( 3 ). A cluster of six cases was detected in Tuscany during the same period ( 4 ). In both outbreaks, the researchers described high rates of severe disease. All patients were exposed to malaria during a prolonged journey to Europe (3–9 months) through a number of Asian and African countries.
A total of 10 sporadic cases were reported to the Italian Ministry of Health in 2001 (L. Vellucci, Directorate for Prevention, Ministry of Health, Italy, pers. comm.). The 2003 cluster prompted us to examine hospital records from August 2002, in which we identified an additional, previously undetected cluster of 12 malaria cases in four of our study hospitals (data not included in the table). The Ministry of Health recorded 26 confirmed P. falciparum cases during 2002 (L. Vellucci, pers. comm.), suggesting an ongoing (and possibly increasing) influx of Chinese laborers. Some differences exist between the later cluster and the 2000 cluster. In the 2003 cluster, the proportion of severe cases was lower than in the previous reports; the patient with the fatal case was first admitted to a general hospital where the diagnosis of malaria was not considered, whereas in the other cases, awareness of the possibility of malaria had been raised by the earlier cluster ( 3 , 4 ) and led to prompt diagnosis and treatment, with favorable outcomes. A single African country, Côte d'Ivoire, was the transit country for most of the patients. In previous cases, a number of other African countries were used for transit. Visa processing for entry to Europe was arranged by the courier organization in Côte d'Ivoire. The clustering of cases suggests that the illegal immigrants arrive in Europe in groups. Although Italy was the final destination, at least some immigrants entered through France, which also has had reports of P. falciparum cases in Chinese immigrants (F. Legros, Centre National de Référence de l'Epidémiologie du Paludisme, France, pers. comm.). As malaria is probably underreported in Europe, additional cases may well have occurred.
Use of clandestine travel by air to emigrate from China, where severe acute respiratory syndrome (SARS) is present, poses a threat to the African countries of transit, where the introduction of the SARS virus could have devastating consequences for health systems, with a potential overlap with the HIV epidemic. Other diseases could be spread or acquired by the immigrants in the countries of transit. While curtailing the huge system of illegal immigration to Europe is difficult, we cannot overemphasize the need for sound surveillance of imported infectious diseases on this continent.
Both clusters of malaria were detected early through Salute Internazionale Regione Lombardia (SIRL), a network on imported diseases of the Lombardy Region, in conjunction with the European Network on Imported Infectious Disease Surveillance (TropNetEurop). Any physician in Europe who sees a Chinese patient with a history of recent travel and a high fever should exclude malaria, besides considering the possible diagnosis of SARS. Respiratory symptoms are also frequent in uncomplicated malaria ( 5 , 6 ), and acute respiratory distress syndrome has long been recognized as one of the main features of severe malaria ( 7 , 8 ). | Between November 2002 and March 2003, 17 cases of malaria (1 fatal) were observed in illegal Chinese immigrants who traveled to Italy through Africa. A further cluster of 12 was reported in August 2002. Several immigrants traveled by air, making the risk of introducing severe acute respiratory syndrome a possibility should such illegal immigration continue.
From November 2002 to March 2003, 17 cases of malaria were noted among illegal Chinese immigrants in seven hospitals across central and northern Italy (15 cases of Plasmodium falciparum, 1 case of P. malariae, and 1 mixed infection of P. falciparum and P. malariae). One patient died. Until recently, imported malaria in this group of illegal immigrants from China was not detected by malaria surveillance institutions within Europe ( 1 ). Although malaria is still endemic in parts of China, transmission in these regions is low-level ( 2 ); the predominant species is P. vivax. P. falciparum transmission is confined to provinces bordering Laos and Viet Nam. None of the patients reported coming from those areas. Investigating the cluster proved difficult because of language problems and reticence to provide detailed information about travel, since the patients were illegal immigrants ( Table ). The fatal case occurred in a general hospital in northern Italy. The 20-year-old woman (case 7) was admitted with a high fever, severe hemolytic anemia (hemoglobin 4.4 g/dL), and metabolic acidosis. After 48 hours, because of hypotension, seizures, and subsequent coma, she was transferred to the intensive-care unit of a referral hospital for infectious diseases. The blood film showed a 70% parasitemia with P. falciparum. The patient died 96 hours after admission, despite aggressive drug therapy and plasmapheresis. | Acknowledgments
We are grateful to Loredana Vellucci and Stefania D'Amato for providing information on malaria in Chinese immigrants in Italy, and to Fabrice Legros for the corresponding information for France.
Dr. Bisoffi is the head of the Center for Tropical Diseases at the Sacro Cuore Hospital of Negrar, Verona, Italy, a referral center for imported diseases. His main research interests concern the surveillance and diagnosis of imported tropical and infectious diseases and the clinical decision-making in tropical medicine. He is the secretary general of the Italian Society of Tropical Medicine and teaches in several Italian and European institutes. | CC BY | no | 2022-01-26 23:35:44 | Emerg Infect Dis. 2003 Sep; 9(9):1177-1178 | oa_package/4e/35/PMC3016778.tar.gz |
PMC3016779 | 14519240 | Methods
Study Population
In early March 2003, an outbreak of SARS occurred in the Prince of Wales Hospital (the teaching hospital of The Chinese University of Hong Kong). Our study participants were patients admitted to our hospital for suspected SARS during the first week of the outbreak ( 12 ). These patients fulfilled the World Health Organization definition for probable SARS cases ( 13 ). Briefly, patients had an acute onset of fever (>38°C, most with chills or rigor), dyspnea, myalgia, headache, and hypoxemia. Peripheral air-space consolidation subsequently developed in all study patients as observed on chest radiographs or thoracic computed tomographic scan; patients showed no response to antimicrobial drugs prescribed for typical and atypical pneumonia (β-lactams, macrolides, and fluoroquinolones).
During our study, we examined 48 patients who comprised our first group of SARS patients and had a clear history of exposure. Forty-five participants were adults (26 men, 19 women) 21–69 years of age (mean 35.4 years of age; standard deviation 11.5 years). The group included 26 healthcare workers and 7 medical students who worked in a ward (index ward) in the hospital where a few patients with SARS had stayed. The remaining 12 patients had been hospitalized or were visitors to the same ward. Three study participants were children (two boys, one girl) 2–7 years of age. All these children were living with persons who had been hospitalized or were visitors to the index ward and who had contracted SARS.
Virus Isolation
Nasopharyngeal aspirate (NPA) samples were taken from all patients by inserting a suction catheter into the nasopharyngeal area via the nostril. A low suction force was applied to collect approximately 0.5 mL of fluid, which was then transferred into 2 mL of viral transport medium. All NPAs were inoculated onto rhesus monkey kidney (LLC-MK2), human laryngeal carcinoma (HEp-2), Madin-Darby canine kidney (MDCK), human embryonic lung fibroblast, Buffalo green monkey kidney (BGM), and African green monkey kidney (Vero) monolayers. All cell cultures were incubated at 37°C, except for MDCK, which was incubated at 33°C. All NPAs were also inoculated into an additional LLC-MK2 cell culture tube and incubated at 33°C. Cell monolayers were examined daily for cytopathic effect. After 14 days of incubation, a hemadsorption test for LLC-MK2 and MDCK monolayers was performed. All cell culture materials were kept frozen for subsequent analyses.
Human Metapneumovirus Reverse Transcription-Polymerase Chain Reaction (RT-PCR)
To detect HMPV, we used a nested RT-PCR targeting the F gene. This RT-PCR was applied to all cell cultures, regardless of cytopathic effect. After one cycle of freeze-and-thaw, RNA was extracted from cell cultures by using the QIAamp Viral RNA Mini Kit (QIAGEN GmbH, Hilden, Germany), according to the manufacturer's protocol. The outer primers were 5′-AGC TGT TCC ATT GGC AGC A-3′ for RT and amplification and 5′-ATG CTG TTC RCC YTC AAC TTT-3′ (R = A or G, Y = C or T) for amplification. These primers were designed on the basis of HMPV sequences available from GenBank. The reaction was carried out in a single tube (Superscript One-Step RT-PCR and Platinum Taq; Invitrogen Corp., Carlsbad, CA) by using 0.2 μM of each primer and thermal cycling conditions of 50°C for 30 min and 94°C for 3 min, followed by 40 cycles of 94°C for 30 s, 52°C for 30 s, and 72°C for 45 s, and a final extension at 72°C for 7 min. For the second round of amplification, we used 0.2 μM of inner primers 5′-GAG TAG GGA TCA TCA AGC A-3′ and 5′-GCT TAG CTG RTA TAC AGT GTT-3′. The PCR was conducted at 95°C for 15 min for denaturation of DNA templates and activation of the hot-start DNA polymerase (HotStarTaq, QIAGEN GmbH), followed by 40 cycles at 94°C for 30 s, 54°C for 30 s, and 72°C for 45 s, and a final extension at 72°C for 7 min. PCR products detected by agarose gel electrophoresis were analyzed for sequence homology with known HMPV strains. In addition to virus isolation, RNA was extracted directly from NPAs for HMPV RT-PCR by using the same protocol as for cell cultures.
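The degenerate bases in these primers (R = A or G, Y = C or T) mean that each degenerate primer is really a small pool of concrete oligonucleotides, which lets the assay tolerate strain-to-strain variation in the F gene. A short illustrative Python sketch (not part of the published protocol) expands one such primer into its members:

    from itertools import product

    # IUPAC degenerate codes used in the primers quoted above.
    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT"}

    def expand(primer: str):
        """Return every concrete sequence encoded by a degenerate primer."""
        return ["".join(p) for p in product(*(IUPAC[b] for b in primer))]

    # The second outer primer, written without spaces; it contains one R
    # and one Y, so it expands to 2 x 2 = 4 concrete 21-mers.
    for seq in expand("ATGCTGTTCRCCYTCAACTTT"):
        print(seq)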
Coronavirus RT-PCR
RNA was extracted from the supernatant of Vero cell cultures showing cytopathic effect by using the same method as for HMPV. Coronavirus was detected by RT-PCR with primers COR-1 (sense) 5′ CAC CGT TTC TAC AGG TTA GCT AAC GA 3′ and COR-2 (antisense) 5′ AAA TGT TTA CGC AGG TAA GCG TAA AA 3′, which had been shown to be specific for the novel coronavirus detected in patients with SARS ( 14 ). The RT-PCR for coronavirus was conducted similarly to that for HMPV (by using 0.6 μM of each primer and thermal cycling conditions of 54°C for 30 min and 94°C for 3 min; 45 cycles of 94°C for 45 s, 60°C for 45 s, and 72°C for 45 s; and a final extension at 72°C for 7 min).
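As a back-of-envelope check on such primers, length, GC content, and a rough melting-temperature estimate can be computed with the common approximation Tm ≈ 64.9 + 41 × (GC − 16.4) / length. The Python sketch below applies it to the two primer sequences quoted above; such rules of thumb are crude, and the assay's actual annealing temperature (60°C) comes from the published protocol, not from this estimate:

    def primer_stats(name: str, seq: str) -> None:
        """Print length, GC content, and a rough Tm estimate for a primer."""
        seq = seq.replace(" ", "").upper()
        gc = seq.count("G") + seq.count("C")
        tm = 64.9 + 41 * (gc - 16.4) / len(seq)
        print(f"{name}: {len(seq)} nt, GC {100 * gc / len(seq):.0f}%, "
              f"Tm ~{tm:.0f} C")

    primer_stats("COR-1", "CAC CGT TTC TAC AGG TTA GCT AAC GA")
    primer_stats("COR-2", "AAA TGT TTA CGC AGG TAA GCG TAA AA")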
Sequence Analysis
The nucleotide sequence of purified PCR products was determined by PCR-based cycle sequencing performed with the inner primers. Sequencing reactions were performed according to the manufacturer’s protocol (BigDye Terminator Cycle Sequencing Kit version 3.1, Applied Biosystems, Foster City, CA) and run on the ABI Prism 3100 Genetic Analyzer. All sequences were confirmed by repeated PCRs and sequencing from both directions.
Electron Microscopy
Selected cell cultures that showed cytopathic effect were examined by electron microscopy. Cell culture supernatants were coated on formvar-carbon grids and stained with 2% phosphotungstic acid.
Antibody Detection
To ascertain the HMPV culture results, we obtained paired serum samples (first sample collected within 5 days and second sample collected >14 days after onset of illness) and tested them for HMPV antibody. HMPV-infected LLC-MK2 cells were coated onto a 12-well glass slide and fixed in acetone. The presence of antibody in serum samples was tested by using the direct immunofluorescence technique.
Exclusion of Cross-Contamination and Test for Reproducibility
Specimen processing, viral culture inoculation, RNA extraction, RT-PCR amplification, and PCR product analyses were conducted in different rooms. Special care was taken to avoid contamination with RNase and to avoid cross-contamination between reactions. During the inoculation of cell monolayers, we placed a negative control, using the same cell line inoculated with maintenance medium, after every fifth cell culture tube. These negative control cell culture tubes were also incubated, examined for cytopathic effect, and processed for RT-PCR in the same way as cell culture tubes inoculated with specimens. For RNA extraction and RT-PCR procedures, we placed negative controls, using cell culture medium in place of supernatant from NPA-inoculated cultures or double-distilled water in place of the NPA sample, after every fifth reaction. These negative controls did not show positive results, which indicated the absence of cross-contamination. To test the reproducibility of RT-PCR results, we repeated the testing of all positive samples and 30 randomly selected negative samples; all results were reproducible. We also spiked negative NPA samples with HMPV RNA and repeated the extraction and RT-PCR procedures. The results showed that no inhibitors were present in the extracted RNA preparations.
Of the 48 NPAs studied, we observed no cytopathic effect on HEp-2, MDCK, human embryonic lung fibroblast, and BGM monolayers. Eleven (22.9%) specimens produced a cytopathic effect of diffuse refractile rounding of cells on Vero cell monolayers 2–4 days after incubation; the effect progressed rapidly and involved the whole monolayer within 12–24 hours. The same cytopathic effect was reproducible on passage to Vero cells and appeared 1–2 days after incubation. These Vero cell cultures were all positive by the coronavirus RT-PCR. Vero cell culture supernatants showing cytopathic effect were randomly selected for electron microscopy examination, and coronavirus particles were seen.
Five specimens produced a cytopathic effect of focal refractile rounding of cells in LLC-MK2 monolayers. All these LLC-MK2 cell cultures had been incubated at 37°C. Unlike the coronavirus cytopathic effect observed in Vero cells, this cytopathic effect developed after 10 to 12 days of incubation and progressed slowly to detachment from the cell monolayer.
The HMPV RT-PCR examination of cell cultures was negative for human embryonic lung fibroblast, BGM, and Vero cells (including those positive for coronavirus). In contrast, HMPV RT-PCR showed a PCR product of the expected size (89 bp) from cultures inoculated with 25 (52.1%) of the specimens. The nucleotide sequences of the PCR products were identical to the F-gene fragment of HMPV (GenBank accession no. NC_004148) ( 1 ). We retrospectively examined the first round of PCR products of all positive samples. Positive samples derived directly from NPAs did not show a positive band, indicating that a nested RT-PCR was necessary. However, most (27 [90%] of 30) of those derived from cell cultures showed a positive band of the expected size from the first round of PCR. The distribution of HMPV RT-PCR results on direct detection of NPAs and from different cell culture types is shown in the Table . Overall, the sensitivity of direct detection of NPAs using HMPV RT-PCR was 2 (8.0%) of 25 samples. In one of these two samples, we isolated the virus from three cell lines. In the other sample, we isolated virus from HEp-2 and LLC-MK2 cells. Overall, HEp-2 was the most sensitive cell line (22 [88.0%] of 25 HMPV-positive samples); LLC-MK2 cells detected 6 (24.0%) of 25 samples, and MDCK cells detected 2 (8.0%) of 25 samples. Most positive samples (with the exception of three LLC-MK2–positive samples) showed positive results in HEp-2 cells. All six LLC-MK2 cell cultures positive for HMPV had been incubated at 37°C; for three of these, the corresponding LLC-MK2 cell cultures incubated at 33°C also showed positive results.
To ascertain that cell cultures with HMPV RT-PCR–positive results represented the isolation of HMPV, all LLC-MK2 (incubated at 37°C), HEp-2, and MDCK cell cultures, regardless of the HMPV RT-PCR findings, were passaged to LLC-MK2 cells for a prolonged incubation of 28 days. HEp-2 cells were not used for this purpose because HEp-2 cell monolayers are often difficult to maintain for >2 weeks. All passages from HMPV RT-PCR–positive cell cultures showed a cytopathic effect of focal refractile rounding of cells that occurred after 10 to 22 days of incubation ( Figure 1 ). The cytopathic effect progressed slowly to detachment from the cell monolayer ( Figure 2 ). The supernatants of these passages were also positive by the HMPV RT-PCR and had visible HMPV viral particles on electron microscopy examination ( Figure 3 ). The passages from HMPV RT-PCR–negative supernatants did not show positive results by the above tests. We also passaged five Vero cell cultures that were positive for coronavirus to LLC-MK2 cells in a similar way. None of these passages showed cytopathic effect, and all were negative by the HMPV RT-PCR.
To reconfirm that HMPV infections detected by this combination approach represented genuine infections, we coated HMPV-infected LLC-MK2 cells onto slides for antibody detection using the immunofluorescence technique. All HMPV culture–positive patients who had serologic evidence of infection had a more than fourfold rise in antibody titers, and 15 patients seroconverted.
Overall, our results indicated that the combination approach of using conventional virus isolation and molecular detection could be successfully applied to the isolation of HMPV ( Figure 4 ). With this approach, we found that among the 48 study participants, 6 (12.5%) had both HMPV and coronavirus isolated from NPAs, 19 (39.6%) had HMPV, and 5 (10.4%) had coronavirus. Eighteen (37.5%) had no virus isolated from the cell lines that we used. | Discussion
On the basis of a combination of a conventional virus isolation system and molecular techniques, we found that 52.1% (25/48) of patients with SARS admitted to our hospital had HMPV infection. Isolation of HMPV is known to be difficult, which may explain why the virus was not detected until recently. The first report on HMPV by van den Hoogen et al. ( 1 ) showed that the virus produced syncytia in tertiary monkey kidney cells, followed by rapid internal disruption of the cells and subsequent detachment from the cell monolayer. The virus replicated poorly in Vero cells and human lung adenocarcinoma (A-549) cells and could not be propagated in MDCK cells or chicken embryo fibroblasts ( 1 ). In the study by Boivin et al. ( 7 ), multiple cell lines, including LLC-MK2, HEp-2, MDCK, human foreskin fibroblast, Vero, mink lung, A-549, human rhabdomyosarcoma (RD), transformed human kidney (293), and human colon adenocarcinoma (HT-29) cells, were used for isolation of HMPV. The results showed that HMPV grew only on LLC-MK2 cells, with a cytopathic effect of round and refringent cells but without syncytia formation in most cases, an observation in agreement with our results. In that study, HEp-2 cell monolayers did not show cytopathic effect; because the HEp-2 cells were not tested for HMPV RNA, we do not know whether growth like that in our HEp-2 cells also occurred there undetected. In another study, reported by Peret et al. ( 6 ), LLC-MK2, MDCK, and NCI-H292 cells were used; those researchers found that only LLC-MK2 cells produced a cytopathic effect of focal rounding without syncytia formation, which is also similar to our observation. The major difference in our approach for HMPV isolation compared with previous studies is the use of RT-PCR to enhance the detection of HMPV isolated from cell cultures. With this approach, we found that HEp-2 cells, a widely available and commonly used cell line, support the growth of HMPV. When RT-PCR was used to follow up all cell cultures, the sensitivity of HEp-2 cells was higher than that of LLC-MK2 cells, the cell line most commonly used in previous studies for HMPV. However, even using our approach, LLC-MK2 cells cannot be discarded, as in 12% of cases HMPV was isolated only from LLC-MK2 cells. In contrast, in the presence of HEp-2 cells, MDCK cells gave little additional value, as both specimens positive by MDCK cells had the viruses isolated from HEp-2 cells. In addition, our initial incubation of 14 days might not be optimal for isolating HMPV, because Boivin et al. reported that the cytopathic effect took a mean incubation time of 17.3 days to develop ( 7 ). By prolonging the initial incubation of LLC-MK2 cells to 21 or 28 days, we might have detected more HMPV infections in our "negative" group.
Because all our study samples were collected from patients related to the outbreak of SARS that occurred in our hospital, one cannot simply infer that this in vitro growth property can be applied to all HMPV strains in general. Nevertheless, our approach of including HEp-2 cells, a widely available cell line, to search for HMPV, in particular for those cases related to SARS, needs to be considered. In our study, six patients were co-infected with HMPV and coronavirus. Although the number was limited, our findings suggest that HMPV and coronavirus have different in vitro tropisms, and the isolation of one virus does not affect the recovery of the other from different cell lines.
Overall, we confirmed that 25 (52.1%) of 48 patients admitted to our hospital with SARS had HMPV infections, with 6 also co-infected with coronavirus. However, the data on such a high prevalence of HMPV should be interpreted cautiously. Our study population was based on persons, and their family members, who had been exposed in the index ward in our hospital. Thus, a co-circulation of two pathogens within our study group was possible. While the clinical presentations of all our study participants fulfilled the World Health Organization definition for a probable case of SARS ( 13 ), one should not infer, at this stage, that the prevalence of HMPV is similarly high in SARS outbreaks occurring elsewhere. On the other hand, the possibility of an important role of HMPV in the current worldwide outbreak of SARS should not be neglected. HMPV has also been detected in five of six SARS patients living in Canada ( 15 ). In that study series, coronavirus was also detected in five of six patients, and four patients were co-infected with HMPV and coronavirus. A few recent studies suggest a strong association of a novel coronavirus with the worldwide outbreak of SARS ( 16 – 18 ). While both HMPV and coronavirus infections may result in severe respiratory tract disease, their transmission efficiency may not be the same. This question must be answered urgently, because the answer affects the priority for immediate development of control strategies.
During this outbreak of SARS, we have applied this combination approach of conventional virus isolation and molecular detection to establish the viral infection status of other patients hospitalized for SARS. We are in the process of analyzing a larger cohort to elucidate their clinical conditions, treatment responses, and epidemiologic links with respect to the infection status for both HMPV and coronavirus. Similar work in other parts of the world is needed.

Abstract

We used a combination approach of conventional virus isolation and molecular techniques to detect human metapneumovirus (HMPV) in patients with severe acute respiratory syndrome (SARS). Of the 48 study patients, 25 (52.1%) were infected with HMPV; 6 of these 25 patients were also infected with coronavirus, and another 5 patients (10.4%) were infected with coronavirus alone. Using this combination approach, we found that human laryngeal carcinoma (HEp-2) cells were superior to the rhesus monkey kidney (LLC-MK2) cells commonly used in previous studies for isolation of HMPV. These widely available HEp-2 cells should be included, in conjunction with a molecular method for cell culture follow-up, to detect HMPV, particularly in patients with SARS.
Human metapneumovirus (HMPV) was first identified in 2001 in samples from children with respiratory tract diseases ( 1 ). Subsequent studies showed that the virus is responsible worldwide for a proportion of community-acquired acute respiratory tract infections in children ( 2 – 4 ), as well as in other age groups ( 5 – 9 ). Co-infection of HMPV with respiratory syncytial virus (RSV) in infants has been suggested to be a factor that influences the severity of bronchiolitis ( 10 ).
HMPV is a new member of the family Paramyxoviridae, subfamily Pneumovirinae. The overall percentage of amino acid sequence homology between HMPV and avian metapneumovirus (APV) ranges from 56% to 88% for the open reading frames N, P, M, F, M2-1, M2-2, and L ( 11 ). Phylogenetically, RSV is the human virus most closely related to HMPV, and the clinical symptoms of HMPV infection may share an overlapping spectrum with those of RSV ( 2 , 4 , 7 , 9 , 10 ). The epidemiology and symptoms of HMPV infection have not been fully elucidated; one obstacle in establishing these data is the difficulty of making a laboratory diagnosis of the infection. We describe our experience of detecting HMPV during an outbreak of severe acute respiratory syndrome (SARS).

Acknowledgments
We thank all healthcare workers in Hong Kong SAR who have bravely taken care of severe acute respiratory syndrome patients.
Dr. Chan is a clinical virologist and associate professor at the Department of Microbiology, Faculty of Medicine, The Chinese University of Hong Kong. His research interests include emerging viral infections, viral epidemiology, diagnostic virology, and viral oncology.

Emerg Infect Dis. 2003 Sep;9(9):1058-1063.
PMC3016780 (PMID 14519259)

Conclusions
In our patients, EAEC serotype O126:H27 appears to be a pathogenic agent in young children who require hospitalization and treatment for dehydration. This same serotype has been reported as a common cause of diarrhea in children from England ( 16 ), Japan ( 17 ), and Bangladesh ( 9 ). However, we were not able to associate that serotype exclusively with the enteroaggregative pathotype, since nonaggregative E. coli O126:H27 strains from hospitalized children (patients 16 and 17 in Table 1 ) produced ST and might therefore belong to the ETEC pathotype.
However, ST was apparently not the main diarrheagenic factor, since no ST was produced in the five children with prolonged diarrhea. Some other kind of toxin was probably involved in these cases. In strains from some patients ( Table 1 , numbers 12–15), we found traits of both EAEC and ETEC in the same organism. A simple test to identify EAEC in routine laboratory work is needed. A possible solution is to use a phage sensitivity test in addition to serotyping, such as the one we used here for EAEC O126:H27. Preliminary results suggest that this test ( Table 3 ) is a reliable indicator. If this finding is confirmed on a larger number of strains, specific phages might also be selected for EAEC of other serotypes.
The obvious accumulation of pCVD432-positive E. coli of serotype O126:H27 suggests that we found a clone that has spread in Israel and probably has a selective advantage.

Abstract

Enteroaggregative Escherichia coli (EAEC) is a newly recognized diarrheagenic agent for which several predominant serotypes have been reported. We studied the association between those serotypes, as clonal indicators, and the trait of enteroaggregative adherence to host cells, tested by polymerase chain reaction. We also evaluated the clinical manifestations of infection with our most common EAEC serotype, O126:H27, in 17 hospitalized children.
Enteroaggregative Escherichia coli (EAEC) is an emerging pathogen that causes diarrhea in many parts of the world, including children from Israel ( 1 ) and Jordan ( 2 ). This group of E. coli was preliminarily defined by its aggregative pattern of adherence (AA) to HEp-2 cells ( 3 ). Their identification was facilitated by a DNA probe developed from a plasmid (pCVD432, syn. pAA) necessary for expressing the aggregative phenotype ( 4 ). Based on that probe, a polymerase chain reaction (PCR) test was developed for screening EAEC strains ( 5 ). This test, which we used in this research, and another test using the same DNA probe ( 6 ) have been better indicators of diarrheagenic strains than the phenotypic HEp-2 cell test.
EAEC is a divergent group in terms of the organisms’ ability to induce diarrhea ( 7 ), the factors involved in attachment to host cells ( 8 ), and kinds of serotypes ( 8 , 9 ). Since certain EAEC serotypes were already prevalent throughout the world, we studied whether those strains could be found in our isolates of diarrheagenic EAEC. To simplify the detection of EAEC, we selected a bacteriophage active specifically on our clinically evaluated EAEC strains of E. coli O126:H27. EAEC have been rarely evaluated clinically in Israel. Here we address that problem by reporting clinical and microbiologic findings of children hospitalized with gastroenteritis in which our most common EAEC serotype, O126:H27, was found.
The Study
Clinical signs and laboratory findings were evaluated for 17 children <2 years of age, hospitalized in four pediatric wards in different areas of Israel. All these children had gastroenteritis attributable to EAEC or enterotoxigenic E. coli (ETEC) of serotype O126:H27 ( Table 1 ).
Serotyping was performed ( 10 ). To determine O-antigen, cultures were heated to 120°C for 1 h, then checked for agglutination with specific O-antisera at 50°C overnight. For determination of H-antigen, motile cultures were grown overnight in nutrient broth, treated with 0.5% formaldehyde, and investigated for agglutination with specific H-sera at 50°C for 2 h.
To detect EAEC, we used PCR primers specific for a short sequence of the EAEC plasmid pAA, which is necessary for adherence. Analysis for the presence of pCVD432 sequences was performed at the Institute of Hygiene and Microbiology, University of Wuerzburg, and the Institute of Medical Microbiology and Hygiene, Technical University of Dresden, Germany. Briefly, E. coli isolates were grown overnight on L-agar, and a single colony was suspended in 50 μL of phosphate-buffered saline (PBS). Amplification was carried out in a total volume of 50 μL containing each nucleotide triphosphate at 200 μM, 30 pmol of each primer, 5 μL of 10-fold concentrated AmpliTaq DNA polymerase synthesis buffer, 1.5 mM MgCl2, 2.5 U AmpliTaq DNA polymerase (Applied Biosystems Applera, Weiterstadt, Germany), and 5 μL of template. Oligonucleotides pCVD432/start (5′-CTG GCG AAA GAC TGT ATC AT-3′) and pCVD432/stop (5′-AAT GTA TAG AAA TCC GCT GT-3′) were purchased from Sigma-ARK GmbH (Darmstadt, Germany) ( 5 ). The PCR protocol comprised 30 cycles of amplification, each consisting of 30 s at 94°C, 60 s at 52°C, and 60 s at 72°C. The first cycle was preceded by a denaturation step of 10 min at 94°C, and the last cycle was followed by a final extension step of 10 min at 72°C.
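For readers who script or audit thermocycler settings, the cycling parameters above can be captured in a small data structure. The sketch below is illustrative only; the dictionary layout and helper function are ours, while the temperatures, times, and cycle count come from the protocol above.

```python
# Illustrative encoding of the pCVD432 cycling protocol described above.
PCR_PROGRAM = {
    "initial_denaturation": (94, 600),            # 94°C, 10 min
    "cycles": 30,
    "per_cycle": [(94, 30), (52, 60), (72, 60)],  # denature, anneal, extend
    "final_extension": (72, 600),                 # 72°C, 10 min
}

def total_run_seconds(program: dict) -> int:
    """Nominal run time in seconds, ignoring ramp rates between steps."""
    fixed = program["initial_denaturation"][1] + program["final_extension"][1]
    cycling = program["cycles"] * sum(seconds for _, seconds in program["per_cycle"])
    return fixed + cycling

print(f"Nominal run time: {total_run_seconds(PCR_PROGRAM) / 60:.0f} min")  # ~95 min
```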
Enterotoxins were determined by the asialoganglioside-GM1 enzyme-linked immunosorbent assay (GM1-ELISA) method, using the direct plate culture technique. Heat-labile toxin (LT) was determined by GM1-ELISA using monoclonal antibodies against LT ( 11 ). Heat-stable toxin (ST) was determined in parallel in the same cultures by an inhibition GM1-ELISA that used monoclonal anti-ST ( 12 ). The test was performed in two 96-well polystyrene microplates, A and B (Nunc A/S, Roskilde, Denmark), and comprised several steps. The plates were coated with GM1 by adding 0.1 mL of 0.3 nmol GM1 (Sigma, Rehovot, Israel) in 0.1 M PBS, pH 7.2, to each well. After the plates were incubated overnight at room temperature, they were washed three times with PBS, blocked with 0.1% bovine serum albumin (BSA) in PBS for 30 min at 37°C, and finally washed once with PBS. To each of the GM1-coated wells in plate A was added 0.2 mL LB broth, Lennox medium, adjusted to 45 μg lincomycin/mL and 2.5 mg glucose/L. From each bacterial isolate, five colonies grown on MacConkey agar were transferred directly into five separate wells. The cultures were grown for 24 h at 37°C with moderate shaking. Plate B (without the bacterial cultures) was processed after step 1 in a different way, to determine ST. Briefly, plate B was coated with ST-CTB (consisting of the B subunit of cholera toxin conjugated to ST) by adding 0.1 mL of ST-CTB in 0.1% BSA–PBS to each well and incubating the plate at room temperature for 60 min. The plate was then washed three times with PBS. To each well in plate B, 0.05 mL of culture medium from plate A (presumed to contain ST) was added; immediately thereafter, 0.05 mL of the monoclonal antibody against ST (anti-ST) was added, and the plate was gently shaken. The plate was incubated for 90 min at room temperature and then washed three times with 0.05% PBS-Tween. After the culture medium was disposed of, plate A was washed three times with PBS-Tween. To each well, 0.1 mL of monoclonal antibody against LT (anti-LT) in PBS-BSA-Tween was then added. The plate was incubated for 90 min at room temperature and then washed three times with PBS-Tween. To each well of plates A and B, we added 0.1 mL of goat anti-mouse immunoglobulin G–horseradish peroxidase conjugate (Jackson ImmunoResearch Laboratories, West Grove, Pennsylvania) in PBS-BSA-Tween. The plates were incubated for 90 min at room temperature and then washed three times with PBS-Tween. Substrate was prepared by dissolving 10 mg of ortho-phenylenediamine (Sigma) in 10 mL of 0.1 M sodium citrate buffer (pH 4.5), to which 4 μL of 30% H2O2 was added. To each well in plates A and B, 0.1 mL of this substrate solution was added. After 20 min, the plates were read at 450 nm in a Micro ELISA Auto Reader spectrophotometer (Dynatech Inc., Alexandria, VA).
When the optical density (OD) decreased by >50% compared with the OD of anti-ST mixed with an ST-negative control culture, run in parallel with the experimental wells, the result was considered ST positive. When the OD at 450 nm was >0.100 above background, the result was considered LT positive. Since serotype O126:H27 was prevalent in our EAEC cultures, we tried to isolate bacteriophages specific to EAEC of this serotype from sewage water. Five unrelated strains of EAEC serotype O126:H27 were used. One milliliter of an early logarithmic broth culture of each strain was seeded into a bottle of 50 mL nutrient broth. After incubation for 3 h at 37°C, 5 mL of sewage water was added to each bottle. After a further 6-h incubation, cultures were killed by adding 1 mL of chloroform, followed by intensive shaking. The next day, the supernatant of each bottle was tested for activity on the respective strain. The isolated phages were then diluted and purified twice by single-plaque isolation ( 13 ). The five phages were active on EAEC strains of serotype O126:H27.
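The two OD decision rules can be written down compactly. The sketch below is a minimal illustration; the function names and input format are ours, while the 50% and 0.100 thresholds come from the text above.

```python
# Decision rules for the GM1-ELISA readouts described above.

def st_positive(od_sample: float, od_negative_control: float) -> bool:
    """ST positive if the OD falls by >50% relative to the anti-ST plus
    ST-negative control culture run in parallel (inhibition GM1-ELISA)."""
    return od_sample < 0.5 * od_negative_control

def lt_positive(od_sample: float, od_background: float) -> bool:
    """LT positive if the OD at 450 nm exceeds background by >0.100."""
    return (od_sample - od_background) > 0.100

# Example readings (invented values): 0.21 against a control of 0.60 is
# ST positive; 0.15 against a background of 0.08 is not LT positive.
assert st_positive(0.21, 0.60)
assert not lt_positive(0.15, 0.08)
```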
From July 1999 to December 2001, we collected and characterized 1,368 isolates of diarrheagenic E. coli. Of these isolates, 88 (6.4%) belonged to one of the five most common EAEC serotypes, i.e., serotype O126:H27 (n=48), O111:H21 (n=16), O125 (n=11), O44:H18 (n=11), and O?:H10 (n=2) ( Table 2 ). The percentages of EAEC PCR–positive strains ( Table 2 ) were as follows: 73% in E. coli O126:H27 and 75% in E. coli O111:H21. In E. coli O125, the percentage was approximately 50%, and in E. coli O44:H18, unlike findings reported elsewhere ( 14 ), this percentage was low.
To determine if the isolated phages were specific for the enteroaggregative strains of serotype O126:H27, the five phages were tested by spot test at routine test dilution on our EAEC and non–EAEC cultures of this serotype. Four phages were active on both kinds of strains. Only phage no. 4 was active on 33 of the 34 EAEC cultures and on 1 of 12 non-EAEC cultures ( Table 3 ). The sensitivity of this phage was 97%, and its specificity was 91%. This phage could therefore be used as an indicator for AA in this E. coli serotype.
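The reported test characteristics follow directly from the spot-test counts; a quick check (our own arithmetic, not from the source) is shown below.

```python
# Performance of phage no. 4, recomputed from the raw counts above.
true_pos, total_eaec = 33, 34        # EAEC cultures lysed / tested
false_pos, total_non_eaec = 1, 12    # non-EAEC cultures lysed / tested

sensitivity = true_pos / total_eaec                          # 33/34 = 0.971
specificity = (total_non_eaec - false_pos) / total_non_eaec  # 11/12 = 0.917

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
# -> sensitivity 97.1%, specificity 91.7% (reported as 97% and 91%)
```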
E. coli O126:H27 was found in stools from 17 children in four pediatric wards in various areas of Israel ( Table 1 ). The stools were watery; no mucus or blood was seen. Most of the children were dehydrated and needed intravenous treatment with fluids and electrolytes. Some children vomited several times. All 17 patients had a leukocyte count normal for age. Twelve of them had high fever (38.7°C–40°C). Three of these 12 children had diarrhea concomitant with other diseases (patients 11, 13, and 14). Stool cultures of these three children were taken as part of an investigation of febrile disease. The same three children received antibiotic treatment; all others recovered without antibiotics. The length of hospitalization was 2–8 days. The duration of diarrhea was 1–40 days (median 5 days), starting, in some cases, before hospitalization. ST was produced by isolates from six patients (nos. 12–17), while LT was produced by none. Five patients (nos. 1, 2, 8, 9, 10) had prolonged diarrhea of >1 week, characteristic of EAEC ( 15 ).

Acknowledgments
The monoclonal antibodies and the CTB-ST (heat-stable toxin conjugated to cholera toxin subunit B) conjugate for these tests were kindly supplied by Ann-Mari Svennerholm.
Dr. Shazberg is a senior physician in a pediatric department. Her research interest is pediatric infectious diseases.

Emerg Infect Dis. 2003 Sep;9(9):1170-1173.
PMC3016782

Henry M. Stanley, in his second trans-Africa expedition of 1874–1877, lost 68% of his 356 men. Among the casualties, 58 died in battle or were murdered (several were cannibalized), 45 died of smallpox, 21 of dysentery, 14 drowned, and 1 was killed by a crocodile; several others died of starvation (all of this from the preface to this book). Modern-day expeditions, defined as organized and usually challenging journeys with a specific purpose of exploration, research, education, or discovery, are generally less dangerous than the one Stanley experienced. But they require extensive planning and preparation, by both leaders and expedition members, to reduce the frequency of injury, illness, and death potentially associated with such adventures. This book is a compendium of information that will be useful to those who plan and participate in such journeys.
The editors have divided their book into three sections: expedition planning, field medicine, and specific environmental settings; each section comprises 7–14 chapters written by a total of 24 contributors. The section on planning includes advice on expedition risk assessment, assembling of medical kits, and first aid training. The second section addresses base camp hygiene, water purification, and care of various minor and serious conditions that may be encountered in the field; and the third addresses problems specific to high-altitude, polar, jungle, desert, and aquatic environments.
A major strength of this book is that, while it is targeted primarily to those (e.g., medical officers) who will be responsible for the health of expedition members, the writing is not highly technical. Hence, it is also suitable for paramedical personnel and other expedition members who may be interested (as they should be) in health issues specific to their expedition. In fact, this book is useful reading for those who may not have the background, time, or resources to join an expedition, but who simply enjoy wilderness experiences or ecotours for recreational purposes. As the editors point out, with the increasing availability of vacations in remote places offered by specialty tour companies, the boundary between such journeys and expeditions has become blurred. The book contains numerous tables and figures, which add to its readability. The inclusion of exotic subjects, such as treatment of bites by sea snakes and scorpions and attacks by large animals, makes for interesting reading.
The chapters vary somewhat in value to the reader. The chapter on commonly encountered ailments, such as gastrointestinal and respiratory illnesses, is very useful. The one on assessment of the injured patient is rather long; it is difficult to imagine wading through this chapter and recording various findings while managing the critically injured person in the field. The chapter on heat-related injuries fails to distinguish between heat exhaustion, heat stroke, and hyponatremia, conditions with different clinical presentations, management requirements, and prognosis. The chapter on medical aspects of survival is brief and not very useful.
Notwithstanding these minor shortcomings, this is a useful volume not only for those who plan and participate in expeditions but also for those of us who may aspire to join an expedition or who just dream of visiting exotic places. I recommend a copy for your bookshelf; better yet, for your backpack.

Emerg Infect Dis. 2003 Sep;9(9):1189.
PMC3016783

In "Emerging Pathogen of Wild Amphibians in Frogs ( Rana catesbeiana ) Farmed for International Trade," by Rolando Mazzoni et al., errors occurred in the figure legend on page 996.
The correct caption to the Figure appears below:
Figure. a and b, histopathologic findings from infected frogs. Characteristic sporangia (s) containing zoospores (z) are visible in the epidermis (asterisk, superficial epidermis; arrow, septum within an empty sporangium; bars, 10 μm). c, Skin smear from infected frog, stained with 1:1 cotton blue and 10% aqueous potassium hydroxide (aq KOH) (D, developing stages of Batrachochytrium dendrobatidis ; arrow, septum within a sporangium; bar, 10 μm). d, Electron micrograph of an empty sporangium showing diagnostic septum (arrow) (bar, 2 μm).
The corrected article appears online at http://www.cdc.gov/ncidod/EID/vol9no8/03-0030.htm
We regret any confusion these errors may have caused.

Emerg Infect Dis. 2003 Sep;9(9):1188.
PMC3016784 (PMID 14519244)

Methods
Discharge Diagnosis Code Review
To confirm an increase in aseptic meningitis cases by a method independent of case reporting to DHMH, a discharge diagnosis code review was conducted at the six investigation hospitals. Included were patients with aseptic meningitis (including International Classification of Diseases [ICD]-9-CM codes 047.0, 047.1, 047.8, 047.9, 049.0, 049.1, 053.0, 054.72, 072.1) who were evaluated from June 1 to September 30, 1998 – 2001.
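The selection criteria above amount to a simple filter over discharge records. The sketch below is our own illustration; the record format and field names are invented, while the code list and date window come from the text.

```python
# Minimal sketch of the discharge-record query described above.
from datetime import date

ASEPTIC_MENINGITIS_CODES = {
    "047.0", "047.1", "047.8", "047.9",
    "049.0", "049.1", "053.0", "054.72", "072.1",
}

def in_summer_season(d: date) -> bool:
    """June 1 - September 30 of 1998-2001."""
    return 1998 <= d.year <= 2001 and date(d.year, 6, 1) <= d <= date(d.year, 9, 30)

def select_cases(discharges):
    """discharges: iterable of (icd9_code, evaluation_date) tuples."""
    return [
        (code, d) for code, d in discharges
        if code in ASEPTIC_MENINGITIS_CODES and in_summer_season(d)
    ]

# Example: the first record qualifies, the second falls outside the season.
print(select_cases([("047.9", date(2001, 8, 15)), ("047.9", date(2001, 11, 2))]))
```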
Case Definitions
The investigation was conducted at six hospitals that collectively reported to DHMH 76% of the 118 aseptic meningitis cases from Baltimore City and County. A case of aseptic meningitis was defined as an illness with onset from June 1 to September 30, 2001; a cerebrospinal fluid (CSF) cell count of >5 leukocytes per microliter; negative CSF bacterial cultures; and no physician or laboratory documentation of bacterial, fungal, or parasitic central nervous system (CNS) disease, cerebral hemorrhage, carcinomatous meningitis, or cerebral vasculitis. Neonates who developed CSF abnormalities before first hospital discharge were excluded, as were persons with a physician-documented diagnosis of encephalitis, confusion, or obtundation. A case of enteroviral meningitis was defined as an illness meeting the criteria for aseptic meningitis and, in addition, a positive enterovirus culture of a CSF, rectal swab, or nasopharyngeal specimen, or a positive enterovirus polymerase chain reaction (PCR) test of a CSF specimen. A case of WNV meningitis was defined as an illness meeting the criteria for aseptic meningitis and, in addition, WNV immunoglobulin (Ig) M detected in a CSF specimen by enzyme-linked immunosorbent assay (ELISA), a greater-than-fourfold rise of WNV-neutralizing antibodies between acute- and convalescent-phase serum specimens, or WNV nucleic acid detected in a CSF specimen by PCR. The investigation was limited to persons living in Baltimore City or County who were evaluated at one of the six investigation hospitals.
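These hierarchical definitions translate naturally into code. The sketch below is our own illustration of the logic; the field names are invented, and the onset-date and residency criteria are omitted for brevity.

```python
# Hedged sketch of the case definitions above; not part of the study.
from dataclasses import dataclass

@dataclass
class Patient:
    csf_leukocytes_per_ul: float
    csf_bacterial_culture_positive: bool
    other_cns_disease_documented: bool  # bacterial/fungal/parasitic CNS disease,
                                        # hemorrhage, carcinomatous meningitis, vasculitis
    encephalitis_confusion_or_obtundation: bool
    enterovirus_positive: bool          # culture (CSF/rectal/NP) or CSF PCR
    wnv_positive: bool                  # CSF IgM ELISA, >4-fold neutralizing rise, or CSF PCR

def classify(p: Patient) -> str:
    aseptic = (
        p.csf_leukocytes_per_ul > 5
        and not p.csf_bacterial_culture_positive
        and not p.other_cns_disease_documented
        and not p.encephalitis_confusion_or_obtundation
    )
    if not aseptic:
        return "not a case"
    if p.wnv_positive:
        return "WNV meningitis"
    if p.enterovirus_positive:
        return "enteroviral meningitis"
    return "aseptic meningitis, cause unknown"
```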
Case Ascertainment
Cases reported to DHMH under the code "viral meningitis" were reviewed. Depending on the resources of each hospital, additional cases were identified by computerized queries for test results of >5–10 leukocytes per microliter in CSF and by review of hospital discharge diagnosis codes. For each case, a standardized form was used to abstract clinical information from the medical record.
Acute-Phase and Convalescent-Phase Specimens and Interviews
Acute-phase (<8 days after illness onset) CSF, serum, rectal swab, and nasopharyngeal specimens were collected if ordered by the patients' physicians. Specimens were stored at hospital, DHMH, or private reference laboratories at temperatures ranging from 4°C to –70°C. During home visits to consenting patients >12 years of age who had had aseptic meningitis of unknown cause, convalescent-phase (>7 days after illness onset) blood specimens were collected and a standardized questionnaire was administered to characterize symptoms and duration of illness.
Diagnostic Testing
Hospitals performed routine cell counts, chemistries, and bacterial cultures of CSF for patients with sufficient specimen quantity. Some patients’ physicians ordered additional tests. These tests commonly included, for CSF specimens, latex agglutination screening tests for bacterial antigens, culture or PCR tests for enteroviruses or herpesviruses, and culture for fungi; for CSF or serum specimens, Borrelia burgdorferi antibody, Venereal Disease Research Laboratory slide test, and cryptococcal antigen tests; and for nasopharyngeal and rectal swab specimens, culture for enteroviruses.
For available specimens from patients with aseptic meningitis of unknown cause, arbovirus serologic testing was performed at DHMH laboratories, and WNV PCR tests were completed at DHMH laboratories or the Division of Vector-Borne Infectious Diseases, CDC. CSF specimens were tested by ELISA for IgM antibodies to WNV and by TaqMan reverse-transcriptase (RT-) PCR tests for WNV ( 10 , 11 ). Serum specimens were tested at DHMH laboratories by ELISA for IgM antibodies to WNV, and by immunofluorescence assay (IFA) for IgM and IgG antibodies to La Crosse, St. Louis encephalitis, eastern equine encephalomyelitis, and western equine encephalomyelitis viruses.
Available CSF and rectal swab specimens from patients with aseptic meningitis of unknown cause were tested by enterovirus culture at the Respiratory and Enteric Viruses Branch, CDC. Three cell lines were used at CDC for enterovirus culture: RD (human rhabdomyosarcoma), HELF (human embryonic lung fibroblast), and LLC-MK2 (monkey kidney). Isolates were serotyped by RT-PCR amplification and subsequent sequencing of an approximately 320-nt segment of the VP1 enterovirus gene ( 12 ). When available, enteroviruses already isolated by hospital laboratories from CSF, nasopharyngeal, or rectal swab specimens were serotyped by CDC.
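Serotype assignment from a partial VP1 sequence is, in essence, a nearest-reference search: compare the query against a panel of reference strains and accept the best match above an identity threshold. The sketch below illustrates this with simple pairwise identity on pre-aligned sequences; the function names, the toy sequences, and the 75% cutoff are our assumptions for illustration, not values stated in the text.

```python
def identity(a: str, b: str) -> float:
    """Fraction of matching positions in two equal-length aligned sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign_serotype(query: str, references: dict[str, str], threshold: float = 0.75):
    """references maps serotype name -> aligned VP1 reference sequence.
    Returns the best-matching serotype, or None if below the threshold."""
    best_name, best_score = max(
        ((name, identity(query, seq)) for name, seq in references.items()),
        key=lambda pair: pair[1],
    )
    return best_name if best_score >= threshold else None

refs = {"echovirus 13": "ATGGCA", "echovirus 18": "TTGTCG"}  # toy sequences
print(assign_serotype("ATGGCG", refs))  # -> 'echovirus 13' (5/6 identity)
```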
Evaluation of Aseptic Meningitis and WNV Surveillance
To evaluate the strategies used to diagnose WNV meningitis in Maryland, reporting and testing policies were reviewed. Information was obtained from DHMH case reports, surveillance plans and summaries, and laboratory tests of investigation patients.

Results
Confirmation of an Epidemic by Discharge Diagnosis Review
Each summer season (June 1–September 30) of 1998–2000, an average of 67 Baltimore residents were evaluated at the six investigation hospitals and assigned an aseptic meningitis ICD-9-CM code; in the 2001 season, 133 persons were evaluated, a 99% increase above the 1998–2000 seasonal average ( Figure 1 ). At one of the investigation hospitals, the positive predictive value of the ICD-9-CM codes was assessed: a medical record review showed that 23 (96%) of 24 cases identified by ICD-9-CM codes met the investigation's case definition of aseptic meningitis.
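Both figures can be checked in a line or two (our own arithmetic, not from the source):

```python
baseline, observed = 67, 133  # mean 1998-2000 seasonal count vs. 2001 season
print(f"increase: {(observed - baseline) / baseline:.0%}")   # -> 99%

met_definition, reviewed = 23, 24  # records meeting the case definition
print(f"predictive value: {met_definition / reviewed:.0%}")  # -> 96%
```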
Cases
At the six investigation hospitals, 113 aseptic meningitis patients were identified with illness onsets from June 1 to September 30, 2001. By the week of illness onset, the number of cases peaked in late August and early September ( Figure 2 ). The median patient age was 18 years (range 1 week – 74 years of age), and 56% of patients were male. Seventy-eight percent of patients were medically evaluated within the first 3 days after illness onset. Of the 110 patients with available information, the median duration of hospitalization was 2 days (range 0 – 11 days). No fatalities occurred during hospitalization nor were any subsequently reported to DHMH.
The median CSF leukocyte count was 135/μL (range 7–1,083/μL). Of the 110 patients with available data, 45 (41%) had >50% polymorphonuclear cells in the CSF leukocyte differential, including 9 (43%) of 21 patients who had CSF collected 4–10 days after illness onset. CSF glucose was normal (≥40 mg/dL) in 99% of patients (n=110), and CSF protein was elevated (>50 mg/dL) in 52% (n=111; median 53 mg/dL; range 10–215 mg/dL) ( Table 1 ).
On the basis of standardized interviews of 33 patients (median age 31 years; range 13–55 years of age) at the time convalescent-phase blood specimens were obtained, the most commonly reported acute-phase symptoms were headache (100%), fever (85%), and eye pain or sensitivity to light (85%). Illness was sometimes prolonged by persistent fatigue and headaches. The median duration of illness was 18 days (n=32; range 5–47 days). Twelve of the patients had not fully recovered by the time of interview; for these patients, the minimum duration of illness was defined as (date of interview) – (date of illness onset). Clinical findings among different age groups were compared with the Kruskal-Wallis (K-W) test of significance in Epi-Info 6 software; patients ≥18 years of age reported a longer duration of symptoms (n=24; median duration 22 days) than patients 13–17 years of age (n=8; median duration 7 days) (K-W, p=0.001).
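The same comparison is easy to reproduce with modern tools. The sketch below uses SciPy's Kruskal-Wallis test rather than Epi-Info 6; the duration values are invented placeholders, not the study data.

```python
from scipy.stats import kruskal

duration_adults = [22, 30, 18, 25, 40, 21]  # days; illustrative values only
duration_teens = [7, 5, 9, 6, 8]            # days; illustrative values only

statistic, p_value = kruskal(duration_adults, duration_teens)
print(f"H = {statistic:.2f}, p = {p_value:.3f}")
```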
WNV Test Results
Of the 69 patients for whom at least one ELISA WNV IgM test was performed on CSF or serum, none tested positive. Of these 69, ELISA WNV IgM testing was performed on both acute- and convalescent-phase specimens for 23 patients, on only acute-phase specimens for 36 patients, and on only convalescent-phase specimens for 10 patients. Arboviral IgM IFAs completed on acute- or convalescent-phase serum specimens from 39 patients were all negative. WNV PCR tests completed on acute-phase CSF specimens from 27 patients were also negative. Acute-phase specimens were collected <8 days after illness onset, and convalescent-phase specimens were collected a median of 40 days after illness onset (range 12–111 days); exact dates were not available for two patients.
Enterovirus Test Results
Of 70 patients who had at least one test (viral culture or PCR test) performed for enteroviruses, 43 (61%) patients were confirmed to have enteroviral meningitis. Among patients who had at least one enterovirus test performed, the percentage enterovirus-positive was highest in infants and children; however, of 30 tested patients > 18 years of age, 13 (43%) were enterovirus-positive ( Table 2 ). Of the 34 cases in which enterovirus serotyping was completed, five serotypes were identified. Echovirus 13 (15 cases) and echovirus 18 (11 cases) together accounted for 76% of the serotyped isolates ( Table 1 ).
Other Diagnoses
Two patients were diagnosed with herpes simplex virus meningitis by culture-positive CSF specimens, and one patient was diagnosed with Lyme meningitis on the basis of clinical signs and symptoms of acute Bell’s palsy and meningitis, and positive serum B. burgdorferi antibody by ELISA and Western blot tests.
Sixty-seven (59%) of the 113 patients in the investigation remained undiagnosed; for many, sufficient specimens did not exist for further testing. The median age of these undiagnosed patients was 25 years, and 61% were male. The duration of hospitalization was similar to that of the cases with a known cause. Five of the undiagnosed patients had HIV infection, and another four had a history of prior meningitis. Twenty-seven (40%) of the undiagnosed patients had at least one enterovirus test (culture or PCR) performed. Forty-six (68%) of the undiagnosed patients, including 24 with convalescent-phase specimens collected a median of 44 days after illness onset (range 12–111 days; exact dates not available for two patients), had at least one WNV IgM ELISA performed. To estimate the number of WNV meningitis cases that could have been missed, we assumed that these 24 patients represented a random sample of the 67 undiagnosed patients and that WNV infection was ruled out in patients with a negative result from a WNV IgM ELISA performed on a convalescent-phase serum specimen. On the basis of these assumptions, 0% (95% confidence interval 0% to 10%) WNV IgM positivity among the sample suggests that fewer than seven cases of WNV meningitis were missed among the investigation patients.
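The reasoning is a standard exact binomial bound scaled to the undiagnosed group. A sketch using the Clopper-Pearson interval is shown below; the published 0% to 10% interval may have been computed with a different method, so treat this as an illustration of the approach rather than a reproduction.

```python
from scipy.stats import beta

def clopper_pearson_upper(successes: int, n: int, alpha: float = 0.05) -> float:
    """Upper limit of the exact two-sided (1 - alpha) binomial CI."""
    if successes == n:
        return 1.0
    return float(beta.ppf(1 - alpha / 2, successes + 1, n - successes))

upper = clopper_pearson_upper(0, 24)  # 0 positives among 24 sampled patients
print(f"upper bound {upper:.1%}; implied missed cases < {67 * upper:.1f}")
# Clopper-Pearson gives ~14%; the paper's 10% bound implies fewer than 7 missed cases.
```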
Evaluation of Aseptic Meningitis and WNV Surveillance
Human WNV surveillance in Maryland focused on patients with two reportable CNS infections, encephalitis or meningitis, of unknown cause. Of the 113 aseptic meningitis cases identified at the six investigation hospitals, 76 (67%) had been reported to DHMH. Of these 76 aseptic meningitis cases, 71 (93%) were reported without a cause. Because of the urgency of detecting WNV epidemics, WNV serologic testing was conducted as a first-line test by DHMH laboratories for patients with CNS infections of unknown cause. WNV testing was first prioritized for patients with encephalitis and secondarily for hospitalized patients with aseptic meningitis who were >17 years of age (late in the season, this last criterion was expanded to >5 years of age). During 2001, DHMH laboratories conducted WNV testing for 440 patients statewide (including approximately 230 patients reported with aseptic meningitis); 6 patients were diagnosed with WNV disease.
DHMH requested CSF, serum, and convalescent-phase serum specimens for the diagnosis of WNV infection. However, before the investigation, essentially only acute-phase specimens (CSF more frequently than serum) from Baltimore patients were tested by WNV serologic tests; routine collection of convalescent-phase serum specimens was not feasible.
Enterovirus testing was not a component of DHMH aseptic meningitis surveillance. If enterovirus testing was initiated by the hospital, positive results were often not communicated to DHMH laboratories. Of 69 patients for whom at least one WNV IgM test was performed by DHMH, enteroviral meningitis was subsequently diagnosed in 23 (50% of 46 for whom at least one enterovirus culture or PCR test was performed).

Discussion
Although enhanced WNV surveillance among persons with aseptic meningitis could have been partially responsible for the tripling of Baltimore case reports of aseptic meningitis during the summer of 2001, trends in discharge diagnosis codes suggest that a true increase in aseptic meningitis cases occurred. Despite a concurrent, intense WNV epizootic among birds, no evidence existed that WNV substantially contributed to the aseptic meningitis epidemic. By routine surveillance, five WNV encephalitis cases but no WNV meningitis cases were ultimately detected in Baltimore in 2001. However, in this setting, the five recognized WNV encephalitis cases did not appear to represent large numbers of undiagnosed WNV meningitis cases. Surveillance conducted in other states has also suggested that intense WNV epizootic activity does not necessarily correlate with large numbers of human WNV CNS infections ( 13 ).
Instead, multiple enterovirus serotypes likely caused most of the Baltimore aseptic meningitis cases. This finding is consistent with population-based studies ( 14 – 16 ) and large outbreak investigations ( 17 , 18 ) of aseptic meningitis occurring during the summer and fall in the United States. However, unlike in outbreak investigations of the past few decades, echovirus 13 and echovirus 18 were the most commonly isolated agents in this investigation and might have accounted for the increased number of aseptic meningitis cases in Baltimore. Echovirus 13 was previously rarely detected in the United States. From 1970 to 2000, only 65 of 45,000 enterovirus isolates reported to CDC were echovirus 13 ( 19 ). Echovirus 18 had been relatively quiescent for over a decade; from 1988 to 2000, only 200 isolates were reported to CDC ( 20 ). In a study conducted from 1986 to 1990 in Baltimore hospitals among infants <2 years of age who were hospitalized with aseptic meningitis, only one case of echovirus 13 meningitis and two cases of echovirus 18 meningitis (among 167 serotyped enterovirus isolates) were identified ( 14 ).
In 2001, the previously rarely detected echoviruses 13 and 18 were the two enteroviruses most commonly reported to CDC ( 20 ). Multiple states reported echovirus 13 in association with aseptic meningitis outbreaks, and Tennessee reported an aseptic meningitis outbreak attributed to both echovirus 13 and 18 ( 19 ). Where previously rarely detected, echovirus 13 was isolated in association with aseptic meningitis outbreaks in Australia during 2001 ( 21 ) and in the United Kingdom ( 22 ) and Germany ( 23 ) during 2000. Surveillance data have previously demonstrated that in one or two seasons a usually quiescent serotype may cause an outburst of clinical disease superimposed on background, area-endemic enteroviruses ( 24 ). The worldwide spread of epidemics of clinical enteroviral disease has been documented with other serotypes, including echovirus 9 and enterovirus 70 ( 25 ).
Limitations of this study should be acknowledged. Physicians may have diagnosed more aseptic meningitis in response to WNV publicity; however, physicians more likely recognized that the risk and discomfort of a lumbar puncture required to diagnose meningitis outweighed the public health interest in identifying an untreatable condition, WNV meningitis. Regarding the investigation, only aseptic meningitis cases that could be rapidly identified at the six Baltimore hospitals were included. Not all patients underwent the same tests in the same laboratories, and the quality of enterovirus testing differed because of variable conditions of specimens. Additional results of other tests performed at reference laboratories might have become available only after the investigation ended. As a result, although any positive WNV test result would likely have been reported, the percentage of enterovirus test-positive cases could be inaccurate. Finally, similar to previous studies of the epidemiology of aseptic meningitis ( 2 ), 59% of cases remain undiagnosed; another, untested agent may have caused the increased number of aseptic meningitis cases in Baltimore.
The consistent predominance of enteroviruses among the known causes of aseptic meningitis in children and adults versus the relative infrequency of WNV meningitis (even during an intense WNV epizootic) warrants reconsideration of WNV surveillance testing strategies. Most cases of aseptic meningitis were reported to DHMH without a known cause. For these patients, WNV testing was the first priority to enable early detection of WNV epidemics that would warrant additional vector control interventions. By contrast, enterovirus testing was not a component of surveillance among patients with aseptic meningitis. Although no WNV meningitis was identified, >30% of the patients who underwent WNV testing were later determined to have had enteroviral meningitis. Many patients with aseptic meningitis were tested for an apparently rare virus, WNV, before being tested for the common agents, enteroviruses. During nonepidemic years, WNV IgM ELISA may be low yield when performed as a first-line test for aseptic meningitis. By contrast, enterovirus testing likely can identify the cause of a large fraction of aseptic meningitis cases.
The complexities and resource requirements of WNV serologic testing suggest that other testing strategies need to be developed. During 2001, DHMH laboratories conducted WNV testing for 440 patients statewide, often performing multiple tests for each patient; 6 patients were diagnosed with WNV disease. WNV ELISAs require at least 2–3 days to complete. PCR tests for WNV in CSF specimens are more rapid but have poor sensitivity ( 26 ). Because patients often seek medical care early after illness onset, when WNV antibodies might not yet be detectable, serologic tests of specimens collected at the time of first symptoms may also have poor sensitivity. Serologic testing of convalescent-phase serum specimens may be the most sensitive method to detect WNV infection. Yet collecting convalescent-phase specimens can be logistically difficult and, as in Baltimore's WNV surveillance program, may not be routinely feasible. When specimens are collected through primary care physicians, billing issues can be problematic, and each home visit for collection of blood specimens may require several hours of staff time.
An improved laboratory-based surveillance strategy might include a two-stage testing algorithm at hospital or public health laboratories to evaluate patients with aseptic meningitis. The goals would be 1) to judiciously use specimens of limited quantity (especially CSF) to rapidly identify common or treatable causes of aseptic meningitis and 2) to improve the yield of the more complex testing required to diagnose arboviral disease.
As a first stage of testing, multiplex PCR tests have been used to detect enteroviruses, herpes simplex 1 and 2, and varicella zoster ( 27 , 28 ). Several studies indicate that PCR tests for enteroviruses ( 29 ) and herpes simplex virus ( 30 ) are highly specific, can be completed more rapidly (<6 hours required), require less quantity of CSF, and potentially are more sensitive than traditional cell culture. Using PCR tests to identify enterovirus infections in patients with aseptic meningitis can reduce unnecessary ancillary tests and antibiotic or antiviral therapy and allows shortened hospitalizations ( 31 ). In addition, treatment for enteroviral infection may become available in the near future ( 32 ).
If no diagnosis is made after completion of screening tests for common or treatable agents and evidence of regional WNV or other arbovirus activity exists, a second stage of testing might include arbovirus IgM ELISA of acute- and convalescent-phase specimens. To rule out WNV disease, WNV IgM and WNV IgG ELISAs may need to be conducted approximately 8–45 days after illness onset (WNV IgG ELISAs may be complicated by cross-reactions that can only be differentiated by logistically difficult plaque reduction neutralization tests) ( 10 ). The importance of the timing of specimen collection should be clearly communicated to healthcare providers.
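The proposed two-stage strategy is itself an algorithm and can be sketched as such. In the outline below, the test interfaces (`pcr_positive`, `wnv_igm_positive`) are hypothetical names we introduce for illustration; the ordering logic follows the text above.

```python
# Sketch of the two-stage testing algorithm proposed above (not an
# implementation from the study).

STAGE_ONE_AGENTS = ("enterovirus", "HSV-1", "HSV-2", "varicella zoster")

def evaluate(patient, regional_arbovirus_activity: bool) -> str:
    # Stage 1: rapid multiplex PCR of CSF for common or treatable agents.
    for agent in STAGE_ONE_AGENTS:
        if patient.pcr_positive(agent):        # hypothetical interface
            return agent
    # Stage 2: arbovirus serology only if stage 1 is negative and there is
    # regional WNV or other arbovirus activity; IgM/IgG ELISAs should use
    # specimens collected ~8-45 days after onset, and cross-reactive IgG
    # results may still need plaque-reduction neutralization testing.
    if regional_arbovirus_activity and patient.wnv_igm_positive():
        return "presumptive WNV disease (confirmatory testing indicated)"
    return "undiagnosed"
```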
The epidemiology of aseptic meningitis and other CNS infections is not fixed and may vary by location; in the same location, it may vary from season to season. For example, while relatively quiescent in most years, WNV has the potential to cause large epidemics in humans. Because WNV disease does not have unique clinical manifestations, adequate laboratory testing is the only way to identify human WNV epidemics. Refinement of laboratory testing strategies for WNV surveillance may help conserve resources and build broader public health laboratory capacity for arboviral and other CNS infections.

Abstract

While enteroviruses have been the most commonly identified cause of aseptic meningitis in the United States, the role of the emerging, neurotropic West Nile virus (WNV) is not clear. In summer 2001, an aseptic meningitis epidemic occurring in an area of a WNV epizootic in Baltimore, Maryland, was investigated to determine the relative contributions of WNV and enteroviruses. A total of 113 aseptic meningitis cases with onsets from June 1 to September 30, 2001, were identified at six hospitals. WNV immunoglobulin M tests were negative for 69 patients with available specimens; however, 43 (61%) of 70 patients tested enterovirus-positive by viral culture or polymerase chain reaction. Most (76%) of the serotyped enteroviruses were echoviruses 13 and 18. Enteroviruses, including previously rarely detected echoviruses, likely caused most aseptic meningitis cases in this epidemic. No WNV meningitis cases were identified. Even in areas of WNV epizootics, enteroviruses continue to be important causative agents of aseptic meningitis.
When national surveillance for aseptic meningitis was conducted in the United States, the Centers for Disease Control and Prevention (CDC) received reports of 7,000 to 14,000 cases of aseptic meningitis per year from 1984 to 1994 ( 1 ). Enteroviruses are the leading identifiable cause of aseptic meningitis in children and adults, particularly in summer and autumn ( 2 ). In smaller proportions, mumps virus (primarily in studies before 1980), herpesviruses, lymphocytic choriomeningitis virus, arboviruses, Leptospira , and many other viral and nonviral agents have been identified in etiologic studies of aseptic meningitis in the United States ( 3 , 4 ). However, the epidemiology of aseptic meningitis is not static and, with the appearance of emerging infectious agents such as West Nile virus (WNV), warrants periodic reevaluation.
WNV infection is usually asymptomatic but may cause a wide range of syndromes including nonspecific febrile illness, meningitis, and encephalitis. In recent WNV epidemics in which neurologic manifestations were prominent (Romania, 1996 [ 5 ]; United States, 1999–2000 [ 6 , 7 ]; and Israel, 2000 [ 8 ]), meningitis was the primary manifestation in 16% to 40% of hospitalized patients with WNV disease. However, because WNV meningitis has nonspecific clinical manifestations and requires laboratory testing for a definitive diagnosis, case ascertainment and testing practices can affect the number of cases diagnosed. Because WNV testing in U.S. surveillance programs has focused on patients with encephalitis of undetermined cause ( 9 ), the role of WNV as a cause of aseptic meningitis in the United States is not clear.
A 2001 investigation in Baltimore provided an opportunity to evaluate the role of WNV in the epidemiology of aseptic meningitis and to assess WNV surveillance. From Baltimore City and County, 118 cases of aseptic meningitis with onsets from June 1 to September 30, 2001, were reported to Maryland's Department of Health and Mental Hygiene (DHMH), compared with an average of 39 cases during the same summer season in 1997–2000. Approximately 95% of these 2001 cases were reported without a known cause. Simultaneously, an intense WNV epizootic among birds was detected in the Baltimore area (288 WNV-infected dead birds and 14 WNV-infected mosquito pools were collected before September 30, 2001). Early in the summer, nearly 100% of dead crows collected from some sections of Baltimore City tested positive for WNV. When the investigation of aseptic meningitis was initiated in mid-September, one case of human WNV encephalitis had been reported from Baltimore. The investigation's objectives included 1) identification of the predominant cause(s) of the aseptic meningitis epidemic, emphasizing the relative contributions of WNV and enteroviruses, and 2) evaluation of hospital-based WNV surveillance of patients with aseptic meningitis, including the strategies used for diagnostic testing.

Acknowledgments
The authors thank the many persons at hospitals, laboratories, Baltimore City and County health departments, and Maryland’s Department of Health and Mental Hygiene who contributed time and effort to this investigation, especially Kathleen Arias, Ruth Bertuzzi, Colleen Clay, Jeanine Brown, Diane Lagasse, Polly Ristaino, Donna Feldman, Phyllis Tyler, Joanne Venturella, and Matthew Wallace; Joseph Scaletta for providing Maryland’s zoonotic WNV surveillance data; and Brad Biggerstaff for statistical advice.
Dr. Julian is currently in training in infectious diseases at Penn State Milton S. Hershey Medical Center. The original work for this study was conducted while Dr. Julian was an Epidemic Intelligence Service officer in the Arbovirus Diseases Branch, Centers for Disease Control and Prevention. She participated in surveillance for West Nile virus and for adverse events potentially related to yellow fever vaccine.

Emerg Infect Dis. 2003 Sep;9(9):1082-1088.
PMC3016785 (PMID 14531380)

To the Editor: September 2003 marks the anniversary of the deaths of Jonathan M. Mann and his wife Mary Lou Clements aboard Swiss Air flight 111, which crashed off the shore of Peggy's Cove, Nova Scotia, 5 years ago. Although Jonathan and I were both members of the Council of State and Territorial Epidemiologists in the early 1980s, when Jonathan served as state epidemiologist for New Mexico, our paths did not cross until years later in 1990. Jonathan had reluctantly resigned as director, Global AIDS Activities, World Health Organization, to become full professor at Harvard School of Public Health. I had taken a year's leave of absence from my position in Maine to enroll in Harvard's Master of Public Health program.
In a talk at the Centers for Disease Control and Prevention, Jonathan once outlined many of his hopes and fears for AIDS activities worldwide. Moved by his pleas for global commitment to the epidemic, I sought out Jonathan at the opening reception for new Harvard students. I shared his dreams for public health activism. We believed in inspiring others to careers in applied public health, so we initiated a brown bag lunch series for students and faculty to share experiences about work in public health ( 1 ). The common thread throughout these discussions was universal human rights and respect for human dignity.
Jonathan went on to establish the Francis Xavier Bagnoud Center for Health and Human Rights at the Harvard School of Public Health and used his position to promote health as the broad-based core of human values. His lectures on universal human rights centered on the idea that health transcends geographic, political, economic, and cultural barriers. Jonathan drew on his past experiences with the HIV epidemic to argue that the developing world would never achieve economic or political stability unless the health of its people improved. He maintained that, if not addressed, the health problems of the developing world would pose a global threat. “Public health,” he wrote, “too often studies health without intruding upon larger, societal, inescapably laden issues.... If the public health mission is to assure the conditions in which people can achieve the highest attainable state of physical, mental and social well-being, and if these essential conditions predominantly are societal, then public health must work for societal transformation” ( 2 ).
Jonathan argued that discrimination and other violations of human rights were primary pathologic forces working against the improvement of public health and that if we ignored the plight of those whose rights were violated, we would be less than human ourselves. Jonathan very much admired Eleanor Roosevelt, chair, Declaration of Human Rights Drafting Committee, who on the 10th anniversary of the declaration asked, "Where, after all, do universal human rights begin? In small places, close to home—so close and so small that they cannot be seen on any map of the world. Such are the places where every man, woman and child seeks equal justice, equal opportunity, and equal dignity. Without concerted citizen action to uphold them close to home, we shall look in vain for progress in the larger world" ( 3 ).
On Jonathan’s desk at Harvard, amidst family photographs, was a framed joker taken from an ordinary deck of cards. When I asked about its significance, he responded that, despite life's challenges, it remains important to smile. So smile we must at the memory of Jonathan and his many accomplishments. Each year, the Council of State and Territorial Epidemiologists remembers by holding a distinguished lecture named in honor of Jonathan M. Mann.
The public health practitioner must respond to the needs of people and yet be sensitive to world politics. In solving difficult issues, the practitioner must understand the interconnection of social values and scientific truths and work collaboratively with the medical community. Moved to the forefront by recent acts of terrorism, public health has achieved recognition as a first responder and as an integral part of planning for and responding to catastrophic health crises. We cannot promote safety and security if we fail to recognize, and advocate for, people around the globe who do not have access to basic health care, adequate living and working conditions, or education to enlighten their response to life's challenges. The anniversary of Dr. Mann's untimely death serves as a reminder to the medical and public health communities of the ongoing need to promote universal human rights and to focus energies and resources on a global approach to public health.

Acknowledgment
The author thanks Richard E. Hoffman, former state epidemiologist, Colorado Department of Public Health and Environment, for his help in preparing this article.

Emerg Infect Dis. 2003 Sep;9(9):1181-1182.
PMC3016786 (PMID 14531379)

To the Editor: Severe acute respiratory syndrome (SARS) is an emerging infectious disease worldwide, and relapsing SARS is a major concern. We encountered a 60-year-old woman who was admitted to the Princess Margaret Hospital in Hong Kong on March 29, 2003, with a fever of 39°C, chills, cough, malaise, and sore throat for 2 days before admission. She had no history of travel within 2 weeks of admission. She also had no close contact with patients who had a diagnosis of suspected or confirmed SARS. A chest radiograph on admission indicated consolidation over the right middle zone. In accordance with the diagnostic criteria proposed by the World Health Organization (WHO), this patient's condition was diagnosed as SARS in view of her symptoms, temperature, and chest radiograph findings ( 1 ).
Standard microbiologic investigations to exclude the common respiratory viruses and bacteria of community-acquired pneumonia, including Mycobacterium tuberculosis , were negative in our patient. Reverse transcriptase–polymerase chain reaction (RT-PCR) of nasopharyngeal aspirate samples was negative for coronavirus twice. The coronavirus antibody titer was less than 1/25. The patient was initially treated with oral clarithromycin (500 mg twice a day) and an intravenous amoxicillin-clavulanate combination (1.2 g three times a day). Despite the negative evidence for coronavirus infection, she was treated with intravenous ribavirin (24 mg/kg once a day) and hydrocortisone (10 mg/kg once a day) after 48 hours of antibiotic therapy ( 2 ). The patient's symptoms were relieved, and she remained afebrile 3 days after admission. Tolerance of the medication was good except for a moderate degree of hemolytic anemia (her hemoglobin level dropped to 9.1 g/dL) and hypokalemia that developed during treatment. On day 15, the chest radiograph was clear. The patient was discharged after a 3-week hospital stay.
The patient attended the outpatient clinic on day 35, complaining of exertional dyspnea, low-grade fever, and malaise since her discharge. Her chest radiograph showed extensive shadowing. A computed tomographic scan of the thorax indicated widespread ground-glass shadowing in both lung fields, especially prominent in the left lower and lingular lobes. Her hemoglobin level had dropped further, to 8.4 g/dL. Sputum culture yielded substantial growth of methicillin-sensitive Staphylococcus aureus and Pseudomonas aeruginosa . RT-PCR results of throat and nasal swabs were positive twice for coronavirus, but coronavirus cultures from these sites were negative. One month after onset, her coronavirus antibody titer was 200. In view of a possible relapse of SARS, she was treated with oral ribavirin (1,200 mg a day) and a lopinavir/ritonavir combination (133.3 mg/33.3 mg per capsule, 3 capsules twice a day), in addition to an intravenous piperacillin/tazobactam combination. The patient was afebrile, and symptoms improved 3 days after admission. Serial chest radiographs showed gradual resolution of the shadowing. Subsequent RT-PCR and sputum cultures were negative.
This case illustrates several important issues regarding problems of infection control, diagnosis, and management of SARS. As the definition of SARS is nonspecific, patients with upper respiratory infection or community-acquired pneumonia could be mislabeled as having SARS. Accommodating confirmed SARS patients and patients mislabeled as having SARS in the same facility may be disastrous. Unfortunately, isolating every single case is impossible, particularly when a large number of patients are admitted. Our patient may have acquired the disease after admission since she was placed in the same ward with other patients confirmed to have SARS. For this reason, special cohorting of SARS patients with closely related signs and symptoms should be strictly implemented at admission. Since fever is the most common feature of SARS, isolating febrile cases with respiratory or gastrointestinal symptoms may be appropriate. Even patients with fever alone should be quarantined since the other symptoms of SARS may not be clinically obvious. Secondly, the sensitivity of diagnosing a coronavirus infection on admission is only 32% to 50% by nasopharyngeal RT-PCR test ( 3 , 4 ). Many infected cases will be missed as a result. Our patient may have had a relapse of disease during her second admission, although she had positive RT-PCR and antibody surge only 1 month after onset. However, we could not conclude whether the first RT-PCR on admission was a false negative or whether the patient acquired coronavirus infection in the hospital. Our study showed that sensitivity for diagnosing coronavirus infection could be increased by performing RT-PCR on samples from different parts of the body ( 4 ). Unfortunately, these samples were not taken from our patient. Furthermore, the chest infection with organisms recovered from her sputum could be the sole reason for her second admission, especially when her immune system was weakened by the administration of a high-dose steroid. The presence of genetic material for coronavirus from her nasal cavity and throat might not suggest that the virus is active. The absence of coronavirus growth in this patient might indicate that the virus is no longer viable, although the culture technique itself might not be sensitive enough to justify this claim. Therefore, further refinement of the diagnostic techniques for SARS is essential, especially for diagnosis during early onset. Thirdly, giving treatment to a patient without a legitimate diagnosis may be inappropriate, especially when the treatment carries substantial adverse effects, as illustrated in our patient, and a universally accepted therapy has not been available. Whether lopinavir/ritonavir combination is the key to a cure remains to be clarified, despite the satisfactory response that we observed, since the clinical and radiologic improvement in our patient might be the natural course of the disease.

Emerg Infect Dis. 2003 Sep;9(9):1180-1181.
PMC3016787 (PMID 14519239)

Discussion
Responding to and anticipating the difficulties encountered by existing automated reporting systems can improve current systems and guide the development of future infectious disease surveillance. Addressing the limitations of automated reporting systems is essential: conventional notification methods should continue during the adjustment period, coding standards should be promoted, data should be validated, and end-users should be involved.
As illustrated in this study, lapses in data transmission occur during initial deployment of automated reporting systems. The potential risks attributable to lapses or errors in automated electronic reports are great, as are costs associated with misdiagnoses and treatment of healthy persons ( 16 ). Experiences in Hawaii and Pennsylvania indicate the need for continuing with existing reporting mechanisms during the first year while new systems are being refined.
Our study calls for evaluations to validate new automated systems before they are integrated into public health surveillance. While health departments and CDC have typically collaborated in such efforts, involvement of providers and laboratorians is likely to yield additional insights. Participation of public health officials is indicated in evaluations of automated methods that are being developed in research settings to capture nonreportable syndromes for bioterrorism detection.
Partnerships among state health departments, clinical laboratories, providers, CDC, and other diagnostic systems are needed to promote widespread use of uniform coding standards (LOINC and SNOMED) and Health Level 7 for messaging. As demonstrated in New York State, involving all users early in the planning stages enhances the success of an automated electronic reporting system ( 13 ). CDC could facilitate laboratory participation in the use of standards by assisting health departments in identifying benefits, such as the use of LOINC-coded data for antimicrobial resistance monitoring.
Current federal funding for emergency preparedness surveillance and epidemiology capacity ( 17 ) is expected to stimulate widespread use of automated systems in infectious disease reporting. However, automated systems are a complement rather than a substitute for human involvement in interpreting laboratory findings and screening for errors. Furthermore, the requirement that providers and laboratories report immediately by telephone when they detect organisms indicating an outbreak or an unusual occurrence of potential public health importance ( 18 ) is expected to continue even when automated reporting systems are implemented. Complete replacement of human judgment in reporting conditions suggestive of CDC category A bioterrorism agents (available from: URL: http://www.bt.cdc.gov/Agent/Agentlist.asp ) or other conditions that require immediate investigation is unrealistic.
Despite the limitations we have described, automated electronic systems hold promise for modernizing infectious disease surveillance by making reporting more timely and complete. Modern technology can translate into better public health preparedness by enhancing and complementing existing reporting systems. | While newly available electronic transmission methods can increase timeliness and completeness of infectious disease reports, limitations of this technology may unintentionally compromise detection of, and response to, bioterrorism and other outbreaks. We reviewed implementation experiences for five electronic laboratory systems and identified problems with data transmission, sensitivity, specificity, and user interpretation. The results suggest a need for backup transmission methods, validation, standards, preserving human judgment in the process, and provider and end-user involvement. As illustrated, challenges encountered in deployment of existing electronic laboratory reporting systems could guide further refinement and advances in infectious disease surveillance.
Keywords: | The primary purpose of reporting diseases is to trigger an appropriate public health response so that further illness can be prevented and public fears allayed. The threat of emerging infections and bioterrorist attacks has heightened the need to make disease surveillance more sensitive, specific, and timely ( 1 , 2 ). Recent advances in provider and laboratory information management have facilitated one step towards the modernization of surveillance: the development of automated reporting systems ( 3 , 4 ). With recent funding for activities to defend the public’s health against terrorism and naturally occurring diseases, the development of automated reporting systems has accelerated ( 5 ).
However, technologically innovative reporting systems need to be consistent with the purpose of disease reporting. Wholesale adoption of automated electronic reporting systems in their current form might instead represent a quick response to the pressures of the moment rather than a fully considered decision that acknowledges some of the documented problems with the new technology. We review here current limitations of systems that provide automated notification of reportable conditions identified in clinical laboratories. A more thorough understanding of the pitfalls of such existing systems can provide insights to improve the development and implementation of new media in infectious disease surveillance.
With the computerization of patient and clinical laboratory data, automated notification of reportable events to health departments is often assumed to be more effective than conventional paper-based reporting ( 6 ). In recent years, the Centers for Disease Control and Prevention (CDC) has been funding several states to develop electronic laboratory reporting ( 7 ). With electronic reporting, laboratory findings (e.g., Escherichia coli O157:H7 isolates) are captured from clinical laboratory data and transmitted directly to the state. In turn, the state routes messages to local health units, as illustrated in the Figure. The National Electronic Disease Surveillance System (NEDSS) and bioterrorism preparedness initiatives are expected to further enhance disease surveillance by supporting integration of electronic data from various sources ( 4 , 8 ). Evidence from deployed systems shows promise in the ability of electronic laboratory reporting to deliver more timely and complete notifications than paper-based methods ( 9 – 12 ).
At the same time, experiences in Pennsylvania, New York, Hawaii, California, and other states indicate that implementation of automated reporting also poses unanticipated challenges. Five problem areas have been identified: sensitivity, specificity, completeness, coding standards, and end-user acceptance.
Sensitivity
To achieve the objective of triggering a local public health response, automated electronic systems should consistently report cases that would have been reported by conventional methods. Contrary to expectations, automated reports seldom replicate the traditional paper-based system. Errors in data transmission reduce sensitivity in automated electronic reporting systems. An evaluation of electronic laboratory reporting in Hawaii documented that automated reports were not received for almost 30% of the days on which the paper-based method generated a report, suggesting that automated reporting alone was potentially suboptimal. Lapses in electronic reporting were due to various causes, including ongoing adjustments to the data extraction program ( 11 ). In California, lapses in a semiautomated electronic laboratory reporting system were traced to a failure in forwarding reports from the county of diagnosis to the county of residence ( 12 ). In Pennsylvania, lapses in automated notification have resulted from the occasional failure of data extraction at the clinical laboratory computer, difficulties deciphering reportable diseases from test results that used local terminology rather than Logical Observation Identifier Names and Codes (LOINC) (available from: URL: http://www.regenstrief.org/loinc ), and problems in the transmission of data files to, and access by, local health jurisdictions. To prevent interruption of reports while the automated system was being refined, Pennsylvania opted to continue conventional paper-based reports for 8 months after initiating electronic reporting.
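One practical safeguard during such transition periods is a daily reconciliation of the two reporting streams, flagging dates on which the paper-based system produced a report but the automated feed did not. The sketch below illustrates the idea only; the dates and variable names are hypothetical and are not drawn from any of the systems described here.

```python
from datetime import date

# Hypothetical logs: dates on which each channel delivered at least one
# report for a given condition (illustrative data only).
paper_report_days = {date(2002, 3, d) for d in (1, 2, 3, 5, 8, 9, 12)}
electronic_report_days = {date(2002, 3, d) for d in (1, 3, 5, 9, 12)}

# Days the conventional system reported but the automated feed was silent;
# each such day is a potential transmission lapse worth investigating.
lapses = sorted(paper_report_days - electronic_report_days)

coverage = 1 - len(lapses) / len(paper_report_days)
print(f"Automated feed covered {coverage:.0%} of paper-report days")
for day in lapses:
    print("possible lapse:", day.isoformat())
```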
Specificity
Typically, automated reporting increases not only data on reportable events but also the number of extraneous reports (e.g., nonreportable conditions, unnecessary negative reports, or duplicate reports). In addition, false-positive results are increased by automated abstraction of culture results entered as free text. For example, in an evaluation of an electronic laboratory reporting system in Allegheny County, Pennsylvania, negative results of Salmonella isolates were automatically transmitted as positive Salmonella results because the software recognized the organism name ( 9 ). Often, automated reporting transmits preliminary test results followed by results of confirmatory tests for the same condition. This practice is desirable because some duplicates may actually provide useful preliminary test results that might trigger timely responses ( 9 , 10 ). However, multiple test results increase the time needed for data processing. In addition, low specificity attributable to extraneous records of nonreportable culture results is also problematic. While automated programs can be expected to improve over time, erroneous or missing data will initially continue to arise and require manual checking and recoding.
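The Salmonella example is a classic free-text pitfall: naive substring matching fires on negated results. The toy matcher below, written with invented result strings and a deliberately simple negation check, shows the failure mode and one crude mitigation; real systems would rely on coded results or a proper text-processing pipeline rather than this sketch.

```python
NEGATION_CUES = ("no ", "not ", "negative for", "absence of")

def mentions_salmonella(result_text: str) -> bool:
    """Naive matcher: fires whenever the organism name appears."""
    return "salmonella" in result_text.lower()

def positive_for_salmonella(result_text: str) -> bool:
    """Slightly safer matcher: ignores results phrased as negatives."""
    text = result_text.lower()
    if "salmonella" not in text:
        return False
    return not any(cue in text for cue in NEGATION_CUES)

reports = [
    "Salmonella enteritidis isolated",    # true positive
    "No Salmonella species isolated",     # negated result
    "Culture negative for Salmonella",    # negated result
]

for r in reports:
    print(f"{r!r}: naive={mentions_salmonella(r)}, "
          f"negation-aware={positive_for_salmonella(r)}")
```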
Programming solutions might offer relief in eliminating extraneous records. But in a climate of bioterrorism, a complete replacement of human judgment is probably unacceptable for many. Therefore, in planning new systems, accounting for the time and effort of an experienced epidemiologist to review electronic laboratory data before routing them to investigators will be essential.
Completeness of Case Records
To be useful, case reports received through conventional or automated methods must contain data in key fields identifying the patient and physician (e.g., name, address, and telephone number) and the specimen (e.g., collection date, type, test, and result). Lack of sufficient identifying information for follow-up investigations is a serious limitation in many currently operating automated systems.
In addition, experiences in New York and Pennsylvania indicate that the lack of a patient’s address is a barrier to routing electronic laboratory data to local health departments. Locating a patient’s residence is also useful for recognizing clusters of diseases attributable to natural causes or intentional acts of terrorism. Automated means were intended to improve the completeness of case-record fields by duplicating required fields, but this has not always been the case ( 13 ). Whether laboratories omit these data in their reports or whether the data elements are not provided on the initial forms submitted with specimens is unclear. Widespread dissemination to both clinical laboratories and providers of standardized disease reporting forms specifying the information required by health departments could reduce this problem. Such information could also be made readily available through the Internet. An example of what laboratories and providers are required to include in Minnesota is available online (URL: http://www.health.state.mn.us/divs/dpc/ades/surveillance/card.pdf ).
Data Standards
To facilitate use of state-of-the-art electronic surveillance tools, as envisioned in the NEDSS initiative, adoption of the Systemized Nomenclature of Human and Veterinary Medicine (SNOMED) (available from: URL: http://www.snomed.org/ ), LOINC, and Health Level 7 standards (a national standard for sharing clinical data, available from: URL: http://www.hl7.org/ ) by clinical laboratories is essential. In practice, however, clinical laboratories often use locally developed coding schemes or a combination of codes and free text. Data often arrive in multiple file formats or even with multiple formats within one mapping standard ( Figure ). File messages from multiple laboratories are then mapped into a standardized database with the desired variables, including patient and physician contact information, specimen identifiers, test name, and results.
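To make that mapping step concrete, the sketch below normalizes one simplified HL7 v2 OBX (observation) segment into a standard record, translating a local test code to LOINC through a lookup table. The segment layout is abbreviated, the local-code table (including the LOINC entry shown) is illustrative, and production systems would parse full messages with a dedicated HL7 library rather than by string splitting.

```python
# Hypothetical local-code-to-LOINC map, maintained per reporting laboratory.
LOCAL_TO_LOINC = {
    "LAB123": ("625-4", "Bacteria identified in Stool by Culture"),
}

def normalize_obx(obx_segment: str) -> dict:
    """Map one simplified, pipe-delimited OBX segment to a standard record."""
    fields = obx_segment.split("|")
    code, name, system = fields[3].split("^")   # OBX-3: observation identifier
    value = fields[5]                           # OBX-5: observation value
    if system != "LN":                          # not already LOINC-coded
        code, name = LOCAL_TO_LOINC.get(code, (code, name))
    return {"loinc": code, "test": name, "result": value}

segment = "OBX|1|ST|LAB123^Stool culture^L||Salmonella enteritidis"
print(normalize_obx(segment))
```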
To increase use of uniform data coding and Health Level 7 as the standard for automated electronic reporting, further studies are needed to understand barriers encountered by clinical laboratories and ways to overcome them. Cost or lack of information technology resources might be factors contributing to slow adoption of standard coding in small-size clinical laboratories. In addition, variations in reporting requirements across states may be an extra cost to laboratories that serve multiple health jurisdictions. Public health officials could help promote use of coding standards by demonstrating their benefits to laboratories and providers. For example, use of standards such as LOINC facilitates integration of microbiologic culture data, minimizes chances for data errors in translating free text or handwritten test results, and makes it easier for laboratories to monitor antimicrobial resistance patterns. This could be reinforced by introducing regular data quality feedback to all the stakeholders, as illustrated in the Figure.
User Acceptance
This entire process for detecting diseases relies on acceptance and appropriate intervention by those working on the front line of the public health system. As shown in the Figure, public health surveillance largely depends on investigation at the local level, where a determination is made that reported events meet case definitions for reportable and notifiable conditions. Local health departments report data to the state level, from which nationally notifiable diseases (available from: URL: http://www.cdc.gov/epo/dphsi/phs/infdis.htm ) are transmitted to CDC. That agency in turn reports internationally quarantinable diseases to the World Health Organization (available from: URL: http://www.who.int/emc/IHR/int_regs.html ). The process begins with receiving, managing, and using surveillance data. Automated reports in the form of electronic-mail attachments could be cumbersome for some local health departments with limited information technology support. Also, encryption of data for confidentiality reasons increases the complexity of the data retrieval process. Acceptance of automated electronic reporting systems is more likely when disease investigators are given assistance with data analysis and management.
During the 2001 bioterrorism outbreak investigation, labor-intensive methods (i.e., faxes and e-mails) were used for surveillance of cases with clinical syndromes compatible with anthrax among patients in selected counties in New Jersey, Pennsylvania, and Delaware ( 14 ). Because of personnel time demands, automated electronic systems are attractive for surveillance of syndromes suggestive of bioterrorism agents. While automated electronic surveillance systems using patient encounter records for syndromic surveillance might offer relatively low costs of adoption for physicians ( 15 ), other persons in the system may become unduly burdened. For example, when automated reports of syndromes are forwarded to local public health officials, it remains unclear who should interpret and act on the results. The key to the success of such innovative systems outside investigational settings will be their ability to offer meaningful results at an acceptable marginal cost to both reporters and local health departments. Integration of syndromic surveillance into local public health surveillance is less well understood and needs attention. | Acknowledgments
We thank Lee Harrison, Kathleen G. Julian, Elliot Churchill, and David Welliver for their insightful comments.
Dr. M’ikanatha is a surveillance epidemiologist in Pennsylvania. He is interested in the use of new technology to promote notification of reportable diseases and other conditions of public health importance including antimicrobial resistance. | CC BY | no | 2022-01-24 23:43:25 | Emerg Infect Dis. 2003 Sep; 9(9):1053-1057 | oa_package/3b/7d/PMC3016787.tar.gz |
||||
PMC3016788 | 14519249 | Materials and Methods
Clinical Surveillance
ICDDR,B maintains a 2% surveillance system at its Dhaka Hospital, in which clinical information and biologic specimens are collected from every 50th patient treated at the hospital. We used these data to extrapolate the overall numbers of patients with cholera; specimens from these patients were used in the bacteriologic studies described.
V. cholerae Strains
A total of 63 V. cholerae O139 isolates obtained from the recent cholera epidemic were analyzed. Seven strains of O139 vibrios isolated in India between 1992 and 1996, 17 strains of V. cholerae O139 isolated in Bangladesh between 1993 and 1997, and 2 strains isolated in Thailand in 1998 were also included in the study for comparison with the recent epidemic strains. Strains of the recent epidemic were isolated from stools of cholera patients who attended the treatment center of ICDDR,B located in Dhaka during March and April 2002. Stool samples were processed in the laboratory within 2 h of collection for the isolation of V. cholerae . Stools were initially streaked on thiosulphate-citrate-bile salts-sucrose (TCBS; Becton, Dickinson and Co., Sparks, MD) agar plates for selection and presumptive identification of V. cholerae . All strains were subsequently examined by biochemical and serologic tests using standard methods ( 9 ). Strains were stored in sealed deep nutrient agar at room temperature until used for this study. Details of the strains are shown in the Table.
Polymerase Chain Reaction (PCR) Assays
Presence of tcpA genes specific for the classical and El Tor biotypes was determined by using a multiplex PCR assay, as described previously ( 10 ). PCR assays for the tcpI and acfB genes have been described previously ( 6 ). Presence of classical, El Tor, and Calcutta type rstR genes of CTX phage were also determined with PCR by using specific primers derived from the published sequence of the respective genes. Three different forward primers for rstR class , rstR ET , and rstR Calc with sequences 5′-CTTCTCATCAGCAAAGCCTCCATC, 5′-GCACCATGATTTAAGATGCTC, and 5′-CTGTAAATCTCTTCAATCCTAGG, respectively, were used with a common reverse primer (5′-TCGAGTTGTAATTCATCAAGAGTG) to amplify the respective rstR genes. Presence of the rstC gene was also determined by a PCR assay described previously ( 11 ). All primers were synthesized commercially by Oswel DNA Service (University of Edinburgh, Edinburgh, UK). The expected sizes of the amplicons were ascertained by electrophoresis in agarose gels, and the identity of each PCR product was further verified by Southern blot hybridization.
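For readers who want to see the allele-typing logic laid out, here is a toy in-silico rendering of the rstR multiplex assay using the primer sequences quoted above: each allele-specific forward primer is searched on the plus strand and the common reverse primer on the minus strand. The template is a fabricated stand-in, and a real assay must also contend with primer mismatches, orientation, and amplicon size.

```python
# Forward primers quoted in the text, one per rstR allele.
FORWARD = {
    "rstR_class": "CTTCTCATCAGCAAAGCCTCCATC",
    "rstR_ET": "GCACCATGATTTAAGATGCTC",
    "rstR_Calc": "CTGTAAATCTCTTCAATCCTAGG",
}
COMMON_REVERSE = "TCGAGTTGTAATTCATCAAGAGTG"

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a plus-strand DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def type_rstr(template):
    """List the rstR alleles that could amplify from a plus-strand template."""
    hits = []
    rev_site = revcomp(COMMON_REVERSE)  # reverse primer binds the minus strand
    for allele, fwd in FORWARD.items():
        f, r = template.find(fwd), template.find(rev_site)
        if f != -1 and r != -1 and f < r:
            hits.append(allele)
    return hits

# Fabricated template carrying an El Tor-type rstR target (illustration only).
toy_template = ("AAAA" + FORWARD["rstR_ET"] + "N" * 50
                + revcomp(COMMON_REVERSE) + "TTTT")
print(type_rstr(toy_template))  # ['rstR_ET']
```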
Probes and Hybridization
The gene probes used in this study included a 0.5 kb Eco RI fragment of pCVD27 ( 12 ) containing part of the ctxA gene and a 2.1 kb Sph I- Xba I fragment of pCTX-Km containing the entire zot and ace genes and part of orfU ( 13 ). The toxR gene probe was a 2.4-kb Bam HI fragment of pVM7 ( 14 ). The rstR ET probe was a Sac I- Xba I fragment of pHK1 ( 15 ). The rRNA gene probe was a 7.5-kb Bam HI fragment of the Escherichia coli rRNA clone pKK3535 described previously ( 16 ). The O139-specific DNA probe 2R3 was a 1.3-kb Eco RI fragment of pCRII-A3 ( 17 , 18 ), and the SXT probe was a Not I fragment of pSXT1 ( 19 ). PCR-generated amplicons of the rstR genes of classical, El Tor, or Calcutta type CTX prophage were also used as probes whenever appropriate.
For preparation of Southern blots, total cellular DNA was isolated from overnight cultures as described previously ( 20 ). Five-microgram aliquots of the DNA were digested with appropriate restriction enzymes (Bethesda Research Laboratories, Gaithersburg, MD), electrophoresed in 0.8% agarose gels, blotted onto nylon membranes (Hybond, Amersham Biosciences, Uppsala, Sweden), and processed by using standard methods ( 21 , 22 ). The probes were labeled by random priming ( 23 ) using a DNA labeling kit (Bethesda Research Laboratories) and [α-32P]deoxycytidine triphosphate (3,000 Ci/mmol, Amersham Biosciences). Southern blots were hybridized with the labeled probes at 68°C and washed under stringent conditions as described previously ( 6 , 8 ). Autoradiographs were developed from the hybridized filters with Kodak X‐Omat AR x‐ray film (Eastman Kodak Co., Rochester, NY) at –70°C.
Antimicrobial Resistance
All V. cholerae isolates were tested for resistance to antimicrobial drugs by using the method of Bauer et al. ( 24 ) with standard antibiotic disks (Oxoid Ltd., Basingstoke, Hampshire, UK) at the following antibiotic concentrations (μg/disc): ampicillin, 10; chloramphenicol, 30; streptomycin, 10; tetracycline, 30; trimethoprim‐sulfamethoxazole, 1.25 and 23.75, respectively; kanamycin, 30; gentamicin, 10; ciprofloxacin, 5; norfloxacin, 10; and nalidixic acid, 30. | Results
Clinical Surveillance
We noted a marked increase in cholera cases associated with V. cholerae O139 from March to May 2002 ( Figure 1 ). The highest number of cholera patients admitted to the hospital was in March; 69.8% of these cases were attributed to V. cholerae O139, compared with 30.2% caused by the El Tor biotype of V. cholerae O1. Cholera attributable to V. cholerae O139 occurred with similar frequency in men and women, as did cholera attributable to O1 strains. From January 2001 to June 2002, a total of 91 (32%) of 282 case-patients infected with O1 cholera were <5 years of age, but 15 (13%) of 115 of those infected with O139 were <5 years of age (p<0.001). During the same period, 48% of those infected with V. cholerae O1 were >15 years of age, while 76% of those infected with O139 were >15 years of age (p<0.001).
Genetic Analysis of V. cholerae Strains
The rRNA gene restriction patterns using Bgl I consisted of 10 to 14 bands between 11 kb and 1.6 kb in size ( Figure 2 ). The 89 analyzed strains belonged to four different ribotypes (B-I to B-IV). All 63 recently isolated O139 strains produced identical restriction patterns of their rRNA genes and belonged to ribotype B-II. Analysis of the rstR gene showed that O139 strains isolated from 1992 to 1998 carried El Tor type CTX ET prophage, whereas the recent epidemic strains carry the Calcutta type CTX Calc prophage in addition to the CTX ET prophage ( Figure 3 ). All strains were positive for tcpA , tcpI , acfB , toxT , ctxA , zot , and toxR genes, as well as for the O139-specific genomic DNA in DNA probe or PCR assays.
Antibiogram
All strains isolated from the recent epidemic were resistant to nalidixic acid and were susceptible to ampicillin, tetracycline, gentamicin, chloramphenicol, ciprofloxacin, norfloxacin, streptomycin, trimethoprim, and sulfamethoxazole. In these strains, the SXT element, which encodes resistance to streptomycin, sulfamethoxazole, and trimethoprim, carried a deletion of an approximately 3.6-kb region. | Discussion
Generally, a seasonality exists in the cholera cases seen at the ICDDR,B hospital, with increased numbers expected before and after the rainy season. Thus, the increase in total number of cases seen during March and April was not unusual ( Figure 1 ). However, the increase in patient numbers during these months of 2002 was associated with a marked increase in cases associated with V. cholerae O139, and the numbers of cases infected with serogroup O139 outnumbered those with serogroup O1. The ages of patients infected with O139 strains were significantly higher than those infected with O1 strains (p<0.001). Since the onset of O139 cholera in 1992, this organism has tended to infect patients older than those with O1 cholera ( 1 ). The more advanced age of this group was explained by a lack of immunity to this new serogroup in adults who were likely partially immune to the O1 serogroup. Thus, after nearly 10 years of endemicity in Bangladesh, V. cholerae O139 continues to cause more cases of cholera in older adults.
Ribotype Analysis
The emergence of the O139 serogroup has provided a unique opportunity to witness the epidemiologic changes associated with the displacement of an existing serogroup by a new emerging one and thus provides new insights into the epidemiology of the disease. All 63 recently isolated O139 strains produced the identical restriction pattern of their rRNA genes. This restriction pattern has been previously designated as ribotype pattern B-II ( 6 – 8 ) and was first detected among epidemic V. cholerae O139 strains that emerged in 1992 and 1993. Cholera epidemics during 1992 to 1993 in India and Bangladesh that were associated with the first appearance of V. cholerae O139 were caused by strains belonging to two different ribotypes, designated as B-I and B-II. Since then, several new ribotypes of O139 vibrios have been detected which were associated with localized outbreaks during 1995 to 1996 or sporadic cases ( 8 ). The results suggest that strains of the recent epidemic were clonal and were derived from one of the initial clones of V. cholerae O139. We therefore investigated possible genetic changes sustained by this strain during the nearly 9 years since major epidemics were caused by strains of this ribotype.
Analysis of CTX Prophage
In V. cholerae , the genes encoding cholera toxin ( ctxAB ) are part of the CTX prophage ( 25 ). A typical CTXΦ genome has two regions: the core and the RS2. The 4.5-kb core region comprises several open reading frames, including ctxAB , zot , ace , and orfU , and encodes cholera toxin as well as the functions required for virion morphogenesis; by contrast, the 2.5-kb RS2 region encodes the regulation, replication, and integration functions of the CTXΦ genome ( 26 ). Previous studies have described the existence of at least three widely diverse repressor genes ( rstR genes) carried by different CTX phages (i.e., CTX ET Φ, CTX class Φ, and CTX Calc Φ) ( 27 , 28 ). This diversity of rstR constitutes the molecular basis for heteroimmunity among CTX phages, which are otherwise genetically similar. We examined the CTX prophage in the recent and previously isolated O139 strains with specific probes. Analysis of the rstR gene carried by the recent epidemic strains showed that, unlike the O139 strains of 1993, which carried multiple copies of an El Tor type CTX ET prophage, the new O139 strains carry at least one copy of the Calcutta type CTX Calc prophage in addition to the CTX ET prophage. As a result of heteroimmunity, toxigenic classical strains of V. cholerae O1 are known to be infected by CTXΦ isolated from El Tor biotype strains, whereas toxigenic El Tor strains are resistant to further infection by the same phage. Similarly, strains carrying an El Tor type CTX prophage can be superinfected by the Calcutta type CTX phage ( 29 ). Therefore, the new epidemic strains appear to have arisen by acquisition of a Calcutta type CTX phage by strains that originally harbored only the El Tor type CTX prophage, since the new strains carry both prophages ( Figure 3 ). What determines the reemergence of particular epidemic strains is not clear, but this study clearly shows changes in the CTX genotype attributable to the acquisition of a new CTX phage by the O139 strains associated with the recent epidemic.
Antibiogram of Reemergent O139 Strains
V. cholerae O139, which emerged during 1992 and 1993, was sensitive to tetracycline and showed a trend of increased resistance to trimethoprim-sulfamethoxazole (SXT) and streptomycin. This resistance was mediated by a ~99-kb self-transmissible transposon-like element (SXT constin) encoding resistance to sulfamethoxazole, trimethoprim, and streptomycin, the resistance genes being clustered together in a 9.4-kb region ( 19 ). In the present study, all strains isolated from the recent epidemic were found to be susceptible to SXT and streptomycin ( Table ). To identify the genetic changes associated with the observed SXT sensitivity, we used a cloned SXT gene probe to study restriction fragment length polymorphism in the SXT transposon. Three different Bgl I restriction patterns (patterns A–C) of the SXT element were observed among the O139 strains tested ( Figure 4 ). Strains producing patterns A and B were resistant to SXT and streptomycin and included strains isolated between 1992 and 1996, whereas all strains from the recent epidemic produced pattern C and were susceptible to all three antibiotics. Further analysis of the restriction patterns suggests that the restriction site heterogeneity possibly occurred as a result of a deletion of an approximately 3.6-kb region of the SXT element in strains that were sensitive to SXT and streptomycin. The deletion in the SXT element associated with sensitivity to SXT and streptomycin was first detected in strains of ribotype B-III isolated from an outbreak in Bangladesh in 1997 ( 6 ). In keeping with the observation in Bangladesh, comparison of the antibiotic resistance patterns between the O139 strains isolated during 1992 and 1993 and those isolated in 1996 and 1997 in India also showed that the later strains were susceptible to SXT, unlike the O139 strains from 1992 and 1993 ( 30 ). However, in contrast to the previously isolated O139 strains, all O139 strains isolated from the recent epidemic were resistant to nalidixic acid.
Epidemiologic Importance of Genetic Changes in V. cholerae O139
Several previous studies have shown that the O139 serogroup of V. cholerae has been undergoing rapid genetic changes ( 6 – 8 ) since its first emergence. We speculate that the observed changes may have provided, in some unexplained way, the increased fitness that has allowed strains of this serogroup to survive in competition with the existing seventh pandemic strain of V. cholerae O1 and to establish themselves as the etiologic agent of a possible eighth pandemic. The transient disappearance of the O139 serogroup in Bangladesh and its repeated reemergence associated with somewhat altered genetic or phenotypic properties seem to support this speculation. Our study demonstrated the reemergence of V. cholerae O139 strains belonging to a previously described ribotype that has sustained at least three major genetic and phenotypic changes. These changes include the acquisition of a new CTX prophage, a deletion in the SXT element associated with reversion of the drug resistance phenotype against SXT and streptomycin, and the development of nalidixic acid resistance.
The recent epidemic strains were otherwise similar to previously described O139 strains, including possession of the TCP pathogenicity island, as evidenced by the presence of tcpA , tcpI , and acfB genes; the virulence regulatory genes, toxT and toxR ; and the O139–serotype-specific DNA. The role of environmental and host factors that contribute to the emergence of new strains associated with epidemic outbreaks is not clearly known. In the present study, all strains isolated from the recent cholera outbreak belonged to the same ribotype and were genetically and phenotypically identical, suggesting that the recent outbreak in Bangladesh probably started from a point source and may have coincided with the acquisition of one or more critical new properties by a previously existing V. cholerae O139 strain. Clearly these properties included the acquisition of the Calcutta type CTX prophage. Previous studies showed that O139 strains prevailing in Calcutta during 1996 carried this prophage ( 29 , 31 , 32 ), which might have contributed to the dissimilar incidence of O139 cholera in Calcutta and Dhaka during that period ( 33 ). How the initial enrichment of V. cholerae occurred before the initiation of an epidemic is not clear. We speculate that a critical factor for the recent reemergence of O139 vibrios might have been the development of nalidixic acid resistance. Identifying the index case of the present cholera epidemic is not possible. A spontaneous nalidixic acid–resistant V. cholerae O139 strain may have been enriched in a patient undergoing nalidixic acid therapy, leading to the eventual spread of the organism. This is certainly possible in view of the widespread use of nalidixic acid in Bangladesh as a drug to treat other forms of gastroenteritis, including shigellosis. The emergence of V. cholerae O139 has received global attention not only as the first non–O1 V. cholerae capable of causing epidemic outbreaks but also because of the rapid genetic reassortment undergone by strains of this new serogroup. Our study shows yet another set of genetic and phenotypic changes in O139 vibrios and their association with an epidemic of cholera in Bangladesh. These results emphasize the need for continuing molecular epidemiologic surveillance of V. cholerae in Bangladesh and adjoining areas. | During March and April 2002, a resurgence of Vibrio cholerae O139 occurred in Dhaka and adjoining areas of Bangladesh with an estimated 30,000 cases of cholera. Patients infected with O139 strains were much older than those infected with O1 strains (p<0.001). The reemerged O139 strains belong to a single ribotype corresponding to one of two ribotypes that caused the initial O139 outbreak in 1993. Unlike the strains of 1993, the recent strains are susceptible to trimethoprim, sulphamethoxazole, and streptomycin but resistant to nalidixic acid. The new O139 strains carry a copy of the Calcutta type CTX Calc prophage in addition to the CTX ET prophage carried by the previous strains. Thus, the O139 strains continue to evolve, and the adult population continues to be more susceptible to O139 cholera, which suggests a lack of adequate immunity against this serogroup. These findings emphasize the need for continuous monitoring of the new epidemic strains.
Key words: | Vibrio cholerae O139 Bengal first emerged during 1992 and 1993 and caused large epidemics of cholera in Bangladesh, India, and neighboring countries ( 1 – 3 ). This new strain initially displaced the existing V. cholerae O1 strains. During 1994 to the middle of 1995, in most northern and central areas of Bangladesh, the O139 vibrios were replaced by a new clone of V. cholerae O1 of the El Tor biotype, whereas in the southern coastal regions the O139 vibrios continued to exist ( 4 – 6 ). During late 1995 and 1996, cases of cholera attributable to both V. cholerae O1 and O139 were again detected in various regions of Bangladesh. Since 1996, cholera in Bangladesh has been caused mostly by V. cholerae O1 of the El Tor biotype; only a few cases have been attributable to O139 serogroup strains. The epidemiology of cholera in Bangladesh changed again recently, and a large outbreak of cholera caused predominantly by V. cholerae O139 occurred in the capital city Dhaka and adjoining areas.
From early March to the end of April 2002, approximately 2,350 cholera patients associated with V. cholerae O139 were admitted to the Dhaka Hospital of the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B). A preliminary estimate showed that >30,000 cases of cholera occurred in Dhaka and the adjoining areas during this outbreak (A.S.G. Faruque, unpub. data). Since the initial emergence of V. cholerae O139 in 1992, we have monitored cholera outbreaks caused by this serogroup in Bangladesh and neighboring regions and have conducted several studies to characterize O139 strains. These studies indicate that strains of the O139 serogroup are undergoing rapid genetic changes, resulting in the origination of new clones; at least seven different ribotypes of O139 vibrios have been documented ( 6 – 8 ). Furthermore, O139 vibrios may have originated from more than one progenitor strain ( 8 ). The transient disappearance and reemergence of V. cholerae O139 in Bangladesh have raised questions regarding the origin of the reemerged O139 vibrios. In this study, we examined the current epidemiology of cholera in Bangladesh and analyzed V. cholerae O139 isolated from the recent outbreak to investigate the origin of the recent epidemic strains as well as to characterize possible genetic changes in O139 vibrios that might have contributed to the recent resurgence of V. cholerae O139. | Acknowledgments
We thank Matthew Waldor for the SXT and rstR gene probes and Afjal Hossain for secretarial assistance.
This research was funded by the Swedish International Development Agency (SIDA) under an agreement with the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) and USAID-Washington grant number HRN-A-00-96-90005-00. The ICDDR,B is supported by countries and agencies that share its concern for the health problems of developing countries. Current donors providing unrestricted support include: the aid agencies of the governments of Australia, Bangladesh, Belgium, Canada, Japan, Kingdom of Saudi Arabia, the Netherlands, Sweden, Sri Lanka, Switzerland, and the United States.
Dr. Faruque is a scientist at the International Centre for Diarrheal Disease Research, Bangladesh and the head of the Molecular Genetics Unit. His major research interests include microbial evolution, epidemiology and prevention of cholera, and environmental microbiology. Dr. Faruque’s current work focuses on understanding the molecular basis for the emergence of epidemic V. cholerae strains and developing vaccines against cholera. | CC BY | no | 2022-01-25 23:40:20 | Emerg Infect Dis. 2003 Sep; 9(9):1116-1122 | oa_package/6a/10/PMC3016788.tar.gz |
||
PMC3016789 | 14519242 | Material and Methods
Study Area
Yuto is located in the Ledesma Department, in the northeastern portion of Jujuy Province (23° 38′ S, 64° 28′ W). General topography is determined by the outlying spurs of the Andes range, and the area is covered by dense subtropical vegetation. The easternmost part of the study area is flat or slightly undulating, very fertile, with numerous rivers and streams and an average elevation of 349 m. Mean annual temperature is 20.7°C, ranging from 14.5°C in July (winter) to 25.8°C in January (summer). The rainy season starts in November as an annual monsoon, which lasts through the summer and into early fall; mean annual rainfall is 862 mm with a maximum monthly mean of 191 mm in January and a minimum monthly mean of 4 mm in July. Similar habitats and topography continue to the south (to Tucumán Province) and the north (to Oran, in Salta Province). The original biome of the area is a subtropical forest called “the yungas forests,” with numerous tree species of high economic value ( Anadenanthera colubrina, Calycophyllum multiflorum, Phyllostylon rhamnoides, Astronium urundeuva, Maclura tinctoria, Cordia trichotoma, among others). This forest area is now considerably fragmented and modified by human agricultural activities. The main cultivated crop is sugar cane, which is grown from May to November. Other products include citrus fruits, avocados, pears, bananas, mangos, papayas, cherimoyas, and vegetables. Agriculture is the main source of employment, mostly involving manual labor. Housing for agricultural laborers is typically of very poor construction, in many cases consisting of shacks of salvaged wood and sheet metal. This type of domestic and peridomestic habitat, with easy rodent access and poor sanitation, offers prime conditions for rodent infestations and is found even in the urban area of Yuto.
Population Survey
A cross-sectional study was performed on a sample of the general population of the area (population 7,900). The estimated sample size to document the overall prevalence in the total population was approximately 340 persons. Figure 1 shows the distribution of the general population and that of the survey participants by sex and age. Local physicians explained the objectives of the study to participants, and an informed consent agreement was signed by each person or by parents or legal guardians of minors. Each participant had a blood sample drawn and completed a questionnaire that covered personal data, ethnicity, household and workplace characteristics, occupation, domestic sightings of rodents, recreational activities, time of residence in the area, history of travel inside and outside the country, previous disease compatible with HPS, and contact with a confirmed HPS patient.
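The article does not report the design parameters behind the roughly 340-person target. For orientation only, a standard way to size a prevalence survey is Cochran's formula with a finite-population correction, sketched below; the inputs are purely illustrative and are not claimed to reproduce the authors' calculation.

```python
from math import ceil

def prevalence_sample_size(N, p, d, z=1.96):
    """Cochran sample size for estimating a prevalence p to within +/- d
    at ~95% confidence (z = 1.96), corrected for finite population N."""
    n0 = z ** 2 * p * (1 - p) / d ** 2       # infinite-population size
    return ceil(n0 / (1 + (n0 - 1) / N))     # finite-population correction

# Illustrative inputs only; the survey's actual assumptions are not stated.
print(prevalence_sample_size(N=7900, p=0.5, d=0.05))    # 367
print(prevalence_sample_size(N=7900, p=0.2, d=0.045))   # 293
```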
Rodent Study
Trapping Site Selection
Sherman live traps were placed at likely sites of exposure of previously documented HPS cases. Nine sites were selected: four sites in Yuto District (Guaraní [13 lines, 347 traps], Jardín [4 lines, 60 traps], 17 Has [4 lines, 124 traps], and 8 Has [7 lines,168 traps]); one in El Bananal, a small rural village 7 km outside Yuto (11 lines, 214 traps); three on or adjacent to farms ( fincas [26 lines, 1,100 traps]); and one in a brushwood area (seminatural habitat [8 lines, 500 traps]). One farm was located in Urundel, Salta Province, in the immediate vicinity of Yuto, and the owner, workers, and inhabitants belonged to the Yuto community. Of the 73 capture lines, 19 were inside the household, 25 were peridomestic, 6 were in weeds near grapefruit culture, 5 in a brushwood, 5 at the side of a river or stream, 3 in vegetable gardens, 3 at roadsides, 2 in fruit orchards, 2 at the edge of a canal, and 3 adjacent to wire fences, railroads, or gullies. Outside lines consisted of 25 traps, each separated by 5 m. Lines located inside and outside the houses corresponded both to rural and urban areas. The number of traps inside the houses and in peridomestic urban lines depended on the area available at each site (8–20 traps). Figure 2 shows the location of trapping sites in Yuto and its surroundings.
Trapping and Processing
Trapping was performed from May 30 to June 4, 2000. Small mammals were collected each morning and transferred to a field laboratory for processing. After being anesthetized with Isofluorane (Abbott Laboratories Ltd., Queenborough, England), animals were bled from the retroorbital sinus by using heparinized capillary tubes, and then killed by cervical dislocation while still anesthetized. Samples of serum, blood clot, brain, heart, kidney, liver, and lung were placed in cryovials and stored in liquid nitrogen for their subsequent analysis at the Instituto Nacional de Enfermedades Virales Humanas (INEVH). Carcasses were tentatively identified in the field and preserved in 10% formalin and sent to the Natural Sciences Museum “Miguel Lillo” in San Miguel de Tucumán for taxonomic confirmation. Small mammal trapping and processing were performed according to established safety guidelines ( 7 ).
Serology
Human blood samples were centrifuged at Yuto Hospital. Serum was separated, placed in cryovials, and stored in liquid nitrogen until further testing at INEVH. Rodent samples were centrifuged in the field laboratory and stored as described. Hantavirus antibodies were detected by ELISA. Briefly, 96-well polyvinyl microplates were coated overnight with SNV recombinant and control antigens; serum samples and positive and negative controls were then applied, followed by a peroxidase-conjugated antihuman IgG for human serum and a mix of peroxidase-conjugated anti– Rattus norvegicus and anti– Peromyscus maniculatus IgG for rodent serum. The substrate applied was 2,2′-azino-di(3-ethylbenzthiazoline sulfonate) (ABTS; Kirkegaard & Perry Laboratories, Inc., Gaithersburg, MD). Serum dilutions were considered positive if the optical density was >0.2 after adjustment by subtraction of the corresponding negative-antigen optical density. Serum samples with titers >1:400 were considered positive.
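The positivity rule just described (adjusted optical density >0.2, endpoint titer >1:400) is simple enough to express in code. The sketch below is a schematic restatement of that rule with invented optical density readings; it is not the laboratory's actual data pipeline, and the helper names are ours.

```python
OD_CUTOFF = 0.2        # adjusted optical density threshold from the text
TITER_CUTOFF = 400     # reciprocal titer; >1:400 is scored positive

def well_positive(od_antigen: float, od_control: float) -> bool:
    """A dilution is reactive if the OD against SNV antigen, minus the
    negative-antigen OD for the same serum, exceeds the cutoff."""
    return (od_antigen - od_control) > OD_CUTOFF

def endpoint_titer(dilution_ods: dict) -> int:
    """Highest reciprocal dilution that is still reactive (0 if none)."""
    reactive = [d for d, (ag, ctrl) in dilution_ods.items()
                if well_positive(ag, ctrl)]
    return max(reactive, default=0)

# Invented readings: (SNV-antigen OD, control-antigen OD) per dilution.
sample = {100: (1.10, 0.12), 400: (0.71, 0.10),
          1600: (0.38, 0.09), 6400: (0.21, 0.08)}

titer = endpoint_titer(sample)
print(f"endpoint titer 1:{titer}; positive: {titer > TITER_CUTOFF}")
```

| Results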
Serologic Survey in the General Population
Hantavirus IgG was found in 22 (6.5%) of 341 serum samples tested. For males, hantavirus antibody prevalence was 10%; females had a prevalence of 3.7%. Among the 341 participants, 56 were <10 years of age, 239 were 11–50 years of age, and 45 were >51 years of age (1 was without age data). Mean age among antibody-positive persons was 41 (range 18–87); 77% of these were in the 11- to 50-year age group. Hantavirus antibody prevalence according to sex and age are shown in Table 1 .
Most (292/341, 85.6%) of the population in the survey were locally born or native Argentinians. Twenty-five (7.3%) participants were foreigners, including 24 Bolivians and 1 Paraguayan. Twenty-two persons (6.5%) had aboriginal ancestors, with 17 belonging to the Guaraní community. No information was available for two persons. Only one of the aboriginal participants had IgG antibodies to hantaviruses (1 [4.5%] of 22). Table 2 shows hantavirus antibody prevalence among the study population by ethnicity or nationality.
Dwellings were characterized according to their location as urban (>500 m from an open field), suburban (50–500 m from an open field), and rural (<50 m from an open field). Table 3 shows hantavirus antibody-prevalence findings in relation to house location and occupation of participants. Forty persons with urban occupations included 10 administrative employees, 13 health workers, 5 housewives, 4 students, and others with miscellaneous occupations (technician, gardener, bricklayer, retired). Among suburban study participants, 23 were housewives, 28 were students, and the rest were employed in a variety of occupations (employee, health agent, maid, bricklayer). Among rural participants, 75 were agricultural workers, 64 housewives, 29 students, 10 sawmill workers, and the rest had miscellaneous occupations (employee, trader, bricklayer, retired). All three hantavirus-antibody–positive participants living in urban dwellings worked in rural areas (a bricklayer, a sawmill worker, and an agricultural worker), as did two antibody-positive participants living in suburban houses (agricultural workers). Antibodies were found in 13 (17.1%) of 76 agricultural workers (laborers, farmers, fincas owners). Antibody prevalences for other occupations included housewives (4 [6.3%] of 64) and sawmill workers (1 [10%] of 10). We found no hantavirus antibodies among 61 students or 20 healthcare workers, including physicians, nurses, health agents, and a dentist.
If occupations are classified as rural (positive IgG, 20 [10%] of 201) or nonrural (urban and suburban; positive IgG, 1 [1%] of 97), hantavirus antibody prevalence was significantly higher in the former (chi square = 7.95, p = 0.004). Among those with rural occupations, participants whose work included agricultural activities had a higher antibody prevalence than those whose work did not (positive IgG, 13 [17.1%] of 76 versus 7 [5.6%] of 125; chi square = 6.98, p = 0.008).
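Both reported statistics can be reproduced from the counts given, assuming an uncorrected Pearson chi-square on the 2-by-2 tables (a plausible reading, since the values match exactly; the authors do not state whether a continuity correction was applied):

```python
from scipy.stats import chi2_contingency

# Rows: antibody-positive, antibody-negative; columns: exposure groups.
rural_vs_nonrural = [[20, 1], [201 - 20, 97 - 1]]
agricultural_vs_other_rural = [[13, 7], [76 - 13, 125 - 7]]

for label, table in [("rural vs. nonrural", rural_vs_nonrural),
                     ("agricultural vs. other rural",
                      agricultural_vs_other_rural)]:
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"{label}: chi square = {chi2:.2f}, p = {p:.4f}")
# rural vs. nonrural: chi square = 7.95, p = 0.0048
# agricultural vs. other rural: chi square = 6.98, p = 0.0082
```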
Table 4 shows clinical and epidemiologic data for the study population. Most (86%) hantavirus antibody–positive participants did not recall previous HPS clinical manifestations. The presence of rodents was reported by 77% of hantavirus antibody–positive and 79% of hantavirus antibody–negative persons, both in peridomestic and workplace settings.
Among persons who had previous contact with known HPS patients, 6 (6.1%) of 98 had hantavirus antibodies. Similar antibody prevalence was found in persons who did not have prior contact with a known HPS patient (16 [6.6%] of 242; chi square, p>0.05). One hundred five persons (30.8%) reported no trips outside the area; of the remainder, 58% had traveled only to other areas inside the province or to nearby Salta Province, 41.6% reported trips to Bolivia in addition to local trips, and only a small percentage (0.4%) had visited relatively distant areas of the country. Hantavirus antibodies were found in 15 (6.4%) of 233 persons who had traveled outside the immediate region and in 7 (6.6%) of 105 who had not.
Rodent Study
A total of 361 small mammals were captured in 2,427.5 trap-nights (overall trap success 14.8%). Captures represented two rodent families, three subfamilies, and 13 species ( Table 5 ). Calomys and Akodon were the most frequently trapped genera (38% and 40.2%, respectively) and the sole taxa in which hantavirus antibodies were found. Hantavirus IgG was found in 4 (2.8%) of 140 Akodon simulator and 7 (5.1%) of 137 Calomys callosus . The genus Oligoryzomys includes several species previously identified as hantavirus reservoirs in the three HPS-endemic areas of the country. In this study, this genus accounted for 12 (3.3%) of the 361 captures; however, none was positive for hantavirus antibody. Table 6 shows the species distribution according to the different habitats sampled. The specimens of C. callosus were trapped inside dwellings located at different fincas .
More than half (6 [54.5%] of 11) of the positive rodents were captured in weeds, roadsides, or peridomestic sites at fincas (fruit trees and vegetable plantations); four were trapped in brushwood, and the last one at the riverside very near an HPS patient’s dwelling at El Bananal. Two lines, both located in grapefruit plantations (weeds and roadside), each yielded two positive rodents. | Discussion
The differences observed in South American hantavirus infections relative to the classical SNV-related syndrome in North America have been suggested to reflect differences in surveillance approaches as well as in the pathogenicity of the viruses, their rodent reservoirs, and ecologic factors. The particular pattern of mild clinical illnesses and low case-fatality rate found in Yuto determined our selection of the area for detailed studies. In this first investigation, we attempted to determine the prevalence of past infection in the general population by testing for hantavirus IgG antibodies, to identify risk factors, and to identify the rodent species implicated in the transmission of hantaviruses.
The hantavirus antibody prevalence found in the human population survey is one of the highest reported in Argentina, with a mean of 6.5% (females 3.7%, males 10%). Previous studies in the other HPS-endemic areas (central and southwestern) of Argentina found antibody prevalence varying from 0.1% to 1.5% ( 4 , 8 ). Males in their 30s and 40s showed antibody prevalences of >14% and 16%, respectively. Most antibody carriers (82%) did not report clinical manifestations consistent with HPS. Thus, the low case-death rate clearly reflects milder clinical illnesses (reported case-fatality rates are 40% to 50% in both Americas) ( 9 , 10 ).
In previous studies of asymptomatic contacts of HPS case-patients from this area, we found a high prevalence of IgG (4 [9.5%] of 42). This finding could be the result of infection from a common source or interhuman transmission ( 6 ), as described in the 1996 outbreak in El Bolson-Bariloche, southern Argentina ( 11 – 13 ). In this survey, we did not find differences in the hantavirus antibody prevalence in persons with and without known HPS case contact (6.1% and 6.6%, respectively). No antibodies were found among the healthcare workers studied, and the distribution of clinical cases and antibody carriers by sex showed a predominance of males (in patients from 1997 to 2000, the percentage of males was 76.7%, 23/30). These findings, collectively, do not favor the hypothesis that interhuman spread is playing a large role in the transmission of hantaviruses in this area. These findings also reinforce the view that environmental, occupational, and residential factors create an increased risk for rodent exposure in occupational, domestic, and peridomestic settings. This conclusion is supported by the noticeable observation of rodents and their signs in households and workplaces reported by the study population and patients. The risk factor that showed a significant difference between antibody-positive and -negative persons was a rural occupation, especially one associated with agricultural activities. This finding was also reflected in the high male antibody prevalence observed in the survey and predominance of males among patients.
A high antibody prevalence has been previously found in indigenous communities of the Gran Chaco of Paraguay (40.4%) and Argentina (17.1%, in Salta Province). In those studies, the aboriginal persons evaluated belonged to closed communities that still fish, hunt, and gather for their sustenance. Their main ethnic groups are Chorote, Chulupi, and Wichi of the Mataco-Mataguayan linguistic family ( 5 ). In our study area, aboriginal and foreign people are in the minority (14% in the sample). Indigenous people (22 in this sample) belong to different groups; 77% are Guaranís from Paraguay. Only one person from the Charagua community, which originated in Bolivia, participated in the study. This person had hantavirus antibodies. Among the 25 foreign participants, the only antibody-positive person belonged to the Bolivian majority ( 14 ). All nonnative inhabitants, including those with aboriginal ancestors and foreigners, are integrated members of the general population, sharing jobs and household conditions with local people, and therefore sharing similar risk factors.
More than two thirds of the studied group had traveled inside or outside the province, and/or to Bolivia. Such trips are frequent among migrant farm laborers, who follow harvest seasons. No differences were found in the antibody prevalence between persons who traveled and those who did not, probably because the reported trips were primarily within the same ecologic area.
The genetic diversity of sigmodontine rodents in South America is well known ( 15 ). Characterization of rodent species and their association with indigenous hantaviruses are currently under study. Putative rodent reservoirs of pathogenic hantaviruses identified in Argentina thus far belong to the Oligoryzomys genus ( O. longicaudatus for Andes and Oran genotypes, O. chacoensis for Bermejo genotype, and O. flavescens for Lechiguanas genotype). Three previous rodent expeditions were performed in the northwestern Argentine hantavirus-endemic area in relation to HPS studies: two in Salta Province (July 1995 and October 1996) and one in Jujuy Province (May 1998), involving the villages of La Mendieta and Libertador General San Martin. Hantavirus antibody–positive species from Salta included O. chacoensis (1 [3.7%] of 27), A. simulator (1 [3.8%] of 26), and O. longicaudatus (2 [7.7%] of 26), and in Jujuy, O. chacoensis (1 [8.3%] of 12) ( 16 ).
In the present study, hantavirus antibody prevalence among rodents was similar to that previously reported in the country (2.7% to 12.5%, varying by area and species) ( 16 ), but the species found with hantavirus antibodies were different. The genera of hantavirus antibody–positive rodents corresponded to those with higher relative abundance, Akodon and Calomys . Akodon, associated with the Pergamino virus in central Argentina, has thus far not been reported to be pathogenic for humans ( 17 ). Among the species of Calomys , C. laucha has been identified as the reservoir of Laguna Negra virus in Paraguay, but no previous evidence suggests it circulated in Argentina ( 14 ). Sigmodontine rodents were collected in every rural habitat in which we used traps, including inside dwellings, peridomestic sites, weeds close to grapefruit and banana plantations, vegetable fields, and mainly natural habitats such as woodbrush and the sides of rivers and streams. Most positive rodents were captured in weeds or roadsides inside or close to cultivated citrus or vegetables. Other rodents were captured in peridomestic sites associated with HPS cases and in woodbrush near one of the fincas . Focal concentration of positive rodents appeared to occur, with multiple positive rodents often trapped in the same trap line.
Characteristics of household or working habitats included abundant potential food and cover for rodents, attributable to substandard housing and sanitation. Sigmodontine rodents were also trapped in peridomestic sites of the urban area (8 Has and 17 Has quarters), where the features of the environment and buildings were similar to those of suburban or rural areas. C. callosus was the only wild rodent species captured inside houses. This observation is in accord with previous descriptions from Bolivia in relation to the Bolivian hemorrhagic fever (BHF) outbreaks; C. callosus is the reservoir of Machupo virus, the arenavirus linked to BHF, in the BHF-endemic area of El Beni ( 18 ). Control of the large BHF outbreaks of the 1960s was achieved through measures directed at preventing infestation of C. callosus in towns and villages. These same measures should also be useful in this area to prevent hantavirus transmission, at least from rodents of the genus Calomys that are adapted to domestic and peridomestic settings.
Our results favor the hypothesis that less virulent hantaviruses circulating in this region are responsible for the mild and subclinical illnesses observed. Ongoing investigations that include the genetic characterization of the viruses associated with the different clinical forms will help to clarify this point. | We initiated a study to elucidate the ecology and epidemiology of hantavirus infections in northern Argentina. The northwestern hantavirus pulmonary syndrome (HPS)–endemic area of Argentina comprises Salta and Jujuy Provinces. Between 1997 and 2000, 30 HPS cases were diagnosed in Jujuy Province (population 512,329). Most patients had a mild clinical course, and the death rate (13.3%) was low. We performed a serologic and epidemiologic survey in residents of the area, in conjunction with a serologic study in rodents. The prevalence of hantavirus antibodies in the general human population was 6.5%, one of the highest reported in the literature. No evidence of interhuman transmission was found, and the high prevalence of hantavirus antibody seemed to be associated with the high infestation of rodents detected in domestic and peridomestic habitats.
Keywords: | Hantaviruses (family Bunyaviridae, genus Hantavirus ) are zoonotic viruses of rodents that produce two major clinical syndromes in humans: hemorrhagic fever with renal syndrome (HFRS) in Asia and Europe and hantavirus pulmonary syndrome (HPS) in the Americas. Since HPS was initially characterized in the United States in 1993 and the associated hantavirus (Sin Nombre virus, or SNV) was identified, an increasing number of human cases and SNV-related viruses have been identified in different countries of North and South America ( 1 ). Three HPS-endemic areas have been recognized in Argentina: northern (Salta and Jujuy Provinces), central (Buenos Aires, Santa Fe, and Entre Rios Provinces), and southern (Rio Negro, Neuquén, and Chubut Provinces). In the North, cases of acute respiratory distress syndrome of unknown etiology have been reported since 1984 at Orán, Salta Province. The illness, known in the area as ”Distress of Orán,” had an unexplained etiology until the early 1990s, when these cases were first associated with Leptospira interrogans infections and later with hantaviruses. Of 21 patients tested between 1991 and 1993, eight showed serologic evidence of recent leptospira infection by microscopic agglutination test, and 4 had a positive immunoglobulin (Ig) M enzyme-linked immunosorbent assay (ELISA) using Hantaan virus antigen ( 2 , 3 ). Ultimately, these patients were recognized as having HPS, and a new SNV-related hantavirus, now designated Oran virus, was recognized in the region ( 4 ). The first HPS cases in Jujuy Province were confirmed in 1997, and since then, their number has been progressively increasing. Isolated cases were detected in several different localities (San Pedro, La Mendieta, Caimancito, Libertador General San Martín, Fraile Pintado, and San Salvador, the provincial capital), but most originated in the town of Yuto and surroundings. A high percentage of confirmed cases had the usual nonspecific prodrome but were not followed by a distress syndrome. The case death rate ( 4 [13.3%] of 30) was noticeably lower than that reported in other areas of the country and in the literature. Some strains of hantavirus were then hypothesized to produce subclinical disease. Only one hantavirus antibody-prevalence study had been performed among inhabitants of the Gran Chaco of Paraguay and Argentina (Salta Province), and hantavirus antibodies were found in 20% to 40% of participants ( 5 ).
Some differences in the clinical signs and symptoms of HPS have been recognized in other areas of the Americas compared with those described after infections with SNV; these differences have included possible person-to-person transmission, a different spectrum of clinical illnesses, an elevated incidence of infections in children, and higher antibody prevalence ( 6 ). For instance, patients from the area under study had unusually mild clinical symptoms and low death rates, supporting the idea that a less pathogenic hantavirus could be circulating in that area or that host or environmental factors might be responsible for the observed pattern. The objectives of this study were to determine the prevalence of hantavirus antibodies in the general population, identify risk factors, and investigate the rodent species implicated in hantavirus transmission in Yuto. | Acknowledgments
We thank Horacio López, Germán O’Duyer, Cesar Polidoro, Enrique Serrrano, Miguel Canchi, Alberto Segobia, Julio Gil, Bernardino Perez, Monica Diaz, and David Flores for field work and John Boone for reviewing the manuscript.
This work was supported by the Administración Nacional de Laboratorios e Institutos de Salud (ANLIS), Ministerio de Salud Pública de la Nación, Argentina; and by National Institute of Health grant 1R01 AI45059.
Dr. Pini is a medical doctor who specializes in pathology. Since 1993, she has worked at Instituto Nacional de Enfermedades Virales Humanas in the hantavirus program. | CC BY | no | 2022-01-27 23:35:41 | Emerg Infect Dis. 2003 Sep; 9(9):1070-1076 | oa_package/3f/90/PMC3016789.tar.gz |
||
PMC3016790 | 14519254 | Discussion
Tick-borne relapsing fever caused by B. hermsii is acquired only within the geographic range of its specific tick vector, O. hermsi . This tick has been found in southern British Columbia, Washington, Idaho, Oregon, California, Nevada, Colorado, and the northern regions of Arizona and New Mexico ( 2 , 4 , 16 ). As this and other outbreaks demonstrate, patients often become ill after they leave disease-endemic areas where they were bitten by infectious ticks ( 2 , 6 ). One patient (case 1) remained untreated early in his illness in spite of seeking medical attention at a hospital near the site of exposure.
The cabin where the patients were infected has been owned by the same family for nearly 40 years. None of the members of the four related families questioned recalled any prior illnesses consistent with what they experienced with this outbreak of relapsing fever. The event that appears to have instigated this outbreak was the partial removal and disturbance of animal nest material in the east end of the attic. Some ticks presumably fell through the spaces between the ceiling boards to the two bedrooms below. The boy (case 5) slept all but part of one night on the porch, but during the night of August 6 a thunderstorm forced him indoors, and he moved to the front east bedroom. His onset of illness in St. Louis was on the afternoon of August 11, which equates to an incubation period of approximately 4.5 days. The incubation periods for the others were estimated at 5 to 15 days.
The animals that maintained the enzootic cycle with B. hermsii and O. hermsi in the cabin are unknown. Red squirrels are highly susceptible to infection with B. hermsii ( 17 ), are important hosts for these ticks ( 1 ), and were abundant in the forest surrounding the cabin. However, no evidence of squirrels was found in the cabin. Deer mice were routinely in the cabin, and the owners used poison bait stations to control the indoor population. One dead mouse was found near the cabin, and two carcasses were in the attic material that had been removed on July 25. American robins ( Turdus migratorius ) had nested in the attic, and two dead robin chicks were found in the material collected from the attic on August 24. Recently, a B. hermsii –like spirochete was implicated in the death of a northern spotted owl ( Strix occidentalis ) in Kittitas County, Washington ( 18 ), and many years ago, 26 O. hermsi were collected from the nest of a bluebird (either Sialia mexicana or S. currucoides ) in Summerland, British Columbia ( 19 ). The role of birds in perpetuating relapsing fever spirochetes and their tick vectors in nature is worthy of further investigation. A serologic survey of red squirrels and deer mice in the vicinity of the cabin for immunologic evidence of exposure to B. hermsii might also help explain the enzootic involvement of these rodents.
This outbreak demonstrated for the first time that B. hermsii and its tick vector O. hermsi exist in Montana and caused multiple cases of relapsing fever. Owners of cabins in the vicinity of where the outbreak occurred met with the Montana state epidemiologist and received information regarding the epidemiology and prevention of tick-borne relapsing fever. Although the outbreak was localized, a large area of western Montana has the appropriate ecologic parameters to support enzootic cycles that provide the potential for relapsing fever caused by B. hermsii to occur. A diagnosis of relapsing fever should therefore be considered when patients who have resided or vacationed in western Montana seek treatment for a recurrent febrile illness. | Five persons contracted tick-borne relapsing fever after staying in a cabin in western Montana. Borrelia hermsii was isolated from the blood of two patients, and Ornithodoros hermsi ticks were collected from the cabin, the first demonstration of this bacterium and tick in Montana. Relapsing fever should be considered when patients who reside or have vacationed in western Montana exhibit a recurring febrile illness.
Keywords: | Tick-borne relapsing fever, caused by Borrelia hermsii , is endemic in the higher elevations and coniferous forests of the western United States and southern British Columbia, Canada ( 1 ). Although many multicase outbreaks of relapsing fever associated with B. hermsii and its tick vector, Ornithodoros hermsi , have been reported ( 2 – 6 ), none has been documented in Montana. Patients usually become ill after they have slept in cabins infested with spirochete-infected ticks that feed quickly during the night. The illness has an incubation period of 4 to > 18 days and is characterized by recurring episodes of fever accompanied by a variety of other manifestations, including headache, myalgia, arthralgia, chills, vomiting, and abdominal pain ( 1 ). Relapsing fever is confirmed by the microscopic detection of spirochetes in the patient’s blood ( Figure 1 ) ( 7 ).
In 1927, relapsing fever was diagnosed in a 33-year-old man in Walla Walla, Washington, although his possible site of exposure was Montana ( 8 ). A specific location was not given, however, and the spirochetes causing the illness were not identified. Ornithodoros parkeri , another tick vector of relapsing fever spirochetes in the western United States, was collected during 1936 in Beaverhead County in southwestern Montana, and an undisclosed number of these ticks transmitted Borrelia parkeri to one mouse in the laboratory ( 9 ). If relapsing fever had occurred in Montana, B. parkeri transmitted by O. parkeri would have been the likely etiologic agent ( 9 , 10 ).
In summer 2002, a multicase outbreak of relapsing fever associated with a privately owned cabin occurred in western Montana. Spirochetes were isolated from two patients and identified as B. hermsii , and this spirochete’s tick vector, O. hermsi , was collected from the cabin where the patients slept. This is the first multicase outbreak of tick-borne relapsing fever in Montana and the first report of B. hermsii and O. hermsi in the state, thereby documenting the risk of this infection beyond the geographic range known previously within the United States.
The Study
From July 30 to August 20, 2002, a total of 5 persons in a group of 20 became ill with symptoms consistent with tick-borne relapsing fever during or following their visit to western Montana ( Table ). The common site of exposure was a cabin on the south shore of Wild Horse Island (47°50′30” N; 114°12′30” W) in southwest Flathead Lake, Lake County, Montana. The 875-hectare island became a state park in 1978, although 56 privately-owned properties exist, many of which have cabins. No one lives permanently on the island, and camping overnight (by day visitors to the island) is not allowed. The island is approximately 4.6 km wide from east to west and 3.2 km wide from north to south; its elevation varies from 881 m at the shoreline to its highest point of 1,141 m. The island is separated from the mainland by 2.0 km to the south and 2.4 km to the north. The habitats include Ponderosa Pine and Douglas Fir forests, native grassland, and steep rocky outcroppings. Red squirrels ( Tamiasciurus hudsonicus ) and deer mice ( Peromyscus maniculatus ) are abundant.
On July 22, the first of four related families arrived at the cabin, and on July 25, a 54-year-old man (case 1, Table) entered the east end of the attic and removed nest material that had accumulated there. He slept at night and napped during the day in one of two bedrooms located immediately under the area of the attic where the nest material had been partially removed. On July 30, he became ill with fever, headache, arthralgia, myalgia, and rash, and 2 days later he visited the emergency room of a local hospital but a diagnosis was not made. Over the next several days he improved, and on August 6, he and his family began driving back to their home in Seattle, Washington. During the trip, he relapsed with another febrile episode. That evening, he was taken to the emergency room of a Seattle hospital and admitted early the next morning. On the basis of his history, a diagnosis of relapsing fever was considered, although spirochetes were not detected in the blood.
Three additional families (17 persons) arrived at the cabin on July 31 and on August 5 and departed on August 8 and 9. One family of five returned to their home in Seattle, and three of them became ill on August 12, 17, and 20 (cases 2–4). Relapsing fever was suspected immediately, and spirochetes were detected in Wright-stained blood smears from two patients (cases 2, 3). On August 10, a family of six returned to St. Louis, Missouri, where a 13-year-old boy (case 5) became ill the next day. On August 12, he was taken to an emergency room and to his pediatrician the following day. His mother communicated with the family in Seattle, where a young girl (case 2) was ill, and spirochetes had been detected in her blood. This discovery led to the detection of spirochetes in a blood smear from the boy. All patients had fever and other clinical manifestations consistent with tick-borne relapsing fever ( Table ). They were all treated with doxycycline, and all recovered with no subsequent relapses.
Blood smears from three of the Seattle patients (cases 2–4) were prepared and stained separately with monoclonal antibodies H9724, which recognizes all known species of Borrelia ( 11 ), and H9826, which is specific for B. hermsii ( 12 ), and rabbit hyperimmune serum to B. hermsii ( Figure 2A ). Indirect immunofluorescence assays (IFA) and microscopic analysis demonstrated spirochetes from two patients (cases 2, 3) that were reactive with all antibodies, which identified these bacteria as B. hermsii . Blood from the third patient (case 4) was negative for spirochetes with all antibodies. EDTA-treated whole-blood samples from these patients were injected intraperitoneally into mice, and the two samples positive by microscopic examination also produced detectable levels of spirochetemia in mice. Whole blood obtained from the infected mice was injected into modified Kelly’s medium (BSK-H supplemented with 12% rabbit serum; Sigma-Aldrich Corp., St. Louis, MO), and spirochetes that originated from two patients were isolated.
A convalescent-phase serum sample from the first case-patient (case 1) was collected 55 days after the onset of his illness. This sample was examined by IFA with whole cells of B. hermsii ( 13 ) and by immunoblot with a whole-cell lysate of B. hermsii and recombinant GlpQ ( 13 ). The patient’s IFA titer to B. hermsii was positive at 1:1,024, and the sample was positive by immunoblot at 1:100 dilution.
The five persons with confirmed or presumptive relapsing fever slept in two adjacent bedrooms in the east end of the cabin under the attic where animal nest material had been partially removed. People who slept only on the outside porch or in other bedrooms did not become ill. On August 24, 2002, the two east bedrooms were examined for ticks, but none were found. The remaining nest material was collected from the attic and taken to Rocky Mountain Laboratories. During the next several weeks, the material was processed with two small Berlese extraction funnels, which separate live arthropods from nonliving debris. Fourteen O. hermsi were recovered, including 1 larva, 10 nymphs, 2 males, and 1 female ( Figure 2B ). The postlarval stages of O. hermsi are very similar to those of O. sparnus, which parasitizes woodrats and deer mice in Utah and Arizona, but the latter species is an incompetent vector of B. hermsii ( 14 , 15 ). The larva collected from the cabin displayed morphologic characteristics consistent with O. hermsi . Voucher specimens (one nymph, one larva) of O. hermsi collected at the study site were deposited in the U.S. National Tick Collection, Georgia Southern University, under accession number RML 123385. The 12 remaining ticks were allowed to feed on a laboratory mouse to determine whether they were infectious. The blood of the mouse did not become spirochetemic during the 10 days after tick bite. These ticks were not examined for infection by other methods and were kept alive to establish a laboratory colony.
On June 21, 2003, the attic, utility room, and bedrooms where the infected persons slept were treated with an over-the-counter insecticide-acaricide (Ortho Indoor Insect Fogger, The Ortho Group, Columbus, OH). Sentinel O. hermsi ticks (late-stage nymphs and adults) from a laboratory colony were confined in open flasks in one treated bedroom (46 m³) and in a family room that was not treated, to examine the efficacy of treatment. After the 4-hour application with two 141-g cans of fogger, all 54 ticks in the treated bedroom were dead, whereas all 52 ticks in the untreated room were alive.
We thank those involved with this outbreak for their interest, patience, information, and logistic support; Merry Schrumpf, Sandra Raffel, Ted Hackstadt and Gary Hettrick for technical assistance; Carol Schwan for help in the field; staff of the Infectious Disease Department of Children’s Hospital, St. Louis, Missouri, for their assistance; Peter Talbot, Burt Finch, and Montana Fish, Game and Parks for boat transportation to the island; and James Musser, Mark Fisher, and Amy Henion for reviewing the manuscript.
Portions of this research were supported by National Institute of Allergy and Infectious Diseases grant AI-40729 to J.E.K.
Dr. Schwan is a senior investigator in the Laboratory of Human Bacterial Pathogenesis at the Rocky Mountain Laboratories, National Institute of Allergy and Infectious Diseases. His research interests include medical entomology, the serodiagnosis of vector-borne infections, and how bacterial pathogens adapt for their biologic transmission by ticks and fleas. | CC BY | no | 2022-01-24 23:45:25 | Emerg Infect Dis. 2003 Sep; 9(9):1151-1154 | oa_package/10/39/PMC3016790.tar.gz |
||||
PMC3016791 | 14519256 | Conclusions
Since the introduction of antimicrobial drugs into therapy, S. pneumoniae has shown a strong ability to acquire resistance to each new antibiotic progressively introduced to treat it.
Surveillance studies suggest that the levels of resistance to macrolide antibiotics in S. pneumoniae are high and still rising ( 9 , 10 ). Ketolides, of which telithromycin is the first to be registered for clinical use, and quinupristin-dalfopristin are new compounds belonging to the macrolide-lincosamide-streptogramin B (MLSb) class of antimicrobial agents. One of the main advantages attributed to these two new families of antibiotics is their ability to retain activity against most resistant pneumococcal isolates ( 11 ). Recently, mutations in the 23S rRNA genes and in ribosomal proteins L4 and L22 have been identified in macrolide-resistant S. pneumoniae , although the predominant mechanisms of resistance are mediated by the ermB or mefA genes ( 12 ). The combination of a mutation in domain V of the 23S rRNA and an insertion in the L22 ribosomal protein gene has not been previously described in S. pneumoniae isolated in vivo or in vitro. In these strains, a high level of resistance to 14-, 15-, and 16-membered-ring macrolides and to clindamycin, as well as resistance to quinupristin-dalfopristin and telithromycin, was observed. The continued use of clarithromycin in the presence of an isolate with an insertion in the L22 ribosomal protein gene may have led to the selection of isolates with mutations in both the L22 and 23S rRNA genes, associated with combined resistance to telithromycin and quinupristin-dalfopristin, although neither of these antibiotics was used. The A2058G mutation in all four 23S rRNA genes alone slightly increased both quinupristin-dalfopristin and telithromycin MICs, as seen in the fifth isolate. The L22 insertion alone, as observed in the second isolate, was enough to confer a high level of quinupristin-dalfopristin resistance and also increased the telithromycin MIC to 2 μg/mL. The combination of both mutations (L22 insertion and A2058G mutation in the 23S rRNA genes) led to a high level of resistance to telithromycin and increased the quinupristin-dalfopristin MIC (third and fourth isolates).
The first isolate was susceptible to all antibiotics tested, and although it had a point mutation in the gyrA gene, the mutation had no phenotypic expression. In mutants obtained in vitro, other authors have observed point mutations in the gyrA gene without mutation in the parC gene, with or without phenotypic expression of quinolone resistance ( 13 ). Nevertheless, using fluoroquinolones to treat a strain that had an existing, but unapparent, first-step mutation in the gyrA gene probably favored the development of the high level of resistance to fluoroquinolones observed in the later isolates. Fluoroquinolone resistance in clinical isolates of S. pneumoniae is still infrequent, but in some places the resistance has been increasing ( 9 , 14 – 16 ).
Until now, most erythromycin- or fluoroquinolone-resistant pneumococci had belonged to only a few serotypes. Finding an erythromycin-resistant serotype 3 was unusual, and the isolation of a fluoroquinolone-resistant serotype 3 S. pneumoniae was the exception, if ever reported. Penicillin or another appropriate β-lactam antibiotic could have been a valid therapeutic option in the absence of allergy to penicillin. Serotype 3 is considered the most virulent of S. pneumoniae serotypes , and it is commonly associated with invasive disease in adults. Most serotype 3 isolates have broad antibiotic susceptibility ( 17 ). A fatal infection associated with a multiply drug-resistant S. pneumoniae serotype 3 was first reported in 1988 ( 18 ). This strain was resistant to erythromycin, clindamycin, and tetracycline.
The therapeutic failure and selection of resistance to several antibiotics by S. pneumoniae , the emergence of new mechanisms of resistance to macrolides in clinical isolates of S . pneumoniae, and the appearance of multidrug resistance in a serotype 3 isolate (ST180) evoke concern. | Streptococcus pneumoniae serotype 3, isolated from a penicillin-allergic patient and initially susceptible to fluoroquinolones, macrolides, lincosamides, quinupristin-dalfopristin, and telithromycin, became resistant to all these drugs during treatment. Mutations in the parC and gyrA and in the 23S rRNA and the ribosomal protein L22 genes were detected in the resistant isolates.
Keywords: | Macrolide antimicrobial drugs and new fluoroquinolones have become good therapeutic choices in the treatment of penicillin-resistant Streptococcus pneumoniae infections and in penicillin-allergic patients with pneumococcal pneumonia. Until now, clinical failures of fluoroquinolones during treatment of pneumococcal infections have rarely been reported ( 1 – 3 ) and development of resistance in S . pneumoniae to quinupristin-dalfopristin and telithromycin during or after treatment with a macrolide or a combination of macrolide and quinolone antibiotics has never been reported.
We describe failure of treatment of pneumococcal pneumonia in a 71-year-old man, who was allergic to penicillin and had a history of chronic obstructive pulmonary disease. During treatment, isolates that were susceptible to levofloxacin, clarithromycin, clindamycin, quinupristin-dalfopristin, and telithromycin became resistant.
The Study
S . pneumoniae isolates were identified to the species level by colony morphology, optochin sensitivity, and bile solubility tests. Serotyping was performed by the Quellung reaction (Quellung antisera, Staten Seruminstitut, Copenhagen). MICs of the antibiotics and criteria of susceptibility and resistance, unless otherwise indicated, were those of the broth microdilution procedure described by the National Committee for Clinical Laboratory Standards (NCCLS) ( 4 ). The agar dilution method ( 5 ) and the E-test, as referred to in Table 1 , were performed to expand the range of dilutions available in the broth microdilution trays. No discordance was observed among the susceptibility results obtained by broth microdilution, agar dilution, or the E-test.
Molecular typing methods (pulsed-field gel electrophoresis [PFGE], BOX-polymerase chain reaction [PCR], and multilocus sequence typing) of the isolates were performed according to previously described protocols ( 6 ). Presence of the mefA, ermB, and ermA ( ermTR ) genes and point mutations at Ser-79 in the parC and at Ser-81 in the gyrA genes were detected as previously described ( 6 ). Fragments of the domains II and V of the 23S rRNA genes and of the genes encoding ribosomal proteins L4 and L22 were amplified by using the primers and conditions previously described ( 7 , 8 ). Amplification products were sequenced after purification.
Case Description
In January 2002, a 71-year-old man, who was allergic to penicillin and had a history of chronic obstructive pulmonary disease, was hospitalized due to pneumonia. The first S . pneumoniae strain was isolated from sputum obtained before antibiotic treatment with intravenous levofloxacin (500 mg once a day for 13 days) was begun. On day 4, intravenous clarithromycin (500 mg twice a day) was added but withdrawn after 4 doses. On day 14, clinical and radiologic conditions had deteriorated, and treatment was changed to intravenous clarithromycin (500 mg) and intravenous ciprofloxacin (200 mg) twice a day for 7 days. On the same day, a second pneumococcal isolate resistant to levofloxacin and clarithromycin but susceptible to clindamycin was obtained ( Table 1 ). The clarithromycin MIC for this second isolate was 2 μg/mL, and the double-disk test ( 9 ) showed that susceptibility to clindamycin was not modified after erythromycin induction. Initially, this second isolate was incorrectly reported as clarithromycin susceptible because of an erroneous record of the result of the disk-diffusion method. On day 24, the patient was discharged with oral clarithromycin. Twenty-four hours later, the patient was readmitted with exacerbation of the respiratory infection and cor pulmonale, and two pneumococcal isolates resistant to levofloxacin, clarithromycin, and clindamycin were found within 6 hours. The patient received trimethoprim-sulfamethoxazole for 5 days; a fifth pneumococcal isolate was recovered from a pleural effusion specimen. The pneumonia completely resolved after 10 days of treatment with vancomycin. The five S . pneumoniae serotype 3 isolates recovered over a 32-day period had the same PFGE and BOX-PCR patterns and the same multilocus sequence typing result (ST180).
All S. pneumoniae isolates were susceptible to penicillin (MIC < 0.03 μg/mL), trimethoprim-sulfamethoxazole (MIC < 0.5/9.5 μg/mL), tetracycline (MIC < 2 μg/mL), chloramphenicol (MIC < 2 μg/mL), and vancomycin (MIC=0.5 μg/mL). The first isolate was susceptible to both macrolides and fluoroquinolones. This isolate had a levofloxacin MIC of 2 μg/mL, confirmed by all susceptibility methods used (E-test, broth microdilution, and agar dilution), although it had a point mutation in the gyrA gene, as shown in Table 2 .
For the second isolate, MICs of macrolides, quinupristin-dalfopristin, and telithromycin were higher than those for the first isolate, and a 18-base insert in the sequence of the gene encoding the ribosomal protein L22 was detected. The result, deduced from the corresponding ribosomal protein, was a six–amino acid (RTAHIT) insertion between amino acids T108 and V109 (GenBank accession no. AY140892). The third and fourth isolates, with resistance to macrolides, clindamycin, quinupristin-dalfopristin, and the highest telithromycin MICs of all the isolates, had an A2058G ( Escherichia coli numbering) mutation in the sequence of the gene corresponding to the domain V of the 23S rRNA as well as the 6–amino acid insert in the ribosomal protein L22. The four alleles encoding the 23S rRNA gene had the A2058G mutation. The sequences of the fifth isolate, resistant to macrolide antibiotics and clindamycin, with an intermediate susceptibility to quinupristin-dalfopristin, indicated a mutation at position 2058 of domain V, but no insert was found in the ribosomal protein L22. | Dr. Perez-Trallero is a clinical microbiologist and infectious disease consultant. He is head of the Microbiology Department at Donostia Hospital and assistant professor of Preventive Medicine and Public Health at the Facultad de Medicina at the Basque Country University. His research focuses on antimicrobial resistance and epidemiology of transmissible diseases. | CC BY | no | 2022-01-24 23:39:57 | Emerg Infect Dis. 2003 Sep; 9(9):1159-1162 | oa_package/b1/63/PMC3016791.tar.gz |
||||
PMC3016792 | 14519253 | Materials and Methods
Surveillance Data
Rabies case data for raccoons and skunks for each county by month from 11 states (Connecticut, Delaware, Maryland, Massachusetts, New Jersey, New York, North Carolina, Pennsylvania, Rhode Island, Virginia, and West Virginia) were used for analysis. Only counts of rabid animals per county were used because not all counties reported total numbers of animals submitted for testing. The observation period for each county started when the first case of raccoon or skunk rabies was reported, with a maximum study interval of 20 years (1981–2000) and a minimum of 11 years (1990–2000). The lengthy study interval reduced the reporting variability that may be observed within a county when an epizootic first arrives. The unit of analysis was the number of laboratory-confirmed rabies cases in raccoons and skunks reported per month by county. To identify counties that had an appreciable number of skunks infected with rabies, analysis was restricted to counties that reported at least 12 rabid skunks within 12 months of first detecting rabies in skunks. This average of one rabid skunk per month corresponded to the 90th percentile of all counties reporting at least one rabid skunk, and 36 counties met this criterion. Upon examination, one county (Garrett County) was excluded because its geographic isolation in western Maryland would have severely biased the spatial analysis, and three counties in New York (Clinton, Franklin, and Oswego) were excluded because rabies found in skunks there was the result of spillover from a red fox epizootic emerging from Canada ( 24 ).
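For illustration, the county inclusion rule could be applied to a tidy table of monthly counts as in the sketch below; the column names (county, month_index, skunk_cases) are hypothetical and not taken from the original data files.

```python
import pandas as pd

def counties_meeting_criterion(df: pd.DataFrame, min_cases: int = 12, window: int = 12) -> list:
    """Return counties reporting at least `min_cases` rabid skunks within
    `window` months of their first reported rabid skunk."""
    selected = []
    for county, grp in df.groupby("county"):
        grp = grp.sort_values("month_index")
        with_cases = grp[grp["skunk_cases"] > 0]
        if with_cases.empty:
            continue  # county never reported a rabid skunk
        first = with_cases["month_index"].iloc[0]
        early = grp[(grp["month_index"] >= first) & (grp["month_index"] < first + window)]
        if early["skunk_cases"].sum() >= min_cases:
            selected.append(county)
    return selected
```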
Descriptive Analysis
Rabies epizootics among skunks and raccoons in the 32 counties used for our analyses were identified by using the following algorithm: an epizootic began when the monthly number of rabid animals reported was greater than the county’s monthly median for two consecutive months and ended when this number was less than or equal to the county median for two consecutive months ( 16 ). Additionally, an epizootic had to be at least 5 months in duration. In calculating a county’s monthly median number of rabid animals, months occurring before the appearance of the first rabid animal were excluded. For example, if rabid skunks first appeared in a county on June 1, 1994, then the months before were excluded for calculation of the skunk median. In that same county, if rabid raccoons appeared on December 1, 1993, then the months before were excluded from calculation of the raccoon median. The size and length of epizootics were compared between species by a Wilcoxon rank sum test. The Kolmogorov-Smirnov two-sample (KS) test was used to assess seasonal differences in the number of rabies cases by species.
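The epizootic rule above translates directly into code. The minimal sketch below assumes counts is a chronological list of monthly case counts for one county and species, beginning with the month of the first reported case (so the median is computed over exactly the months the text prescribes); how episode boundaries are fixed at the two-month runs is one possible reading of the rule.

```python
import statistics

def find_epizootics(counts, min_duration=5):
    """Identify epizootics: an epizootic begins when counts exceed the county
    median for two consecutive months and ends when counts are <= the median
    for two consecutive months; episodes shorter than `min_duration` months
    are discarded. Returns (start_index, end_index) pairs."""
    median = statistics.median(counts)
    epizootics, start = [], None
    for i in range(1, len(counts)):
        if start is None and counts[i] > median and counts[i - 1] > median:
            start = i - 1  # epizootic begins with the first above-median month
        elif start is not None and counts[i] <= median and counts[i - 1] <= median:
            if i - start >= min_duration:
                epizootics.append((start, i))
            start = None
    if start is not None and len(counts) - start >= min_duration:
        epizootics.append((start, len(counts) - 1))  # still ongoing at series end
    return epizootics
```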
Temporal Analysis
A series of Poisson regression models were used to further explore the relationship between the number of rabid skunks and rabid raccoons. The outcome variable was defined as the log number of rabid skunks. The predictor variables were the number of rabid raccoons, time (continuous 1–140 months), and calendar month of report. The time variables started at 1 with the appearance of the first rabid animal (skunk or raccoon) and continued for up to a total of 140 months (maximum number of months of observation). All counties had at least 72 months of follow-up, and 50% had more than 107 months of follow-up. To smooth each time series, a moving average of the number of rabid animals was calculated on the basis of the present and previous month’s observations for both species and used for subsequent analyses. A time-squared term, an interaction term of time by number of rabid raccoons and indicator variables for county and calendar month of report, was included in the model. The effect of repeated measures by county was controlled in the analysis by using a generalized estimating equation ( 25 ). Lag periods of 0 to 5 months for the number of rabid raccoons were introduced and assessed to identify any improved fit in the Poisson regression model, as determined by comparing the log likelihood values. The model with the highest (less negative) log likelihood value was chosen as the best fitting model.
The full Poisson regression model can be represented as
$\log(\mathrm{skunks}_t) = \beta_0 + \beta_1\,\mathrm{raccoons}_{t-i} + \beta_2\,t + \beta_3\,t^2 + \beta_4\,(t \times \mathrm{raccoons}_{t-i}) + \sum_j \beta_j\,\mathrm{county}_j + \sum_k \beta_k\,\mathrm{month}_k + \varepsilon$

where t = time in months (starting at 1 with the first appearance of a rabid skunk or rabid raccoon in each county, to a maximum value of 140); i = 0–5, the lag time in months; county_j = 31 indicator variables representing the 32 counties used in the analysis; month_k = 11 indicator variables representing calendar months, with December as the reference group; and ε = residual error.
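As an illustration only, the sketch below shows how a model of this form might be fit today with the GEE implementation in statsmodels. The data frame and its column names (skunks, raccoons, time, month, county) are assumptions for the example, the county effect is handled here only through the GEE grouping rather than by separate indicator variables, and nothing implies the original analysis used this software.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_lagged_model(df: pd.DataFrame, lag: int = 1):
    """Poisson GEE of skunk counts on lagged, smoothed raccoon counts, time,
    their interaction, and calendar month, with county as the repeated-measures
    group (a simplification of the model described in the text)."""
    d = df.sort_values(["county", "time"]).copy()
    # Two-month moving average of the raccoon series within each county ...
    d["raccoons_ma"] = (d.groupby("county")["raccoons"]
                          .transform(lambda s: s.rolling(2, min_periods=1).mean()))
    # ... then lag it by the candidate number of months.
    d["raccoons_lag"] = d.groupby("county")["raccoons_ma"].shift(lag)
    d = d.dropna(subset=["raccoons_lag"])
    model = smf.gee("skunks ~ raccoons_lag * time + C(month)",
                    groups="county", data=d,
                    family=sm.families.Poisson(),
                    cov_struct=sm.cov_struct.Exchangeable())
    return model.fit()
```

Because GEE is estimated by quasi-likelihood, a criterion such as QIC would stand in for the log-likelihood comparison across the candidate lags of 0 to 5 months described above.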
Spatial Analysis
To determine if skunk and raccoon epizootics were associated spatially from 1990 through 2000, the mean center of the counties reporting a rabies case was determined for successive years, 1990–2000, for each species (Crimestat, Department of Justice). The standard deviational ellipse was also calculated, showing the dispersion in two dimensions (Crimestat) of the mean centers. The distance between mean centers of successive years by species was calculated by the Pythagorean theorem. The direction between mean centers was calculated by converting latitudinal and longitudinal coordinates into the Universal Transverse Mercator projections of eastings and northings. On dividing the difference in eastings by the difference in northings, the arctangent was calculated ( 26 ), yielding the angle (degrees) between the mean centers. The angle was then converted to degrees from the reference angle of 0° (true north). The resulting series of vectors ( Figure 2 ) was used to determine if the mean centers by year for each species were moving in a similar direction. The Watson-Williams test ( 27 ) was applied to test for a difference in the angle of rotation between the mean centers of each species from 1990 to 2000. An F test was used to determine if the epizootic direction of spread differed between the species. The cumulative mean direction (rotational angle that summarizes a series of vectors through successive years) and circular variance of the mean centers were also calculated (Crimestat). | Results
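The distance and direction calculations are simple to reproduce. A minimal sketch follows, assuming county centroids have already been projected to UTM eastings and northings in meters; the function names are illustrative.

```python
import math

def mean_center(points):
    """Mean center of (easting, northing) coordinates of case-reporting counties."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def distance_and_bearing(c1, c2):
    """Pythagorean distance and bearing (degrees clockwise from true north)
    between two mean centers of successive years."""
    de, dn = c2[0] - c1[0], c2[1] - c1[1]
    dist = math.hypot(de, dn)
    # arctangent of (difference in eastings / difference in northings),
    # expressed relative to a reference angle of 0 degrees (true north)
    bearing = math.degrees(math.atan2(de, dn)) % 360.0
    return dist, bearing
```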
Descriptive Analysis
Of 495 total counties, 344 (69.5%) reported at least one rabid skunk, and 421 (85.1%) reported at least one rabid raccoon ( Table 1 ). The median number of reported rabid raccoons was greater than that for skunks (p<0.0001). Three hundred thirty-nine counties (68.5%) reported rabies in both skunks and raccoons. Within these counties, rabid raccoons preceded rabid skunks in 297 counties (87.6%), rabid skunks preceded rabid raccoons in 30 counties (8.8%), and rabid skunks and raccoons were first reported in the same month in 12 counties (3.5%). The median interval between the initial appearance of rabid raccoons and skunks was 14 months and ranged from –108 months (i.e., rabid skunks preceding rabid raccoons) to 177 months.
Of 344 counties with at least one reported rabid skunk, 36 counties had at least 12 rabid skunks appearing in the first 12 months after the first rabid skunk appeared. Four counties were omitted for reasons described in the Methods section. In these 32 counties used for more detailed analysis, rabid raccoons preceded skunks in 30 (93.8%) counties, rabid skunks preceded raccoons in 1 (3.1%) county, and both skunks and raccoons appeared in the same month in one county (3.1%). The median interval between the appearance of rabies in raccoons and skunks was 5 months, and ranged from –2 months to 13 months. In the four omitted counties, rabid raccoons preceded skunks in two counties, and rabid skunks preceded raccoons in the remaining two counties.
For all 32 counties, the peak number of rabid raccoons reported was reached by 21 months; the median interval from the first case to the peak number of cases was 10.5 months. In contrast, the interval from the first to the peak number of skunks ranged from 6 to 90 months, with a median interval of 16.5 months (p<0.001). The calendar month when the peak was reached did not exhibit a pattern for rabid raccoons, whereas for rabid skunks there was a strong tendency for the peak to be reached in the last quarter of the year.
Analysis of epizootic characteristics found differences between the first epizootics of each species ( Table 2 ). The first raccoon epizootic was significantly larger (median=126; range 9–494, p<0.0001) than subsequent epizootics among raccoons and also significantly greater than the first skunk epizootic (median=16; range 4–85, p<0.0001). However, after the first epizootic, the epizootics converged and characteristics did not differ, with the exception of the third epizootic, in which the duration and magnitude were lower for skunks than for raccoons. In general, the size of subsequent epizootics among raccoons showed damped oscillations, while skunk epizootics appeared uniform.
Temporal Analysis
Overall, a significant relationship existed between the number of rabid raccoons (RACCOON) and the number of rabid skunks (SKUNK) ( Figure 3 , Table 3 ). Specifically, a significant interaction existed between time and RACCOON on SKUNK with the effect of RACCOON on SKUNK increasing with increasing time. The fit of the models improved significantly when a 1- or 2-month lag for RACCOON was used to predict SKUNK; however, the lag of 1 month provided the best fit. The time-squared term was not significant and was dropped from subsequent models. A month by RACCOON interaction was also tested and did not significantly improve the model fit. At the beginning of the time series, the peak in SKUNK coincided with the larger peak in RACCOON. In the period of approximately 25 months to approximately 50 months, the sharp reduction in RACCOON and coincident reduction in SKUNK was well below model predictions. A second peak in SKUNK at approximately 55 months and 70 months coincided with the increase in RACCOON associated with a second epizootic among raccoons. The model predictions reflected this increase, but the predicted SKUNK fell below the actual SKUNK for the peak months. In addition to a positive correlation with RACCOON over time, SKUNK displayed a strong seasonal component with annual peaks occurring in the fall months ( Figure 4 ). These strong seasonal peaks were unique to skunks, and not present in raccoons (KS 10.6; p<0.0001).
Among the 32 counties used for Poisson regression analysis, four counties in Massachusetts exhibited a general increase in the number of skunks over time (Essex, Middlesex, Norfolk, and Plymouth). A separate model was fit to determine potential differences between these counties and the 28 other counties that did not exhibit a significant increase in rabid skunks over time. The modeling results did not differ among these counties compared with those for the other 28 counties.
Spatial Analysis
As determined by previously described methods that used vectors ( Figure 2 ), the mean centers of the counties first reporting rabies in both species were in Maryland in 1990 ( Figure 5a ), and Virginia/West Virginia in 2000 ( Figure 5b ). The mean direction and distance traveled of the skunk and raccoon epizootics were similar. The mean centers of the epizootics from 1990 to 2000 moved an average of 339.3 km for skunks and 368.2 km for raccoons in a southwesterly direction. Application of the Watson-Williams test resulted in no significant difference between the angles of rotation of successive epizootics [F 1,18 = 0.11(F 1,18;0.95 <4.41, n.s.)], indicating that the mean centers of the skunk and raccoon epizootics were moving in a similar direction. The cumulative mean directions of the epizootics from 1990 to 2000 were 42.06° ± 0.23 for skunks, and 47.76° ± 0.28 for raccoons. | Discussion
This study examined the relationship between the occurrence of rabies in skunks and raccoons in the eastern United States. The present analysis indicated epizootic cycles of 4–5 years for raccoons and skunks, consistent with previous studies of rabies in raccoons ( 16 , 17 ) in this region. In comparison, studies of the epidemiology of skunk-variant rabies among skunks from the Midwest reported epizootic cycles with periods ranging from 4 to 5 years ( 21 ) to 6 to 8 years ( 20 , 28 ). If raccoon-variant rabies virus becomes established in eastern skunk populations, the periods of epizootic cycles in skunks may subsequently decouple from those of raccoons so that independent cycles among skunks may be observable. However, differences in the periodicity of cycles between these species may be caused by many factors, including differences between the variants, resulting in changes in incubation period, transmission potential, and duration of disease.
The spatial analysis performed in this study indicated that skunk rabies epizootics in the eastern United States are closely coupled to epizootics in raccoons. These epizootics moved in similar directions and traveled similar distances as they progressed along the eastern seaboard. The mean centers of epizootics in each species originated near Maryland and by 2000 were situated near the Virginia/West Virginia border ( Figure 5a,b ). The southwesterly movement is of concern as the raccoon epizootic encroaches on areas in the Midwest, where the skunk virus variant predominates.
The Poisson regression analysis showed a statistically significant association between the number of rabid skunks and raccoons through time. The association was weakest during the first months, apparently due to the large number of rabid raccoons that are characteristic of initial rabies epizootics in raccoons ( 16 ). After the initial peak in numbers at approximately 15 months, both species exhibited a secondary peak at 60 months, consistent with the 4–5 year cycle ( 16 , 17 ) for raccoon epizootics in the eastern United States. After 80 months, the number of peaks in both species diminished in size and increased in frequency, with the rabies cases in skunks maintaining a strong seasonal component.
The comparison of epizootic characteristics by species also found that the size and duration of epizootics in both species converged after the first epizootic. Of note, however, were the four counties in which rabies cases in skunks outnumbered those in raccoons near the end of our study period. In these counties, rabies cases in skunks became less sporadic: cases were regularly reported throughout the year, but the annual peaks in the fall months remained. As surveillance continues for these four counties, current observations suggest that skunks may be acting as important secondary hosts of the raccoon rabies virus variant in certain geographic areas of the eastern United States and that independent cycles could potentially emerge.
Although we varied the lag time between raccoon and skunk rabies from 0 to 5 months, the best-fitting regression model resulted from a 1-month lag. This lag time is consistent with the generally accepted incubation period for rabies of 3–8 weeks ( 29 ), which would permit at least one cycle of virus multiplication among raccoons before transmission from raccoons to skunks. The regression model also showed that reports of rabid raccoons remained fairly constant by month throughout the year. In contrast, the number of rabid skunks showed an independent seasonal pattern that consistently peaked during the fall months ( Figure 4 ). In the Midwest, where rabies is endemic in skunks, the major peak is in late winter and early spring, with a smaller peak in the fall ( 20 , 28 ). The winter and spring peaks have been attributed to the breeding season, and the fall peak to dispersal of juveniles ( 23 ). Why a dominant fall peak is apparent in the eastern states is not clear at this time. However, during dispersal, skunks may have increased contact with raccoons, thereby increasing the risk for transmission of rabies. The absence of a spring peak may indicate little to no transmission between skunks in communal winter dens and during the breeding season.
Skunks and raccoons coexist within the same geographic areas in different ecologic niches. Raccoons are social animals that are capable of existing in fairly high densities in close proximity to human habitation and prefer forested habitats ( 4 , 5 , 15 ). Skunks are rather solitary animals and are found in lower densities than raccoons ( 23 ). Skunks prefer grasslands ( 21 ), agricultural areas ( 30 ), and interfaces between agricultural and nonagricultural lands ( 22 ). These characteristics would suggest that contact between the two species should occur less frequently than among those of the same species. However, since rabies affects the central nervous system, rabid animals may exhibit aberrant behaviors, leading to increased contact between the species and cross-species transmission of the virus.
Monitoring rabies among skunks in regions where the raccoon rabies virus variant circulates has important implications for public health intervention programs. To control the spread of the raccoon rabies epizootic, an oral rabies vaccine-baiting program has been implemented in several states ( 7 – 9 , 31 ) after the successful development of a vaccinia virus recombinant vaccine expressing the rabies virus glycoprotein gene (V-RG) for raccoons ( 6 , 32 ). The oral vaccine has been effective in raccoons. However, as formulated for raccoons, it has not been proven to be as effective for preventing rabies infection in skunks ( 10 ). Administration of intramuscular rabies vaccines has been shown to be effective in controlling rabies in skunks ( 5 ), but this method is labor-intensive and cost-prohibitive. The emergence of independent maintenance or cycling of raccoon-associated rabies virus within skunks would necessitate the development of alternative strategies to control rabies within wildlife populations. At least one vaccine candidate ( 33 ) designed for skunks has been identified but will require further development to control rabies in this species and to prevent spillover of rabies back into the raccoon population.
Currently, we have no evidence that the raccoon rabies virus variant is cycling independently in the skunk population of the eastern United States or that the variant has undergone any genetic adaptations among skunks. However, epizootic rabies in skunks was first reported in 1990 and, with expected epizootics cycling every 4–5 to 6–8 years, it may be too soon to detect decoupling of rabies cycles in skunks and raccoons. Surveillance and monitoring must continue through several cycles to further evaluate additional epizootics for changes in patterns. Additionally, scant information exists on the population densities and behavior patterns of skunks and raccoons in the eastern United States. Field investigations to assess the incidence of rabies in wildlife populations have rarely been conducted. Further research is needed to evaluate environmental factors that can affect the population density and structure, the behavior of both raccoons and skunks, and factors influencing interactions between them. Finally, the genetics of the raccoon rabies virus variant should be monitored for changes that might indicate cross-species adaptation after spillover into skunks. Assessment of these changes and continued surveillance can provide important guidelines to ensure the success of oral rabies vaccination programs for the control of rabies in wildlife and to decrease the risk of acquiring rabies among the human and domestic animal populations. | Since 1981, an epizootic of raccoon rabies has spread throughout the eastern United States. A concomitant increase in reported rabies cases in skunks has raised concerns that an independent maintenance cycle of rabies virus in skunks could become established, affecting current strategies of wildlife rabies control programs. Rabies surveillance data from 1981 through 2000 obtained from the health departments of 11 eastern states were used to analyze temporal and spatial characteristics of rabies epizootics in each species. Spatial analysis indicated that epizootics in raccoons and skunks moved in a similar direction from 1990 to 2000. Temporal regression analysis showed that the number of rabid raccoons predicted the number of rabid skunks through time, with a 1-month lag. In areas where the raccoon rabies virus variant is enzootic, spatio-temporal analysis does not provide evidence that this rabies virus variant is currently cycling independently among skunks.
Keywords: | In North America, variants of rabies virus are maintained in the wild by several terrestrial carnivore species, including raccoons, skunks, and a number of bat species. Each antigenically and genetically distinct variant of the virus in mammalian species occurs in geographically discrete areas and is strongly associated with its reservoir species ( 1 ). Within each area, a spillover of rabies into other species occurs, especially during epizootics ( 2 ). As a result of spillover, a variant may eventually adapt to a secondary species, which may begin to serve as an alternative reservoir species. This phenomenon of spillover and cross-species adaptation has been inferred from historical relationships ( 2 ) but is poorly understood and not routinely investigated.
In the late 1970s, an epizootic of raccoon rabies was reported on the Virginia/West Virginia border attributed to the translocation of raccoons from the southeastern United States ( 3 ). This epizootic has spread northward and southward throughout the eastern United States ( Figure 1a,b ). The establishment of rabies in this species has raised public health concerns about an increased risk for rabies transmission to the human population because the raccoons are well adapted to living at unusually high densities in urban and suburban environments ( 4 , 5 ). As a novel potential control method, several states have initiated raccoon vaccination programs using an oral rabies vaccine ( 6 – 9 ).
Beginning in 1990, a concomitant increase in the number of cases of skunks infected with the raccoon rabies virus variant has occurred in these states ( Figure 1c,d ). Additionally, these cases appeared to be preceded by cases in raccoons, both temporally and spatially. Moreover, in a growing number of counties in Massachusetts and Rhode Island, the number of rabid skunks has surpassed the number of rabid raccoons. Whether the increasing number of cases in skunks is a result of spillover from raccoons or the raccoon rabies virus variant has begun to circulate independently within the skunk population remains unclear. The establishment of an independent cycle of rabies in the skunk population may have serious consequences for rabies vaccine baiting programs because the current oral vaccine for raccoons is not as effective in skunks ( 10 ).
The epizootiology of raccoon rabies in the eastern United States has been investigated in several states, including Virginia ( 11 , 12 ), Connecticut ( 13 ), and Maryland ( 14 , 15 ). Models have been developed to describe the spatial and temporal patterns of raccoon rabies epizootics ( 16 – 18 ). Several studies have also described the behavior of skunk rabies epizootics in western North America ( 19 – 21 ), Texas ( 22 ), and Canada ( 23 ). The existing raccoon and skunk rabies studies show that epizootic patterns appear to differ between skunks and raccoons, possibly because of differences between the species, rabies virus variants, or environmental factors. However, no documented studies exist on the relatively recent increase of rabies in skunks caused by the raccoon rabies virus variant in the eastern United States. In light of the recent efforts to implement rabies control programs for raccoons in the eastern United States, the epizootiology of raccoon rabies virus variant occurring in skunks in this part of the country needs to be better understood.
The objectives of this study were to describe the epizootiology of skunk rabies in the eastern United States, determine if skunk and raccoon rabies epizootics are associated spatially and temporally, and introduce methods to assess evidence of spillover of rabies from raccoons to skunks compared with independent cycling of the virus within the skunk population. | Acknowledgments
The authors thank Kim Burkhardt and Wade Ivy, III for their assistance in data management and William Thompson and John O’Connor for reviewing the manuscript.
Dr. Guerra was an Epidemic Intelligence Service officer in the Viral and Rickettsial Zoonoses Branch, Division of Viral and Rickettsial Diseases, and is now a veterinary epidemiologist at the Division of Global Migration and Quarantine, National Center for Infectious Diseases, Centers for Disease Control and Prevention. She is interested in the epidemiology of zoonotic diseases and geographic information system applications. | CC BY | no | 2022-01-24 23:41:34 | Emerg Infect Dis. 2003 Sep; 9(9):1143-1150 | oa_package/60/37/PMC3016792.tar.gz |
||
PMC3016793 | 14519238 | Conclusion
We describe an automated outbreak detection system that uses laboratory data electronically collected in the Netherlands by ISIS. The system assesses data as soon as they are made available and disseminates the information by means of the Internet to all involved health professionals to help in the rapid interpretation of, and subsequent action to control, any suspected outbreak. Much still needs to be done, and efforts are now concentrated on increasing the data available to ISIS, evaluating the system, and making subsequent modifications, with the aim of having a flexible, automated outbreak detection system for all laboratory-reported pathogens in the Netherlands by 2006.
Keywords: | Rapid detection of outbreaks on a time scale compatible with disease incubation periods is recognized as crucial to maximize the effect of control measures. Most outbreaks are rapidly detected and controlled locally. However, outbreaks involving cases over a wider area or in several local health jurisdictions may have only few local cases and thus be easily missed, especially if the outbreak has a slowly rising number of cases. Outbreaks of certain pathogens with common signs and symptoms (e.g., gastroenteric disease) can also be missed. The role of national laboratory data in detecting such outbreaks has been increasingly recognized in the last few years as modern typing techniques give more precision on the pathogen type and subtype, routinely unearthing outbreaks by linking cases either locally, nationally, or internationally ( 1 – 4 ) that otherwise would probably not be detected. In addition, surveillance of a wide range of pathogens is essential in identifying emerging disease threats ( 5 , 6 ). The increasingly perceived threat of bioterrorism recently has made more urgent the need for rapid detection of increases in laboratory diagnoses of common and uncommon pathogens to complement clinician-based reporting systems. Increasing computational power in the last 10 years has resulted in the development of mathematical algorithms to routinely and rapidly detect significant clusters within large amounts of surveillance data ( 7 – 12 ). Automated electronic laboratory reporting is frequently promoted to improve data quality and timeliness of collection ( 13 ). More recently, the general availability of the Internet permits feedback to many users, who can have continuous, simultaneous, and even interactive access to information. The Internet allows for immediate communication of signals of possible outbreaks to relevant professionals for interpretation and action.
In the Netherlands, these developments have led to the implementation of an automated laboratory-based surveillance system integrated with the Internet in a project named the Infectious Disease Surveillance Information System (ISIS). We describe the development of an automated outbreak detection system within ISIS for all laboratory-reported pathogens in the Netherlands. The system is updated daily with Web-based feedback.
Overview of National Laboratory Surveillance
In the Netherlands, >90% of the 76 microbiologic laboratories are associated with public hospitals; <10% are private laboratories not associated with hospitals. Other than the 10 notifiable infectious diseases, microbiologic laboratories have no legal requirement to provide data for surveillance. Since 1994, ISIS has collected anonymous positive and negative test results on over 350 pathogens directly from voluntarily participating laboratories on a daily basis in a fully automated system that uses electronic data interchange. The raw information is then processed by applying a set of criteria based on the diagnosis of a particular infection. Laboratory results are thus combined into surveillance diagnoses by the removal of results of duplicate testing of the same case by the same or a different microbiologic technique and then classified by the type of infection. Surveillance diagnoses are then presented as feedback on a password-protected Internet site within 24 hours. At present, information on 40 of the 350 pathogens is presented on this site ( Table ) (available with password at http://www.isis.rivm.nl ). Currently, 11 laboratories located throughout the country are connected to ISIS, covering 16% of the total Dutch population of 16 million. The coverage of each laboratory is calculated from the coverage of each hospital exclusively served by that laboratory, which in turn is determined by a national organization that calculates the government subsidy to each hospital. One laboratory (the National Institute for Public Health and the Environment [RIVM]) is also the national reference laboratory for Salmonella , Escherichia coli , and Mycobacterium tuberculosis , for which the coverage is much higher. The coverage of the Salmonella reference laboratory, for example, is estimated to be 64% of the national population. Since 1996, an algorithm has been used to detect outbreaks in the surveillance data resulting from Salmonella (sub)typing ( 14 ).
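The deduplication step just described can be illustrated with a short sketch. The following Python fragment is only an illustration; the table layout, the column names, and the 28-day episode window are assumptions, not the actual ISIS implementation.

```python
import pandas as pd

def to_surveillance_diagnoses(results: pd.DataFrame, episode_days: int = 28) -> pd.DataFrame:
    """Collapse raw laboratory results into one surveillance diagnosis per
    patient/pathogen episode, discarding duplicate tests of the same case
    performed with the same or a different microbiologic technique."""
    results = results.sort_values("test_date")  # test_date must be a datetime column
    kept = []
    for _, group in results.groupby(["patient_id", "pathogen"]):
        last_date = None
        for _, row in group.iterrows():
            # A result within `episode_days` of the last kept result counts as
            # duplicate testing of the same case and is dropped.
            if last_date is None or (row["test_date"] - last_date).days > episode_days:
                kept.append(row)
                last_date = row["test_date"]
    return pd.DataFrame(kept)
```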
Apart from ISIS, two other systems collect laboratory data. Fifteen regional public health laboratories provide a weekly report of aggregated data of positive diagnoses for nine bacterial pathogens. These same laboratories and two other laboratories form a network of 17 virologic laboratories that report weekly aggregated numbers of positive diagnoses of 37 virologic pathogens. Four of the 15 public health laboratories contribute data electronically to ISIS.
Design of the Outbreak Detection System
The overall objective of the system was the automated detection of an unexpected national increase of any one pathogen reported by laboratories in a determined period, for feedback to all interested parties by means of the Internet, followed by interpretation and communication to relevant authorities for decisions on control to be taken. The system thus comprises three components: detection of clusters in time or unusual disease events (e.g., one case of rabies) and signal generation; feedback of the signals on the Internet to relevant professionals; and interpretation of signals on a weekly basis with communication to relevant authorities.
Cluster Detection and Generation of Signals
Approach
Our approach was to design a system to detect outbreaks that otherwise would probably be missed altogether and detect more rapidly the outbreaks that would also probably be eventually detected by other means. We designed the system with sensitivity and timeliness as the priority features, especially since small increases in laboratory data often indicate larger communitywide outbreaks. Sensitivity in this context would be defined as the proportion of relevant outbreaks detected among all relevant outbreaks. Clearly, this distinction depends on how “relevant” is defined. All relevant outbreaks, however, should include those outbreaks of public health importance that are missed by conventional means; therefore, the denominator will always be unknown. Thus, absolute sensitivity of the automated system will be impossible to calculate. The system, however, can be designed to maximize sensitivity and detect more outbreaks than other mechanisms such as clinical observation, without resulting in an unmanageable number of signals. The system was also intended to be more timely, detecting the same outbreaks as other mechanisms but more quickly. The specificity of the system was considered less important in the initial phase, since false-positive results could be filtered out when signals were interpreted.
We also decided that the system should be sensitive enough to detect even one case of certain critical infectious diseases (e.g., hantavirus infection) or unusual infections of current interest (e.g., hepatitis E virus infection), which might indicate an outbreak, and to detect expected seasonal increases of diseases caused by selected pathogens (e.g., influenza) as they occur. This design would allow for rapid action to verify the signal and institute case-finding or put in place certain public health measures (e.g., prompting nursing homes to vaccinate residents against influenza).
Generation of Signals
Signals generated by the system are produced by comparing observed values with a predefined threshold value. Threshold values are calculated from values expected from historical data (for most pathogens) or are fixed, user-defined thresholds, set by epidemiologists for detecting seasonal increases or monitoring critical pathogens.
Algorithms Using Historical Data
Several algorithm types applied to outbreak detection have been described in the literature, based either on Cumulative Sums ( 12 , 15 ), linear regression ( 7 ), or Fourier regression and autocorrelative models such as Box-Jenkins ( 8 , 9 ). Fourier analysis and autocorrelative methods require model building or the setting of many parameters, processes considered too labor-intensive for a generic algorithm for all types of pathogens. We decided to base the ISIS system on the algorithm currently run each week on Salmonella data, which has been successfully detecting outbreaks since 1998 in the Netherlands ( 14 , 16 – 20 ) but is not automatic and requires an operator to periodically update data. The algorithm is a simple linear regression model, adjusted for seasonality, secular trends, and past outbreaks in a similar manner as described by Farrington et al. ( 7 ) and requires little parameter resetting or model checking. Briefly, to calculate an expected total value for the current epidemiologic week, a regression line is plotted through the totals in the nine epidemiologic weeks centered on the same epidemiologic week in the previous 5 years. For example, to calculate an expected value for week 20, a regression line is plotted through the values at weeks 16–24 of the previous 5 years.
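As a concrete illustration of this calculation, a minimal Python sketch follows. It is not the authors' code; the counts structure and the handling of year boundaries (weeks are not wrapped across year ends here) are simplifying assumptions.

```python
import numpy as np

def expected_count(counts, year, week, half_window=4, n_years=5):
    """counts maps (year, epidemiologic_week) -> weekly total.
    Fits a regression line through the nine-week windows centered on `week`
    in each of the `n_years` previous years and returns the expected value
    for (year, week) together with the residual standard deviation."""
    xs, ys = [], []
    for y in range(year - n_years, year):
        for w in range(week - half_window, week + half_window + 1):
            if (y, w) in counts:  # real code would wrap weeks across year ends
                xs.append((y - year) * 52 + (w - week))  # weeks before the target
                ys.append(counts[(y, w)])
    slope, intercept = np.polyfit(xs, ys, 1)  # simple linear regression
    residuals = np.array(ys) - (slope * np.array(xs) + intercept)
    return intercept, residuals.std(ddof=2)  # expected value at x = 0, residual SD
```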
To maximize sensitivity we decided, after preliminary testing with Salmonella data, on two variations of the same algorithm, using two different window periods. The first is a 7-day total calculated daily. This variation is based on an algorithm that calculates expected week totals of a certain pathogen and a threshold value of 2.56 standard deviations from the mean (equivalent to a 99% confidence interval). A 7-day window advances day-by-day as new data enter the system and a new 7-day observed total is calculated daily and compared with the expected value for that epidemiologic week (Monday to Sunday). If the observed total is over the threshold, a signal is generated. The second algorithm variation is a 4-week total calculated daily. Each week, this algorithm calculates an expected total of the previous 4 weeks and a 99% threshold value. A 4-week window advances day-by-day and a new 4-week observed total is compared with the expected total for the four epidemiologic weeks ending with the current week.
Most outbreaks would be detected in a timely manner by the 7-day total system. However, comparison of the two algorithms using Salmonella data has shown that small sustained increases lasting >1 month would be missed by a 7-day total system, since the threshold value would not be exceeded in any one 7-day period. Including a 4-week total algorithm in the system produces 10% extra signals of outbreaks with slowly increasing numbers of cases, which otherwise would not be detected.
If the 4-week total is <5, or the 7-day total is <3, no signals are generated, even if above threshold. Though reducing sensitivity, this cutoff greatly reduces the number of signals of sporadic cases of infrequent infections that are of little public health significance. The system uses the date the sample was taken for calculation of observed and expected totals since for any one pathogen a variable delay between date of disease onset and date of reporting of result to ISIS is likely. In the case of an outbreak, the use of date of reporting for surveillance would result in a lower peak number of cases spread over a longer period (a “smeared” epidemiologic curve), reducing the sensitivity of the system. Using date of sampling entails retrospective examination of data to ensure that data reported in 1 week and plotted by date of sampling do not produce a signal in weeks previous to reporting. The “look-back” period (i.e., the period of retrospective examination) has been set at 10 epidemiologic weeks. This window allows enough time for most pathogens to be sampled, tested, and reported. New signals of an excess of cases at time of sampling >10 weeks previous to reporting are unlikely to signify unrecognised outbreaks that can still be investigated and controlled.
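Putting the regression estimate together with the two window totals, the 99% thresholds, and the minimum-count cutoffs just described, the daily check could look roughly as follows (a sketch under the same assumptions, reusing expected_count() from the earlier fragment):

```python
def check_signals(observed_7d, observed_4w, expected_7d, expected_4w):
    """expected_7d and expected_4w are (expected, sd) pairs, e.g. from
    expected_count(); totals below the minimum counts are never signaled."""
    signals = []
    exp7, sd7 = expected_7d
    if observed_7d >= 3 and observed_7d > exp7 + 2.56 * sd7:   # 99% threshold
        signals.append("7-day total above threshold")
    exp4, sd4 = expected_4w
    if observed_4w >= 5 and observed_4w > exp4 + 2.56 * sd4:
        signals.append("4-week total above threshold")
    return signals
```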
User-Defined or Fixed Threshold
Algorithms depending on automated evaluation of historical data are often unreliable in detecting seasonal increases in pathogens whose seasonality shifts. A flexible, user-defined, fixed threshold was chosen to detect such increases in selected pathogens. For instance, with present historical data on respiratory syncytial virus, 10 positive laboratory results in any epidemiologic week have always indicated the beginning of the epidemic season. Thus, the threshold for that virus is set at 10 positives in a 7-day period. Some pathogens (e.g., hantavirus) have been defined as zero-tolerance, where one positive result is considered worth a signal. Although such cases are often communicated faster by other means, in some of these situations the system can be considered as a backup.
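Such user-defined rules amount to a small lookup table. A hypothetical sketch follows; the pathogen names and values are illustrative, not the actual ISIS settings:

```python
FIXED_THRESHOLDS = {
    "respiratory syncytial virus": {"window_days": 7, "min_positives": 10},
    "hantavirus": {"window_days": 7, "min_positives": 1},  # zero-tolerance pathogen
}

def fixed_threshold_signal(pathogen, positives_in_window):
    """Return True if the count of positives in the window reaches the rule."""
    rule = FIXED_THRESHOLDS.get(pathogen)
    return rule is not None and positives_in_window >= rule["min_positives"]
```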
Data Used
At present, signals are generated from both the Salmonella database (data from the national reference laboratory stored in ISIS but not processed into surveillance diagnoses and presented only internally) and the database of 38 surveillance diagnoses (data on pathogens stored in ISIS and processed into surveillance diagnoses for Internet feedback). Signals are generated from the Salmonella database with algorithms that use historical data and from the surveillance diagnoses database with both user-defined and algorithm-defined thresholds ( Figure 1 ).
Internet Feedback of Signals
Currently, signals from the Salmonella database are presented only on an internal RIVM site. Signals are listed and incidence by municipality mapped. The signals generated from surveillance diagnoses, however, are available on the Internet for all local health authorities, Ministry of Public Health staff, and all registered microbiologists to access. The signals are presented first in a table ( Figure 2 ) that displays, for each signaled pathogen, the week in which the increase occurred (by week of sampling), the type of algorithm used, and the epidemiologic week in which the signal was generated.
Signals remain in this table for one epidemiologic week after they are signaled. For each signaled pathogen, a link can be made to a graph showing the observed and threshold values for the previous 2 years. Historical signals by week of signaling are also listed on the site. Age and sex breakdown of all cases of a pathogen in the previous 4 weeks can be compared with that of all data, giving an idea of which age group or sex may be affected in an outbreak. Those with access to the site can also subscribe to automatically receive an email whenever a new signal is generated.
Signal Interpretation and Action
Signals that were produced during the previous 7 days are interpreted formally on a weekly basis in a meeting of members of RIVM and the National Co-ordination Centre for Communicable Disease Outbreak Management. Since 1999, this group has interpreted all signals of potential national importance, from informal and formal sources. In addition, the algorithm-generated signals are monitored on a daily basis. The accessibility of the site allows input from many other health professionals who can contact ISIS should they have some information to help interpret any signal.
Every week a meeting report is written and disseminated to all 46 regional health authorities as well as to the Ministry of Health and other interested parties. The investigation and control of outbreaks within one area is the legal responsibility of that area’s health authority. For outbreaks that span one or more health authorities, the RIVM coordinates and supports investigation, while RIVM and the National Co-ordination Centre for Communicable Disease Outbreak Management coordinate implementation of control measures.
The early-warning system was implemented in January 2002. In early March 2002, the system signaled an increase in diagnoses of syphilis. This increase was subsequently found to represent a sustained outbreak of syphilis that had begun the previous year in a large Dutch city. The outbreak was subsequently investigated, and prevention strategies were implemented ( 21 ) ( Figures 2 and 3 ).
Limitations
This system is designed to complement, not replace, any conventional methods of outbreak detection (e.g., clinician-based surveillance of notifiable diseases). Laboratory-based surveillance will be less timely and sensitive than conventional methods in detecting many local outbreaks of disease, particularly those clearly associated with a certain setting, and in detecting many widespread outbreaks of disease with unusual signs and symptoms (e.g., acute flaccid paralysis in a polio outbreak). Local outbreaks may also be more rapidly detected from local, not national, laboratory data. In addition, though expansion is planned, many laboratories are likely never to participate in ISIS, limiting the coverage of the electronic system.
Analysis of large amounts of laboratory data will likely signal many clusters of no significance, and the work generated in interpreting signals meaningfully may be overwhelming and so mask true signals. Thoroughly evaluating and adjusting parameters such as the minimum number required to trigger a signal may be required to prevent this, but at the cost of losing sensitivity. Conversely, the ability to detect clusters of commonly reported pathogens that are not routinely subtyped (e.g., Campylobacter ) will always be limited because the signal will likely be smaller than the variability of the large amount of data routinely submitted. One solution to this problem is to apply the algorithm to subsets of reduced amounts of data on common pathogens such as data collected by a group of regional laboratories.
Future Work
Evaluation of the System
The ISIS outbreak detection system needs to be evaluated to demonstrate a clear advantage over conventional means for detecting outbreaks of infection of all types of pathogens, not just salmonellae (for which the algorithm has already proved its usefulness). The sensitivity and timeliness of algorithms in other outbreak detection systems, relative to a variety of standards such as formal records of investigated outbreaks or informal epidemiologic judgment, have been assessed retrospectively ( 11 , 12 ). However, no records of investigated outbreaks in the Netherlands exist, and the minutes from the signals meeting have only recently been put in a format that allows easy interpretation of signal outcome. In addition, retrospective analysis does not allow evaluation of either the extra sensitivity or the specificity of an algorithm. This limitation exists because any signals from historical data produced by the algorithm, and not detected by other means, are classified as false positives, when many may have been genuine. Nonetheless, some idea of the value of the algorithm is given by the fact that since 1998, no national outbreak of Salmonella has been detected by means other than by the Salmonella outbreak detection system. Additionally, the feedback on the Internet and comments from the public health community are important factors that affect the sensitivity, specificity, and timeliness of the whole system since they will impact the eventual interpretation of a signal.
ISIS will, therefore, be evaluated prospectively at the weekly signal meeting, comparing signals detected by the algorithm to signals detected by other means. This comparison will allow assessment of the following: 1) how many signals detected by the algorithm are not of public health interest, as decided in the weekly meeting (a measure of specificity), and 2) the number of relevant signals detected by other means that should have been detected by the algorithm (a measure of relative sensitivity and timeliness). Assessing the number of outbreaks that the algorithm detects that would not have been detected otherwise will not be possible, since once a signal is detected by algorithm it can never be known with certainty that it would not have been detected later by other means. However, if the first detection of a signal is by algorithm, this will give some measure of timeliness of the system.
Expansion
At present, 40 surveillance diagnoses in ISIS are available for use in the automated outbreak detection system. Much of the incoming data is not yet formatted for daily signal generation and feedback as described. A priority, therefore, is to adapt the system to directly analyze raw data (those not processed as surveillance diagnoses) on the other 300 pathogens currently collected, and, in particular, to make the current Salmonella outbreak detection system part of the automated ISIS. By 2004, a total of 25 laboratories are scheduled to be connected, increasing the coverage of the system for all pathogens to at least 35% of the Dutch population. We also hope that regional health authorities will eventually have access to their own Web page, presenting the results of applying the algorithms to their data. This improvement would allow smaller regional outbreaks of common pathogens to be detected. | Acknowledgments
We thank the laboratories participating in the Infectious Disease Surveillance Information System and the Ministry of Health for their support of this project and T. Grein for critical review of the manuscript.
Marc-Alain Widdowson was funded by European Programme for Intervention Epidemiology and Directorate-General V of the European Commission.
Dr. Widdowson is a veterinary public health epidemiologist now based at the Centers for Disease Control and Prevention. He is responsible for the foodborne virus epidemiology program, with a particular focus on Norwalk-like viruses. His other research interests include all aspects of zoonotic infections. | CC BY | no | 2022-01-24 23:50:19 | Emerg Infect Dis. 2003 Sep; 9(9):1046-1052 | oa_package/ec/fd/PMC3016793.tar.gz |
||||
PMC3016794 | 0 | Brower and Chalk, authors of The Global Threat of New and Reemerging Infectious Diseases: Reconciling U.S. National Security and Public Health Policy, describe their book’s purpose as examining “the changing nature of security” and focusing on “the threat of infectious diseases.” There are many examples in today’s world where the intersection of threats to public health and national security should direct the attention of policymakers, security and public health strategists, and the systems that support each toward an organized response.
The authors use two case studies: HIV/AIDS in South Africa and the U.S. public health response system. The first case, in South Africa, illustrates how a single microbial agent can undermine the economic, social, and medical underpinnings of a developed country. The second study shows the negative effect of newly emerging diseases such as HIV/AIDS, Hantavirus infection, West Nile virus infection, Creutzfeldt-Jakob disease, and intentionally released agents ( Bacillus anthracis ). This study demonstrates how events can overload the public health response system and weaken public confidence in its government. The reader can easily conclude that the intersection of disease and national security can be dangerously destabilizing and seriously undermine a nation’s social, economic, and political order. The current outbreak of severe acute respiratory syndrome reiterates the global nature and warp speed of emerging infections.
In their summary and conclusions, the authors provide recommendations for policymakers addressing both public health and security issues. The thrust of the authors’ conclusions is to push policymakers and strategists to actions that strengthen the infrastructure of a public health response system and broaden the traditional definition of national security to include the impact of naturally occurring and intentionally released microbial agents.
The authors present a compelling case study for HIV/AIDS in South Africa, where an emerging disease has gone unchecked and is having a devastating effect on a developed country. The case study of the U.S. public health response system is interesting and thoughtfully presented but lacks sufficient and carefully documented detail to aid the reader in drawing conclusions and formulating solutions. Unsubstantiated or incorrect examples also detract from the overall presentation of this case study. For instance, the contention that lack of good communications with area physicians and hospitals resulted in the deaths of postal workers in the fall 2001 anthrax crisis is not supported by the authors’ reference or by any other authoritative materials known to this reviewer. In the public health response case study, the authors provide broad recommendations aimed at strengthening the public health infrastructure. Also included is an excellent summary of the current status of efforts begun in the mid-1990s in the United States to address the infrastructure of public health. The recommendations are presented in such a way that the shortcomings of the system can be addressed in critical areas, including a well-trained public health workforce; interagency coordination; private sector, hospital, and emergency response integration in public health; technical and educational interventions; and domestic and global investment in public health.
Brower and Chalk’s book is a powerful and useful argument for the urgent need to integrate and streamline public health and national security strategies. | CC BY | no | 2022-01-31 23:42:44 | Emerg Infect Dis. 2003 Sep; 9(9):1189-1190 | oa_package/67/56/PMC3016794.tar.gz |
|||||||
PMC3016795 | 14519241 | Methods
We performed a retrospective case series study. All reported SARS patients who stayed in the medical wards or intensive care unit of Princess Margaret Hospital and Wong Tai Sin Hospital on April 16, 2003, were screened. Patients were excluded if subsequent follow-up serologic tests showed no rise in antibody titer against SARS-associated coronavirus. All eligible SARS patients, except three, were recruited into the study. One healthcare worker refused to be studied, and two patients who were suspected of contracting the infection during their hospital stay were also excluded. This cohort was followed up until May 20, 2003. Data were collected through the Hospital Authority’s computerized clinical management system, case record review, and a questionnaire survey assisted by the nursing staff of each SARS ward. Age, sex, occupation, residential address, smoking habit, time between onset of fever and start of antiviral therapy, coexisting conditions, and laboratory data were the variables under study. Outcome variables were the following: dependency on high amounts of oxygen (requiring at least 3 L/min of oxygen through a nasal cannula) and admission to an intensive care unit or death.
Statistical Analysis
Categorical variables were analyzed with the chi-square test, and the means of continuous variables were compared with the Student t test. Association among continuous variables was assessed with the Pearson correlation coefficient. Multivariate logistic regression by backward stepwise analysis was performed to identify independent variables that correlated with the clinical outcome as of May 20, 2003. Cox’s regression model was used to study survival data. Plus-minus values are mean ± standard deviation; a p value of <0.05 was considered significant, and all probabilities were two-tailed. SYSTAT software (version 10.0, SPSS, Chicago, IL) was used for statistical analysis.
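The authors used SYSTAT; purely to illustrate the modelling steps named above (and omitting the backward stepwise selection), the two multivariate analyses could be sketched in Python as follows. The data file and column names are invented for the example:

```python
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

df = pd.read_csv("sars_cohort.csv")  # hypothetical data file and column names

# Multivariate logistic regression: adverse outcome vs. pretreatment variables
X = sm.add_constant(df[["age", "neutrophil_count", "ldh"]])
logit = sm.Logit(df["icu_or_death"], X).fit()
print(logit.summary())

# Cox proportional hazards model for survival time
cph = CoxPHFitter()
cph.fit(df[["time_days", "died", "age", "neutrophil_count", "ldh"]],
        duration_col="time_days", event_col="died")
cph.print_summary()
```

| Results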
The study population consisted of 127 male and 196 female patients, ranging in age from 18 to 83 (41±14). Forty-seven (15%) patients were healthcare workers. One hundred thirty-three (41%) were Amoy Gardens residents. Two hundred seventy-three (85%) patients were in good health. The coexisting conditions are listed in Table 2 . Psychiatric illness, hepatitis B carrier status, and thalassemia trait status were not classified as coexisting conditions. Fifteen (14%) males and 7 (4%) females were current smokers. The overall prevalence of smoking among SARS patients was 7.6% (9.1% if healthcare workers are excluded). None of the affected healthcare workers smoked. The symptoms exhibited fulfilled the diagnostic criteria of the Hospital Authority’s SARS registry.
All patients had lung involvement, documented either by chest x-ray or high-resolution computed tomographic scan of the thorax. Lymphopenia, found in 221 (68%) patients, was a prominent feature in those who sought treatment. Other initial laboratory findings included thrombocytopenia (41%), elevated creatine kinase level (14%), and elevated lactate dehydrogenase level (42%). Initial bacterial cultures were negative. Virus screening was negative for adenovirus, respiratory syncytial virus, influenza A and B, and parainfluenza virus. Two hundred and seven (64%) patients had reverse transcriptase–polymerase chain reaction (RT-PCR) assays performed for SARS-associated coronavirus, and 128 (62%) of the results were positive. Two hundred and forty-two (75%) patients had completed serologic testing. The diagnosis of recent SARS-associated coronavirus infection was confirmed by either RT-PCR assays or serologic test in 286 (89%) patients. The sensitivity of RT-PCR assays was 58% (95% confidence interval [CI], 50% to 66%).
Our patients sought treatment 3.9±2.7 days after onset of fever. The interval between onset of fever and admission was positively correlated with admission neutrophil count (Pearson r=0.1, p=0.07), admission platelet count (Pearson r=0.1, p=0.06), and initial lactate dehydrogenase level (Pearson r=0.36, p<0.001). An antibiotic was started immediately after admission in all cases. Either levofloxacin, 500 mg once a day, or amoxicillin/clavulanic acid, 375 mg three times a day plus clarithromycin, 500 mg twice a day, was used to protect against community-acquired pneumonia. All patients were also treated with oral or intravenous ribavirin, according to protocol. Most (94%) were given either intravenous hydrocortisone or oral prednisolone, according to protocol. Five patients received intravenous methylprednisolone as a form of steroid therapy. The dose was administered at 3 mg/kg once a day and would be tapered down to 1 mg/kg if the patient showed a clinical response. Pulsed doses of methylprednisolone (500 mg per dose) were given as initial treatment in 12 patients, who then received maintenance steroid therapy. Two patients were treated with ribavirin only. Ribavirin plus steroid therapy was administered 1.2±1.7 days after admission. The interval between admission and initiation of antiviral therapy was negatively correlated with the interval between onset of fever and admission (Pearson r = –0.17, p=0.003).
Clinical Outcome
In 115 (36%) patients, the disease was limited with resolution of fever and pneumonitis. Two hundred and eight (64%) patients had either clinical or radiologic evidence of progression of pneumonitis, and they received 2.9±2 g pulsed dose methylprednisolone therapy. Maintenance steroid was resumed after pulsed dose therapy. Patients who were given pulsed doses of steroids were treated with potent broad-spectrum intravenous antibiotics (piperacillin and tazobactam) to protect against hospital-acquired infection. Hyperglycemia, hypokalemia, flare-up of hepatitis B infection, hospital-acquired infection, and steroid psychosis were the acute side effects encountered. Hepatitis B carriers were treated with lamivudine, 100 mg once a day; no liver failure occurred in members of this cohort. Disease progression was apparently arrested by pulsed dose steroid therapy in 98 (30%) patients. In the remaining 110 (34%) patients, the illness ran a severe and protracted course, and the patients needed high doses of oxygen. Sixty-seven (21%) had been admitted to the intensive care unit, and 42 (13%) required ventilator support. Twenty-six patients died (12 males and 14 females). The crude mortality rate of our cohort after 47±8 days of follow-up was 7.9% (95% CI, 5% to 10.8%) and was an underestimation because of sampling bias. Those who died before April 16, 2003, were excluded from our sample, while long-term survivors were retained for study. Among them, 10 had concurrent medical illness. No healthcare worker in this cohort died. Diabetes was found in three patients who died, and hypertension in four who died. Eleven of those who died lived in Amoy Gardens. A young pregnant woman died after delivery, despite aggressive treatment.
Age, sex, healthcare worker status, Amoy Gardens resident status, presence of coexisting conditions, interval between onset of fever and therapy (ribavirin plus steroid), neutrophil and platelet count on admission, and initial creatine kinase and lactate dehydrogenase levels were the correlates of clinical outcome under study. Variables with a p value of <0.1 by univariate analysis were entered into the multivariate regression model. By multivariate logistic regression, advanced age, high neutrophil count on admission, and high initial lactate dehydrogenase level were independent correlates of high oxygen dependency as well as intensive care unit admission or death ( Table 3 ). By Cox’s backward stepwise regression, young age, low neutrophil count on admission, and healthcare worker status (p=0.05) were favorable independent correlates of survival time ( Table 3 ). A dose-response relationship also existed between the independent correlates and clinical outcome ( Figure 1 , 2 , 3 ). We used the term “correlates” instead of “predictors” of outcome because of the method we used, a case series.
The second serology titer obtained after the end of the second week was negatively correlated with age (Pearson r=–0.13, p=0.05) and admission lymphocyte count (Pearson r=–0.17, p=0.01). Conversely, the neutrophil count on admission was positively correlated with the second serology titer (Pearson r=0.2, p=0.003). The pulsed dose of steroid was not shown to affect the second serology titer (Pearson r=0.1, p=0.18). Patients who depended on high oxygen therapy had a higher second antibody titer against SARS-associated coronavirus (p = 0.05). | Discussion
The virus attacked persons of both sexes and all ages. Many were previously in good health and the wage earners in their families. Not infrequently, several members of a family were admitted to the hospital. The need for isolation discouraged close social contact. Unfortunately, some of the patients were also stigmatized. The psychosocial effect of SARS is by no means a lesser problem.
RT-PCR assay for SARS-associated coronavirus is a new test, and its sensitivity and specificity have yet to be established. In our cohort, the sensitivity was 58%, and results depended on sampling technique and stage of disease ( 10 ). Contamination of a specimen could lead to a false-positive result. A false-negative result could arise from performing the test in the very early or late stage of the disease. Diarrhea was common among Amoy Gardens SARS patients. The virus could be found in stool by RT-PCR assays. A negative test does not rule out the diagnosis, however. The serologic test remains the standard criterion of definitive diagnosis. Pulsed doses of steroid did not seem to affect the humoral response of SARS patients. In retrospect, the intensity of antibody response was related to clinical outcome and associated pretreatment prognostic factors. The viral load could be a determinant of these prognostic association factors.
Our hematologic and biochemical data, as well as associated prognostic factors, agreed with the work of Lee et al. ( 4 ). Both advanced age and high neutrophil count on admission were associated with poor outcome. We found that initial lactate dehydrogenase level was also an associated prognostic factor. The early phase of SARS is characterized by lymphopenia and thrombocytopenia. As the disease progresses, both neutrophil and platelet counts rise, accompanied by an elevation in lactate dehydrogenase level. The neutrophilic response is important in the pathogenesis of hypersensitivity pneumonitis, and thus the initial neutrophil count could also indicate disease progression. Lactate dehydrogenase level reflects tissue necrosis related to immune hyperactivity in SARS and thus relates to poor outcome. Patients with high neutrophil counts and lactate dehydrogenase levels on admission could have been late in seeking treatment or have experienced heavy exposure to the virus.
Effect on Healthcare Workers
The spread of the disease to healthcare workers is a major problem in any country dealing with SARS. Intubation, nasopharyngeal aspiration, chest physiotherapy, handling of excreta, and even feeding become high-risk procedures. All healthcare workers working in Hospital Authority hospitals are required to follow the recommended personal protective equipment standards ( 11 ). The level of precaution depends on the risk in the work area and the type of procedure performed. All healthcare workers working in a SARS area wore N-95 masks, face shields, caps, gowns, and surgical gloves. The intensive care unit was a high-risk area in this cohort ( Table 4 ). However, healthcare workers working in a non-SARS area were not spared. They contracted the disease from SARS patients who sought treatment early or exhibited atypical signs and symptoms. By univariate analysis, healthcare worker status was negatively correlated with death. Healthcare workers were younger. They sought treatment earlier and had a lower neutrophil count and lower initial lactate dehydrogenase level on admission ( Table 5 ). Nevertheless, healthcare worker status was still an independent survival correlate after controlling for these confounding variables. The current safety precautions could not prevent all frontline healthcare workers from contracting SARS, but minimizing individual exposure to the virus might reduce the viral load, subsequent immune hyperactivity, and the risk for a fatal outcome.
Benefit of Treatment
Most of the patients in this cohort were treated according to protocol. The clinical outcome did not represent the natural history of SARS. The only variable that was related to the benefit of treatment was the time from onset to treatment. Donnelly et al. found that the time between the onset of symptoms and admission to hospital did not affect the death rate ( 12 ). In this study, patients who sought treatment early and received antiviral and steroid combination therapy were not shown to do better by multivariate analysis.
The Hospital Authority adopted an aggressive treatment protocol during the peak of the SARS epidemic in Hong Kong. Broad-spectrum antibiotics and a combination of ribavirin and steroid were the mainstays of treatment. The dose of ribavirin used was small to prevent major side effects. The administration of steroids in SARS treatment is controversial, however. Theoretically, the early use of steroids promotes viral replication, enhances infectivity, and possibly causes a rebound of infection. Peiris et al. found that the viral load peaked at day 10 in SARS patients treated with both ribavirin and steroids ( 13 ). However, immunosuppression or, more precisely, immunomodulation, is believed to be an effective therapy at the second stage of SARS. The current consensus among the Hospital Authority’s expert panel is to begin administering a steroid or pentaglobin at the second stage of SARS when a hypersensitivity immune reaction occurs ( 8 ).
Patients who sought treatment early tended to receive antiviral therapy at a later time. This is understandable since the symptoms of SARS are nonspecific, and clinicians also rely on laboratory data for diagnosis. The sensitivity of current RT-PCR assays is not satisfactory. A more sensitive and rapid diagnostic test must be developed, particularly if we have an effective treatment regime in the future. | Conclusion
One third of the SARS patients in our study had a limited disease course. In the remaining two thirds, pneumonitis progressed rapidly after the early use of ribavirin and steroid combination therapy. Apparently, approximately one third responded to pulsed doses of steroids, while the other third depended on treatment with high amounts of oxygen. Intensive care was required for 21% of patients. Advanced age, high neutrophil count on admission, and elevated initial lactate dehydrogenase level were independent correlates of an adverse clinical outcome. Strong evidence to support early and routine use of ribavirin and steroid combination therapy in all SARS patients does not exist.
We need to investigate new antiviral agents and test the efficacy of steroids in randomized controlled trials. SARS is an entirely new emerging disease and its clinical course varies widely. By stratifying our patients according to risk, we could individualize our treatment protocol. In addition, we need a more sensitive and rapid diagnostic test for SARS-associated coronavirus infection, both for treatment and for forming cohorts of patients infected with this deadly disease. | Severe acute respiratory syndrome (SARS) poses a major threat to the health of people worldwide. We performed a retrospective case series analysis to assess clinical outcome and identify pretreatment prognostic correlates of SARS, managed under a standardized treatment protocol. We studied 127 male and 196 female patients with a mean age of 41±14 (range 18–83). All patients, except two, received ribavirin and steroid combination therapy. In 115 (36%) patients, the course of disease was limited. Pneumonitis progressed rapidly in the remaining patients. Sixty-seven (21%) patients required intensive care, and 42 (13%) required ventilator support. Advanced age, high admission neutrophil count, and high initial lactate dehydrogenase level were independent correlates of an adverse clinical outcome. SARS-associated coronavirus caused severe illnesses in most patients, despite early treatment with ribavirin and steroid. This study has identified three independent pretreatment prognostic correlates.
The Hong Kong Hospital Authority, which provides more than 90% of inpatient care in Hong Kong, has been responsible for the management of all SARS patients ( 6 ). The Princess Margaret Hospital is a designated treatment center for SARS patients. Convalescent-phase SARS patients are treated in Wong Tai Sin Hospital. More than 500 SARS patients have been treated in these two hospitals since March 2003. The Hospital Authority has established a structured approach in the diagnosis, investigation, and treatment of SARS. The clinical diagnostic criteria of the Hospital Authority’s SARS registry (defined in Table 1 ) were similar to the case definition of probable SARS by the World Health Organization ( 3 ).
Persons infected with the SARS-associated coronavirus may exhibit a wide spectrum of signs and symptoms and a varied clinical course. We have found asymptomatic cases and patients with spontaneous recovery without antiviral or steroid therapy ( 7 ); SARS is at the other end of the disease spectrum. The Hospital Authority’s hypothetical disease model has three phases: viral replication, immune hyperactivity, and pulmonary destruction ( 8 ). Autopsy findings have supported the theory of cytokine deregulation in SARS ( 9 ). Using steroids in the treatment of SARS was based on this hypothesis and on initial clinical experience in the management of SARS in Hong Kong ( 4 ).
The recommended treatment regime at the time of the Amoy Gardens outbreak consisted of antibiotics, ribavirin, and steroid combination therapy. Patients without known epidemiologic contact with SARS patients were treated with antibiotics that would prevent both community-acquired pneumonia and hospital infections. If patients did not respond to antibiotics in 48 h, they would be given a combination of ribavirin and steroid. For patients with an epidemiologic history of contact with a SARS patient, this combination would be started together with the above antibiotic. Ribavirin would be given at a dose of 8 mg/kg intravenously every 8 h. For patients who presented with extensive pneumonitis, a loading dose of 33 mg/kg of ribavirin, followed by 20 mg/kg every 8 h, was given intravenously. Hydrocortisone, 2 mg/kg every 6 h or 4 mg/kg every 8 h, would be administered, together with ribavirin. Oral equivalent doses of ribavirin and prednisolone could be prescribed at any stage of the disease. The total duration of therapy could range from 14 to 21 days. Besides administering steroids, we have tried immunomodulation with intravenous pentaglobin in selected cases. Pulsed doses of methylprednisolone were restricted to those with disease progression and marked lung involvement. Lee et al. have made a comprehensive report of 138 cases of suspected SARS during a hospital outbreak in Hong Kong ( 4 ). Our study investigated the SARS patients after the Amoy Gardens outbreak to identify associated pretreatment prognostic factors for risk stratification and assess the clinical outcome of SARS under a standardized treatment protocol. | Acknowledgments
We thank all medical staff of the Department of Medicine and Geriatrics, Princess Margaret Hospital, Tom Buckley, W.W. Yan, Winnie Chan, Y.C. Chan, H.P. So, and Eva Y.W. Ng for their dedication in combating the SARS outbreak in Hong Kong. | CC BY | no | 2022-01-31 23:36:33 | Emerg Infect Dis. 2003 Sep; 9(9):1064-1069 | oa_package/4b/eb/PMC3016795.tar.gz |
|
PMC3016796 | 14528882 | To the Editor : Cryptococcus neoformans is an opportunistic fungus that causes meningoencephalitis, primarily in immunocompromised patients. However, C. neoformans can also cause illness in apparently normal hosts. The yeast is a heterothallic basidiomycete with two mating types, MAT a and MATα identified in all the four serotypes, A, B, C, and D. However, the mating type a of serotype A is a rare and recent finding. One strain was isolated from a Tanzanian AIDS patient and a second from the Italian environment; the first was mating defective ( 1 , 2 ). We report the isolation of a serotype A MAT a strain of clinical origin that was characterized by mating at high frequency under standard laboratory conditions.
In August 1998, a 45-year-old Hungarian man was admitted to the Laszlo Hospital for Infectious Diseases in Budapest because of septic fever. The patient had a history of hematologic malignancy (Hodgkin disease), which was diagnosed in 1991. He had received several courses of chemotherapy and radiation. After 4 years of remission, the disease recurred in September 1995 (stage IVa), for which he received several more courses of chemotherapy, according to protocols BEAM (carmustine, etoposide, cytarabine, melphalan) and CEP (lomustine, etoposide, prednimustine). In February 1998, another relapse was diagnosed and the patient was given chemotherapy, according to protocol COPP (cyclophosphamide, vincristine, prednisolone, procarbazine) four times. In April 1998, he was hospitalized with herpes zoster infection and treated with acyclovir. At the last admission, in August 1998, he was pancytopenic and had septic fever. Salmonella enteritidis was cultured from his blood. The salmonella septicemia was successfully treated with ceftriaxone. As palliative treatment, he received 4x10 mg vinblastine for his residual disease. On September 30, he became febrile again. Cryptococcus neoformans was isolated from his blood, although cerebrospinal fluid culture and serologic tests were negative. On the right fossa cubitalis, cellulitis and a tender mass were present, although he did not have a history of recent central line or cytostatic treatment on this side. Cryptococcus neoformans was isolated from the sample taken from the mass. Antifungal treatment was started with 600 mg fluconazole per day and continued with amphotericin B, 1 mg/kg/day. The patient died 6 weeks after the isolation of Cryptococcus, probably because of his uncontrolled Hodgkin disease. As far as the physician was aware, the patient had not visited other countries.
The strain, isolated from the patient’s blood during the European Confederation of Medical Mycology Cryptococcosis Survey, was sent for typing to the European Convenor. The isolate, IUM 99-3617, was identified as serotype A using Crypto Check serotyping kit from Iatron Laboratories (Tokyo, Japan) and genotyped as VN6 by multiplex polymerase chain reaction (PCR) ( 3 ) by using the primers previously described ( 4 , 5 ). The fungus was shown to be haploid by cytofluorimetric analysis ( 6 ). The strain’s fertility was investigated, according to Kwon-Chung ( 7 ), by crossing the isolate with reference serotype A strains H99 ( MATα ) and IUM 96-2828 ( MAT a ), and with serotype D congenic strains JEC20 ( MAT a ) and JEC21 ( MATα ). When cocultured with MATα strains (H99 and JEC21), IUM 99-3617 produced abundant basidiospores. On the contrary, the strain did not mate with JEC20 ( MAT a D ) or with IUM 96-2828 ( MAT a A ).
The genotypic and phenotypic characteristics of the fungus were then compared with those of serotype A ( MAT a and MATα ) reference strains. The mating type was analyzed by using PCR amplification of MF a , MFα genes, and STE20 a - and STE20α- specific genes for serotype A and serotype D. PCR reaction was performed as previously reported ( 4 ). The amplification product showed that IUM 99-3617, like IUM 96-2828, contains only serotype A STE20 a and MF a genes.
To further confirm that IUM 99-3617 was MAT a in mating type, MF a and STE 20 a genes were sequenced by an ABI PRISM 310 automatic sequencer using Big Dye Terminator (Applied Biosystems, Monza, Italy) and the forward and reverse primers previously reported ( 4 ). The sequences were then aligned with the reported sequences of IUM 96-2828 ( 2 , 8 ), the Tanzanian isolate 295.1 ( 1 ), H 99, and the congenic JEC 20 and JEC 21 strains. The IUM 99-3617 sequences were found to be identical to those of IUM 96-2828 and of the Tanzanian isolate 295.1. The MF a A and the STE20 a A sequence of IUM 99-3617 have been submitted to the GenBank database ( www.ncbi.nlm.gov/Bankit/nhpbankit.cgi ) under accession numbers AY182035 and AY182036, respectively.
Virulence studies in the mouse model demonstrate that, like IUM 96-2828, the strain is significantly less virulent than H99. The latter strain caused 100% deaths by day 29, while IUM 99-3617 took until day 60 to kill 60% of mice (unpub. data). No difference was observed among the three serotype A strains when virulence factors such as capsule, melanin, phospholipase activity, and ability to grow at 37°C were tested.
The MAT a of C. neoformans serotype A was long regarded as extinct or as existing in an undiscovered ecologic niche until the recent findings of the clinical and environmental isolates ( 1 , 2 ). The existence of MAT a in nature is also supported by recent studies designed to establish the origin of the serotype AD strains ( 4 , 5 ). These studies demonstrated that AD strains were diploid or aneuploid hybrids derived from a fusion of serotype A and D parents and that several of them were harboring a serotype A MAT a locus. These hybrid strains have been found fairly often in Europe ( 9 , 10 ).
The finding of this isolate provides evidence of the pathogenic role of this rare mating type, emphasizes the critical function of molecular genetic tools in the characterization of C. neoformans populations, and represents an advance in knowledge of this fungal species whose genome is undergoing identification by a worldwide research team. | CC BY | no | 2022-01-27 23:40:11 | Emerg Infect Dis. 2003 Sep; 9(9):1179-1180 | oa_package/8d/3c/PMC3016796.tar.gz |
|||||||
PMC3016815 | 20874388 | Introduction
In West Africa (Niger, Burkina Faso, Benin), the solitary ectoparasitoid, Dinarmus basalis Rondani (Hymenoptera: Pteromalidae), and its sympatric species Eupelmus vuilleti Crawford and E. orientalis Crawford (Hymenoptera: Eupelmidae) parasitize the larvae and pupae of Callosobruchus maculatus (F.) and Bruchidius atrolineatus (Pic) (Coleoptera: Bruchidae) which develop inside the seeds of the cowpea, Vigna unguiculata (L) Walp, (Fabales: Fabaceae). After harvesting, these seeds are stocked in granaries where successive generations of bruchids develop, fluctuating in space and time. A survey of the hymenoptera population shows that the most abundant species at the beginning of storage is E. orientalis (72%), while E. vuilleti and D. basalis account for 12% and 16% respectively ( Ndoutoume-Ndong and Rojas-Rousse 2007 ). The E. orientalis population decreases gradually during storage, disappearing completely within two months as most individuals escape directly from the storage structures ( Ndoutoume-Ndong and Rojas-Rousse 2008 ). However, D. basalis and E. vuilleti have been found coexisting for several months in these structures which form uniform and relatively closed habitats, resulting in inter- and/or intraspecific competition ( Lammers and van Huis 1989 ; Monge and Huignard 1991 ).
The coexistence of D. basalis and E. vuilleti is based on a counter-balanced competition, i.e. on two opposing behaviours ( Zwölfer 1979 ; van Alebeek et al. 1993 ). This strategy implies that the females of the competitive species have interspecific discrimination capacities. In fact, D. basalis females lay fewer eggs in the presence of E. vuilleti females or in hosts parasitized by them ( van Alebeek et al. 1993 ). In contrast, E. vuilleti females have developed an aggressive strategy, concentrating their ovipositions on hosts already parasitized by D. basalis, killing the D. basalis eggs and/or neonatal larvae by stinging them ( van Alebeek et al., 1993 ). Host discrimination involves the detection of external and/or internal cues at the host site. External cues can be detected more rapidly than internal ones and over a longer period since they can be picked up continuously while foraging ( Hoffmeister and Roitberg, 1998 ).
This competitive strategy of taking advantage of resources foraged by heterospecific individuals is a characteristic feature of cleptoparasitism ( Sivinski et al. 1999 ; Jaloux and Monge 2006 ). The increased encounter rate with parasitized hosts and cleptoparasitic efficiency appears to be based on the detection of two independent signals ( Jaloux et al. 2004 ; Jaloux and Monge 2005 , 2006 ). First, olfactory detection of Dufour gland hydrocarbons, left by the D. basalis female on the cuticle on the surface of the seed, allows a seed which has been visited or exploited by D. basalis to be recognized. Secondly, detection of the proteinaceous substance produced by the D. basalis venom gland and deposited on the edge of the hole drilled through the cotyledon to reach the host indicates that the host has probably been exploited, triggering the cleptoparasitic behaviour. Because this protein is not volatile, it is probably detected through antennal or oral contact chemoreceptors ( Jaloux and Monge 2006 ).
Internal stimuli have poor accessibility but are the most reliable indicators of previous parasitism. The study of intraspecific competition between D. basalis females shows that host discrimination is achieved through the perception of cues inside the seed, because it is only when the females have probed the host chamber with their ovipositor that they decide to accept or reject a host for oviposition ( Gauthier 1996 , Gauthier et al. 2002 ). In this species, host discrimination is expressed through two mechanisms acting independently ( Gauthier et al. 1996 ). First, a time-dependent process: the deterring oviposition factor(s) can be perceived by the wasp after the first oviposition, reaching a maximum activity with a 24-h-old embryo; secondly, a host-quality indicator process comes into play after a host has been parasitized for 48 h ( Gauthier et al. 1996 ). During this time, a transfer of chemical information from the egg to the surface of the host occurs ( Gauthier et al. 1996 ; Gauthier et al. 2002 ). The gradual deterrent effect supports the hypothesis of substances released by the egg in the course of its embryonic development. One question arising from these observations concerns the means by which the oviposition deterring effect is transferred from the egg to the host ( Gauthier and Monge 1999 a ).
Faced with hosts offering their offspring little chance of survival, D. basalis females lay a few eggs and resorb the others; egg resorption is a transitory process which ceases after 5 days if there is a return to favourable conditions ( Gauthier 1996 ; Gauthier and Monge 1999 b ).
However, in the Sudano-Sahelian zone of Burkina Faso and in the Guinean zone of Togo, the biological control of bruchids by releasing D. basalis females in leguminosae V. unguiculata granaries leads to unfavourable conditions at the end of the storage period due to substantially reduced numbers of the bruchid C. maculatus following the development of successive generations of D. basalis ( Sanon et al. 1998 ; Amevoin et al. 2007 ). In this extreme situation, E. vuilleti and E. orientalis females, living sympatrically with D. basalis, are able to express facultative hyperparasitism when confronted by hosts parasitized by conspecific or heterospecific females ( Rojas-Rousse et al. 1999 ; Rojas-Rousse et al. 2005 ). Facultative hyperparasitism involves the development of the progeny as either primary or secondary parasitoids ( Sullivan 1987 ). In fact, facultative hyperparasitism is a very aggressive behaviour of females that sting and kill the developing primary parasitoid (L5 larvae or pre-pupae or pupae) before ovipositing on it. For E. vuilleti females the facultative hyperparasitism can be considered as an extreme expression of cleptoparasitism (expressed only towards eggs and/or neonatal larvae of primary parasitoids) ( van Alebeek et al. 1993 ; Leveque et al. 1993 ; Jaloux et al. 2004 , 2005 ; Rojas-Rousse et al. 2005 ).
At the end of the storage period in granaries, because the development of successive generations of D. basalis leads to a reduced number of unparasitized hosts and an increased number of parasitized hosts, D. basalis females might have to forage among hosts that offer their offspring little chance of survival. To understand the consequences of these particular environmental conditions on the population of D. basalis, this study investigated the egg-laying behaviour of D. basalis females when faced with a prolonged deprivation of suitable hosts leading to extreme ‘oviposition pressure’. The egg-laying behaviour of virgin D. basalis females was tested with hosts parasitized by conspecific females in which the developing primary parasitoid larvae had reached the last larval instar (L5) or the pupal stage. By this time the phytophagous host has been almost entirely consumed by the developing primary parasitoid larva ( Rojas-Rousse et al. 2005 ). Under these experimental conditions of low-quality host patches, we investigated the egg-laying behaviour of D. basalis females and their ability to develop at the expense of their conspecific larvae, i.e. to hyperparasitize. | Materials and Methods
Insect stocks
Bruchid and parasitoid stocks were derived from C. maculatus and D. basalis adults emerging from cowpea cultures ( V. unguiculata ) at the end of the rainy season in the Niamey region. C. maculatus is a common pest that develops inside cowpea seeds, concealed from the parasitoid females.
In the laboratory, bruchids and the primary parasitoids were mass-reared in climate-controlled rooms under conditions close to those of their area of origin: 12:12 L:D, 23–33° C and 40% RH.
The strain of C. maculatus was maintained by placing males and females (50 pairs) in rearing boxes containing 300 cowpea seeds. The females laid eggs on the seeds, and the neonate larvae perforated the coat. The four larval stages and the pupal stage were completed within the seed.
For primary parasitoid rearing, hundreds of 1- or 2-day-old adults of D. basalis were placed in transparent cages (25 × 30 × 40 cm) in the presence of 200 cowpea seeds containing L4 larvae or pupae of C. maculatus. Parasitoids were provided daily with a sucrose-saturated cotton roll fixed in the middle of the cage. After 2 days the seeds, parasitized or not, were removed from the cages. Primary parasitoid adults emerged from the parasitized seeds after 12–15 days for D. basalis. The parasitoid females used in the experiments were isolated in Petri dishes and fed with a sucrose solution.
Experimental methods
All the experiments were carried out in the laboratory. Translucent gelatine capsules were used that mimic the bruchid pupal chamber, the size and shape being replicated by using both parts of the capsule ( Cortesero and Monge 1994 ; Damiens et al. 2001 ; Jaloux et al. 2004 ). This system allows development to occur normally.
Activation of oogenesis in D. basalis virgin females during their first four days of life; production of paralysed hosts and of male D. basalis L5 larvae and pupae
Immediately after emergence, the D. basalis virgin females were put into groups of eight in small cylindrical Plexiglass boxes (250 cm³) and provided daily with 16 cowpea seeds, each containing one C. maculatus L4 larva or pupa as primary host, until the evening of the fourth day. Egg production reaches a peak on the fourth day of egg-laying and remains constant until the females are 8 days old ( Gauthier 1996 ; Gauthier and Monge 1999a ).
The seeds were removed every day and stored until D. basalis reached its terminal developmental phase (last L5 larval stage and/or young pupae) on day 7 ± 1 after egg-laying ( Damiens et al. 2001 ). The seeds were then opened to isolate the stung and paralysed C. maculatus hosts; the D. basalis L5 larvae and young pupae developing on them were presented to the D. basalis virgin females during the choice tests.
Hyperparasitism choice between parasitized D. basalis L5 larvae and young pupae
To produce extreme oviposition pressure, the D. basalis virgin females received no hosts from the fourth to eighth day of life. These virgin females, ‘conditioned’ by deprivation of suitable hosts for 4 days, were used in the hyperparasitism tests.
The D. basalis L5 larva or pupa was confined in a transparent cell, the same size and shape as the lodge of a bruchid larva in a seed, and with holes drilled on the surface to simulate the bruchid larval gallery providing access to the host.
Eight ‘conditioned’ virgin D. basalis females were kept in a small cylindrical Plexiglass box (250 cm³) with eight D. basalis hosts put singly into gelatine capsules containing alternately one D. basalis L5 larva or one young pupa (total per box: four L5 larvae and four young pupae). Egg-laying was observed every day for 4 days (total in 4 days: 16 L5 larvae and 16 young pupae). Six sets were completed (total: 16 × 6 L5 larvae and 16 × 6 young pupae). These experiments were carried out under the same climatic conditions as those used for rearing bruchids and parasitoids.
Choice between non-stung (healthy) and stung-paralysed C. maculatus hosts
D. basalis can distinguish pupae from L5 larvae by their physiology and by the texture of their integument: soft in larvae and chitinised in pupae. Before egg-laying, D. basalis females inflict a sting which has a paralysing action on the host. However, as the females' decision to accept or reject a host is based on the perception of cues inside the seed when they probe the host chamber with their ovipositor, we hypothesized that D. basalis females could be deluded about the quality of the host stage ( Gauthier et al. 2002 ). To test the hypothesis that hosts can be rejected due to their immobility after the paralysing sting, the egg-laying behaviour of ‘conditioned’ virgin females exposed to healthy and stung-paralysed primary C. maculatus hosts was examined. The same experimental method as described above was used, since the stung-paralysed C. maculatus L4 larval hosts were easy to identify due to their immobility and melanized scars ( Rojas-Rousse et al. 1995 ).
These experiments were carried out under the same conditions as those used for choice between D. basalis L5 larvae and young pupae. Egg-laying of virgin D. basalis females (4 days old) was observed for 2 consecutive days (total in 2 days: 8 stung C. maculatus and 8 non-stung C. maculatus L4). Seven sets were studied (total: 8 × 7 stung L4 larvae, and 8 × 7 healthy C. maculatus L4 larvae). The experiments were carried out under the same climatic conditions as those used for rearing bruchids and parasitoids.
Development of eggs laid on hyperparasitized hosts
The number of eggs laid by virgin D. basalis females on D. basalis hosts was noted, and the hyperparasitized host + eggs were placed in a cell in a Plexiglass sheet closed by a Plexiglass cover-slide until emergence of the hyperparasitoid adult male. This developmental chamber has already been used successfully for developing parasitoids ( Darrouzet et al. 2003 ). As only virgin D. basalis females were used, when more than one egg was laid per host the ensuing larval competition did not affect the final sex ratio, because reproduction was by arrhenotokous parthenogenesis and consequently only male eggs were involved.
Statistical analysis
The chi-square test was used to test the homogeneity of the egg-laying behaviour of D. basalis females between the data sets. If homogeneity was accepted, all the data sets could be pooled.
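A minimal sketch of this pooling decision in Python with SciPy, kept purely illustrative: the per-set counts below are invented placeholders, not the study's raw data.

```python
# Homogeneity check across data sets before pooling (illustration only).
from scipy.stats import chi2_contingency

# Rows = data sets; columns = (hyperparasitized, non-hyperparasitized) counts.
counts = [[8, 4], [7, 5], [6, 5], [8, 5], [7, 5], [7, 5]]  # hypothetical tallies

chi2, p, df, expected = chi2_contingency(counts)
if p > 0.05:
    # Homogeneity accepted: pool the data sets, as done in this study.
    pooled = [sum(col) for col in zip(*counts)]
    print("pooled (hyper, non-hyper):", pooled)
```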
To evaluate the hyperparasitism behaviour of D. basalis females, the observed numbers of hyperparasitized and non-hyperparasitized L5 larvae were compared with those theoretically expected under the null hypothesis, whereby no preference would be shown by the egg-laying females. According to this null hypothesis, the theoretical probability of hyperparasitism was 1/2. The same method was used to demonstrate the behaviour of the females exposed to non-stung (i.e. healthy) and stung-paralysed C. maculatus hosts. The eggs laid on each type of hyperparasitized host were counted and compared using Student's t-test. | Results
Choice between D. basalis L5 and pupae hosts
In each data set studied, some D. basalis L5 hosts presented at the beginning of the test reached the pupal stage, and the egg-laying distribution indicated that the D. basalis pupae were not hyperparasitized ( Table 1 ).
Some of the D. basalis L5 hosts were hyperparasitized ( Table 1 ). The null hypothesis was tested that there would be no difference in the proportion of hyperparasitized and non-hyperparasitized D. basalis L5 hosts between the six data sets. Since this hypothesis of homogeneity was confirmed, the six data sets were pooled (calculated χ² = 3.65; critical χ² at α = 0.05 with df = 5 is 11.07).
To evaluate the behaviour of D. basalis females exposed to L5 hosts, the numbers of hyperparasitized (N = 43) and non-hyperparasitized (N = 29) hosts observed were compared with those theoretically expected under the null hypothesis. Under these conditions, D. basalis females hyperparasitized as many L5 hosts as they avoided (calculated χ² = 2.72; critical χ² at α = 0.05 with df = 1 is 3.84). These results indicate only that D. basalis females were able to oviposit on their own last-instar larvae.
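As a worked check (an editorial illustration using only the counts reported above), the statistic follows from an expected count of (43 + 29)/2 = 36 per class:

```latex
\chi^2 \;=\; \frac{(43-36)^2}{36} + \frac{(29-36)^2}{36}
       \;=\; \frac{49+49}{36} \;\approx\; 2.72 \;<\; \chi^2_{0.05,\,df=1} = 3.84
```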
Choice between primary non-stung (healthy) and stung-paralysed C. maculatus hosts
To test whether D. basalis pupae hosts were avoided due to their immobility, D. basalis females were presented alternately with stung-paralysed and non-stung (i.e., healthy) primary C. maculatus L4 hosts. They were able to lay eggs on both categories of hosts ( Table 2 ).
The null hypothesis was tested whereby no difference in parasitism would be observed between the seven data sets. As this hypothesis of homogeneity was confirmed, the seven data sets were pooled (calculated χ² = 1.53; critical χ² at α = 0.05 with df = 6 is 12.59).
To evaluate whether D. basalis females showed a preference for one or other type of primary host, the numbers of parasitized and avoided hosts were compared with those theoretically expected under the null hypothesis (i.e., 44 parasitized and 44 avoided C. maculatus L4 hosts: (43 + 45)/2) ( Table 2 ). The observed number of primary hosts (healthy or stung) parasitized by D. basalis females was not significantly different from the theoretically expected number (calculated χ² = 0.19; critical χ² at α = 0.05 with df = 1 is 3.84). Thus, when D. basalis females could choose between non-stung (healthy) or stung-paralysed C. maculatus L4 larvae, they parasitized both categories equally.
However, analysis of the eggs laid per host showed that 51.8% (29/56) of the stung-paralysed primary hosts and 64.28% (36/56) of the healthy primary hosts presented were superparasitized (i.e., more than one egg laid per host) ( Figure 1 ). The percentage of superparasitized hosts did not differ significantly between the two types of primary host presented alternately to D. basalis females (t-test for comparison of frequencies: t = 1.02; α = 0.05, t(0.05, ∞) = 1.96). However, on average, D. basalis females laid significantly more eggs per healthy primary host than per stung-paralysed primary host: 4.25 ± 1.05 and 2.04 ± 0.29, respectively (mean number of eggs laid ± 95% confidence interval) (Student's t-test: t = 3.65; α = 0.05, t(0.05, ∞) = 1.96).
Development of eggs laid on hyperparasitized D. basalis L5 hosts
The individual development of 102 hyperparasitized D. basalis L5 hosts was observed in the Plexiglass cells. Since one to twelve eggs were laid per hyperparasitized D. basalis L5 host, hyperparasitism did not prevent the occurrence of superparasitism ( Table 3 ). Although only one egg per host reached the adult stage due to the solitary behaviour of the D. basalis larvae, under our experimental conditions we observed that 64.7% of D. basalis L5 hosts were super-hyperparasitized (66/102) ( Table 3 ). Under these experimental conditions, egg development occurred in three possible ways. First, 60.78% of the hyperparasitized D. basalis L5 hosts (62/102) reached the miniaturized hyperparasitoid adult male stage. Second, the eggs hatched but the neonatal hyperparasitoid larvae died, allowing the D. basalis L5 host to reach the adult stage (i.e., the primary parasitoid adult); this was observed in nine hyperparasitized hosts (9/102). Finally, the eggs laid on 31 hyperparasitized D. basalis L5 hosts died during embryonic development (31/102). | Discussion
Parasitoids use a large number of physical and chemical cues when they forage for hosts, and many of them mark the host patch, the host substrate, and/or the host itself ( Godfray 1994 ; Quicke 1997 ). These marks may enable females to discriminate between unexploited and previously exploited resources, and also inform other females, conspecifics or heterospecifics, about the presence of a possibly superior competitor ( Roitberg and Mangel 1988 ). In this way, D. basalis parasitoid females are able to discriminate the quality of their host, but detailed behavioural observations show that this host discrimination is based on internal cues, and the lack of evidence of external marks is unexpected ( Gauthier et al. 2002 ). For D. basalis females, host quality (healthy or parasitized 24 h or 48 h beforehand) does not affect the attractiveness of seeds in which hosts are concealed ( Gauthier et al. 2002 ). It is only when the D. basalis female has drilled the cotyledons of the seed to reach the host chamber and has probed this chamber with her ovipositor that the decision is made whether to accept or reject the host for oviposition ( Gauthier et al. 2002 ). These internal cues have low accessibility but are the most reliable indicators of previous parasitism and of the age of parasitic instars ( Gauthier 1996 ). However, D. basalis females exhibit a wide range of oviposition behavioural plasticity, and host discrimination ability does not always involve avoidance of superparasitism ( Gauthier et al. 1996 ). Under unfavourable conditions, D. basalis females are able to resorb the unlaid eggs with no effect on future oviposition because they have a relatively large daily oviposition window compared to the potential number of eggs laid ( Gauthier 1996 ; Gauthier et al. 1996 ; Gauthier and Monge 1999a ).
Superparasitism may sometimes be advantageous when unparasitized hosts are scarce, and/or the neonatal larvae born from eggs of the last oviposition have a better chance of competing successfully against older competitors, and/or the risk of parasitism by conspecific or heterospecific females is high ( Visser et al. 1992 ; Godfray 1994 ). In this way, when D. basalis females superparasitize hosts, the survival probability of the second egg laid has been shown to vary with the age of the first parasite already on the host ( Gauthier 1996 ). In fact, the survival rate of the second parasitoid increases with the interval between ovipositions ( Visser et al. 1992 ; Mackauer 1990 ).
The use of D. basalis in granaries as a biological bruchid control agent leads to an increased number of parasitized hosts, creating unfavourable conditions during the storage period ( Sanon et al. 1998 ; Amevoin et al. 2007 ). These conditions were recreated in the laboratory using D. basalis virgin females that did not receive suitable hosts, i.e. primary bruchid L4 or pupae, for four consecutive days. After this extreme oviposition pressure caused by host deprivation, the females were presented with D. basalis L5 larval hosts. Only virgin D. basalis were used in these experiments to eliminate all consequences of parthenogenetic reproduction, i.e. number of matings, sex ratio at egg-laying, etc. The D. basalis L5 larval hosts had molted to a passive stage corresponding to the end of the feeding period on the primary host, i.e. bruchid L4 or pupae. Under these experimental conditions, the D. basalis virgin females were able to hyperparasitize the D. basalis L5 larval hosts. However, future research should investigate the behaviour of inseminated D. basalis females under unfavourable conditions, particularly in view of the fact that the progeny of inseminated Eupelmus vuilleti females that hyperparasitize are largely male ( Rojas-Rousse et al. 2005 ). Since D. basalis hyperparasitoid larvae feed externally, no special adaptations may be needed to attack the primary parasitoid ( Godfray 1994 ). Under our experimental conditions, 60.78% of hyperparasitized D. basalis L5 larval hosts became miniaturized hyperparasitoid adult males. Approximately a third of the hyperparasitized D. basalis L5 larval hosts died after being stung by adult females during egg-laying, although the venomous sting generally only induces permanent paralysis and developmental arrest of the host during the first larval stages ( Doury et al. 1995 ). This hyperparasitism corresponds to the facultative hyperparasitism arising from competition between parasitoids for host resources, which is considered one possible evolutionary pathway leading to obligatory hyperparasitism ( Godfray 1994 ).
The hyperparasitism behaviour did not modify the reproductive behaviour of the egg-laying females because stinging always occurred before egg-laying, although the eggs were laid externally. However, this behaviour did not exclude superparasitism because more than one egg per host could be laid. In fact, 64.7% of the hyperparasitized D. basalis L5 larvae hosts were super-hyperparasitized. However, 8.82% of the hosts without stings reached the primary parasitoid male adult stage after all the neonatal hyperparasitoid larvae died during fights.
Under our experimental conditions, the D. basalis virgin females could choose between D. basalis L5 larval and pupal hosts. Egg distribution showed that no D. basalis pupal host was ever hyperparasitized. Besides their physiological differences, D. basalis L5 larvae and pupae can be distinguished by the texture of their integument (soft in the L5 larvae and chitinised in the pupae), which is the basis of mechanical and/or chemical cues perceived by the females' ovipositor at egg-laying. These cues could underlie the variability of responses to host quality. Another hypothesis is that it is more difficult for a young hyperparasitoid neonatal larva to become implanted on the chitinised integument of a pupa than on the soft integument of an L5 larva, especially if it has not been immobilised, i.e. paralysed, by the adult female at egg-laying. This behaviour, induced by notable physical differences between the L5 larvae and the pupae, was also observed during the primary parasitism of the C. maculatus host. In fact, egg-laying is greater on the larval stages than on the younger non-chitinised and older chitinised pupae ( Terrasse and Rojas-Rousse 1986 ). The immobility of the D. basalis pupae alone did not seem to induce the cues favouring their rejection, because when the D. basalis virgin females could choose between healthy (i.e. non-paralysed) or paralysed C. maculatus L4 larvae, they laid eggs on both categories, although significantly more on the healthy host.
The responses of D. basalis females when faced with host deprivation leading to extreme oviposition pressure revealed that they were able to parasitize their own developing primary last instar larvae. Some species of primary parasitoids can be facultative hyperparasitoids, but the hyperparasitism is always interspecific ( Strand 1986 ). The first recorded observation of a facultative hyperparasitoid developing within its own species was Anaphes victus (Hymenoptera: Mymaridae), and no other hyperparasitoids are known in this family ( Sullivan 1987 ; van Baaren et al. 1995 ).
Facultative hyperparasitism is functionally similar to conspecific superparasitism. In solitary parasitoids, conspecific superparasitism can be advantageous if there are high levels of competition, long inter-patch travel times, low-quality patches and time-limited resources ( van Alphen and Visser 1990 ). The D. basalis females hyperparasitized their own species when they were confronted with a low-quality patch ( D. basalis L5 larvae or D. basalis pupae) and when there was a high level of competition with females who were or had been present in their habitat. With an average success rate of 60%, the facultative hyperparasitism of D. basalis females can be seen as highly adaptive when faced with low-quality patches. In fact, we observed that the D. basalis females were also able to hyperparasitize other species, such as Eupelmus vuilleti and Monoksa dorsiplana Boucek (Hymenoptera: Pteromalidae) (primary parasitoids of bruchids), but secondary parasitoid adults never emerged (personal observations). However, there are fitness costs associated with this mode of development, because a significant decrease in the size of secondary parasitoids has been observed following the depletion of host resources ( Brodeur 2000 ). | Associate Editor: Robert Jeanne was editor of this paper.
This study investigated the egg-laying behaviour of females of the ectoparasitoid Dinarmus basalis Rondani (Hymenoptera: Pteromalidae) when faced with a prolonged deprivation of suitable hosts leading to extreme ‘oviposition pressure’. The egg-laying behaviour of virgin D. basalis females was tested with Callosobruchus maculatus (F.) (Coleoptera: Bruchidae) hosts previously parasitized by conspecific females, in which the developing larvae had reached the last larval instar (L5) or the pupal stage. Hyperparasitism did not prevent the occurrence of superparasitism, but only one D. basalis egg from a hyperparasitized D. basalis L5 larva reached the adult stage due to the solitary behaviour of the D. basalis larvae. Under these experimental conditions, 60.78% of the D. basalis adults emerging from larvae were miniaturized due to the depletion of host resources.
Key words | Acknowledgements
I would like to pay my last respects to Professor Vincent Labeyrie, who died 8th September 2008. With the support of the CNRS, he created the first Experimental Ecology laboratory in Tours (France) and taught me about the ecological problems arising from the population dynamics of phytophagous and entomophagous insects in agrosystems. I would like to thank Elizabeth Yates (Inter-connect) for correcting the English text. | CC BY | no | 2022-01-12 16:13:43 | J Insect Sci. 2010 Jul 9; 10:101 | oa_package/4c/14/PMC3016815.tar.gz
||
PMC3016842 | 20673188 | Introduction
Heat shock proteins (Hsp) are a family of proteins that help organisms modulate the stress response and protect them from environmentally induced cellular damage. They usually act as molecular chaperones, promoting correct refolding and preventing aggregation of denatured proteins ( Johnston et al. 1998 ; Feder and Hofmann 1999 ). On the basis of molecular weight and homology of amino acid sequences, Hsp can be divided into several families, including Hsp90, 70, 60, 40 and small Hsp ( Feder and Hofmann 1999 ; Sørensen et al. 2003 ).
Hsp60 is mostly located in the mitochondria of eukaryotic cells ( Gatenby et al. 1991 ). Under normal conditions, Hsp60 mediates the folding and assembly of enzymes and other protein complexes related to energy metabolism; under adverse environmental conditions the synthesis of Hsp60 increases, and the protein then renatures damaged proteins to restore their biological activity ( Buchner et al. 1991 ; Cheng et al. 1989 ; Ostermann et al. 1989 ; Martin et al. 1991 ). As a molecular chaperone, Hsp60 helps protect against protein aggregation ( Sanders et al. 1992 ) and assists in the transport of proteins from the cytoplasm to organelles ( Fink 1999 ).
The molecular response to thermal stress has been studied extensively in Drosophila melanogaster, revealing distinct responses of each Hsp at the transcriptional and translational levels ( Hoffman et al. 2003 ; Sørensen et al. 2007 ). Recently, research has been extended to several other insect species that are important in agricultural, medical and industrial fields ( Yocum 2001 ; Chen et al. 2005 , 2006 ; Sonoda et al. 2006 ; Huang and Kang 2007 ; Wang et al. 2007 ; Kim et al. 2008 ). The stem borer, Chilo suppressalis (Walker) (Lepidoptera: Pyralidae), is one of the most serious pests of rice. This pest is widely distributed in the rice fields of China, and is constantly adapting to its environment. In the present study, the full-length cDNA of C. suppressalis hsp60 was cloned. In addition, the expression profiles of Hsp60 in larval haemocytes of C. suppressalis at the mRNA and protein levels were analysed using real-time quantitative PCR and flow cytometry across temperature gradients from 31 to 39°C. | Materials and Methods
Insect rearing and isolation of haemocytes
The larvae of C. suppressalis were initially collected from paddy fields in the suburbs of Yangzhou City, China. The rice stem borers were reared using the method described by Shang et al. ( 1979 ) at 28 ± 1°C, 16:8 L:D and RH >80%. Hatched larvae were fed on rice seedlings until they reached the fifth stadium.
Larvae from each experimental group were washed twice with distilled water and anaesthetised by chilling on ice. A proleg was then cut off, and haemolymph was collected directly into 1 ml Eppendorf tubes containing 200 μl cold PBS. The haemocytes were separated from the haemolymph by centrifugation for 5 min at 800 g at 4°C. Sedimented haemocytes were washed twice with PBS and used for RNA extraction.
RNA extraction and cDNA synthesis
Total RNA from the haemocytes was isolated using TRIzol reagent (Sangon, www.sangon.com ). Single-stranded cDNA was synthesized from 1 μg of RNA with the MBI RevertAid First Strand cDNA Synthesis Kit (MBI Fermentas, www.fermentas.com ) according to the manufacturer's instructions. Single-stranded cDNA for 3′-rapid amplification of cDNA ends (3′-RACE) and 5′-RACE experiments was synthesized from 1 μg of RNA using the TaKaRa RACE cDNA Amplification Kit (TaKaRa, www.takarabio.co.jp ) according to the manufacturer's instructions.
Degenerate PCR for isolation of hsp60 fragments
The partial clones of hsp60 from C. suppressalis were amplified by PCR using the primer set P1, P2 ( Table 1 ). The primer pair was designed from consensus amino acid sequences of insect Hsp60s. PCR was carried out using 2 μl cDNA, 15 pmol of each primer, 10 nmol of each deoxynucleoside triphosphate (dNTP), and 1 unit of Taq DNA polymerase (TaKaRa) in the supplied buffer, giving a final concentration of 2.0 mM MgCl 2 in 25 μl. Cycle conditions were as follows: initial denaturation at 94°C for 4 min; 30 cycles of 94°C for 40 s, 55°C for 40 s, and 72°C for 1 min; final extension at 72°C for 10 min. Amplification products were purified from 1% agarose gels using a gel extraction kit (BioTeke, www.biotek.com ). The purified fragment was cloned into the pMD-18T vector (TaKaRa) and sequenced.
3′-RACE PCR
Two semi-nested 3′-RACE reactions were conducted on 2 μl of the C. suppressalis 3′-RACE-ready cDNA. In the first reaction, a sense gene-specific primer designed from the P1 and P2 PCR product sequence ( hsp60 3′-RACE outer primer; Table 1 ) and an antisense 3′-RACE outer primer ( Table 1 ; TaKaRa) were used. The 3′-RACE inner PCR was performed using the hsp60 3′-RACE inner primer ( Table 1 ) and the antisense 3′-RACE inner primer ( Table 1 ; TaKaRa). The 50 μl amplification mix was prepared according to the TaKaRa cDNA protocol using the La Taq polymerase mix (TaKaRa). The outer and inner PCRs were performed using the following conditions: 3 min at 94°C, followed by 25 cycles of 40 s at 94°C, 40 s at 55°C, and 2 min at 72°C, finishing with chain extension at 72°C for 10 min.
5′-RACE PCR
Two semi-nested 5′-RACE reactions were conducted on 2 μl of C. suppressalis 5′-RACE-ready cDNA. In the first reaction, a sense gene-specific primer designed from the sequences obtained following the 3′-RACE outer and inner PCR amplifications of hsp60 ( hsp60 5′-RACE outer primer; Table 1 ) and an antisense 5′-RACE outer primer ( Table 1 ; TaKaRa) were used. The 5′-RACE inner PCR was performed using the hsp60 5′-RACE inner primer ( Table 1 ) and the antisense 5′-RACE inner primer ( Table 1 ; TaKaRa). PCRs were performed as described for the 3′-RACE PCR.
Cloning and sequencing
After gel extraction, the 3′- and 5′-RACE fragments were cloned into a pMD-18T vector (TaKaRa). Recombinant plasmids were isolated using the Plasmid Mini kit (BioTeke) and sequenced. The full-length sequence assembled from the RACE fragments was verified by sequencing the fragment amplified with primers P3 and P4 (located in the 5′UTR and 3′UTR) ( Table 1 ) and subjected to homology analysis.
Amino acid sequence comparisons and phylogenetic analysis
Database searches were performed with the BLAST program (NCBI-BLAST network server). The open reading frame was identified using ORF Finder ( http://www.ncbi.nlm.nih.gov/gorf/gorf.html ) and the amino acid molecular weight was calculated by the SWISS-PROT (ExPASy server) program ‘Compute pI/Mw’ ( http://au.expasy.org/ ). Sequence alignment and homology analysis was performed using Clustal X. A phylogenetic tree (neighbor-joining method) was then constructed with 1000 bootstrap replicates using MEGA version 3.1 based on the deduced amino acid sequence of C. suppressalis Hsp60 as well as the known sequences of fourteen other insect species.
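The study itself used Clustal X and MEGA 3.1; purely as an illustrative equivalent, a neighbor-joining topology can also be obtained with Biopython (the alignment file name below is hypothetical, and the 1000 bootstrap replicates are not reproduced in this sketch):

```python
# Neighbor-joining tree from a pre-computed protein alignment (sketch only).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("hsp60_alignment.fasta", "fasta")  # hypothetical file
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbor-joining method
Phylo.draw_ascii(tree)  # quick text rendering of the topology
```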
Real-time quantitative PCR
Fifteen fifth-instar larvae in three replicates were randomly selected from each experimental group and exposed to 31, 33, 36 or 39°C for 2 h with 28°C as control. Each treatment was repeated three times.
The haemocytes from control and treated groups were collected as above. The collected haemocytes were immediately used for RNA extraction, and cDNA was synthesized according to the methods described above. The 18S rRNA gene of C. suppressalis was cloned (GQ265912) and used as the housekeeping gene. Based on the cDNA sequences of C. suppressalis hsp60 and the 18S rRNA gene, primer pairs ( hsp60 sense / hsp60 antisense and 18S sense / 18S antisense) for real-time PCR were designed ( Table 1 ). The PCR reactions were performed in a 20 μl total reaction volume including 10 μl of 2 × SYBR® Premix EX TaqTM master mix (TaKaRa), 5 μM each of the primers ( Table 1 ) and 1 μl cDNA template. Reactions were run on the ABI Prism 7000 Sequence Detection System (Applied Biosystems, www.appliedbiosystems.com ). The thermal cycler parameters were as follows: 10 s at 95°C, then 40 cycles of 5 s at 95°C, 20 s at 55°C and 20 s at 72°C.
For each amplification, the reaction was carried out in three replicates, from which mean threshold cycle (CT) values and standard deviations were calculated. Relative amounts of C. suppressalis hsp60 cDNA transcripts were calculated using the 2^−ΔΔCT method ( Livak and Schmittgen 2001 ).
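For reference, the 2^−ΔΔCT calculation of Livak and Schmittgen (2001) reduces to a few lines; the CT values in the example below are invented for illustration, not measurements from this study.

```python
# Relative expression by the 2^-ddCT (Livak) method.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to 18S rRNA
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)  # fold change versus the control

# Hypothetical CTs: hsp60 gains ~2.5 cycles on 18S after heat shock.
print(relative_expression(20.5, 14.0, 24.0, 15.0))  # ~5.66-fold induction
```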
Flow cytometric determination of Hsp60 levels
The larval treatment was carried out as described above. Each temperature treatment was repeated three times. The haemocytes were harvested as described above. Hsp60 levels in larval haemocytes were investigated following the method of Shen and Zhou ( 2002 ).
The collected haemocytes were fixed in 4% paraformaldehyde and used for analysis of Hsp60 levels. Each sample was divided into two groups, one for positive treatment and the other as a negative baseline control. Positive treatment was performed as follows: the fixed haemocytes were centrifuged for 5 min at 800 g and washed twice with PBS. Subsequently, the haemocytes were permeabilized in PBS with Triton X-100 and incubated with a rabbit anti-Hsp60 polyclonal antibody (Boster, www.immunoleader.com/ ) (1:200) for 30 min at 18°C, then centrifuged for 5 min at 800 g and washed twice with PBS. Following treatment with the primary antibody, the haemocytes were incubated for 30 min at 18°C with a FITC-conjugated goat anti-rabbit secondary antibody (Boster) (1:300), then centrifuged for 5 min at 800 g and washed twice with PBS. Finally, the haemocytes were resuspended in PBS for analysis. The negative baseline control procedure was similar to the positive treatment except that the primary antibody incubation was omitted.
Cellular fluorescence, reflecting cellular Hsp60 levels, was determined at 525 nm using flow cytometry (Becton Dickinson, www.bd.com ). For each group, approximately 1000 cells were analysed. Subsequently, the average fluorescence values of the positive treatment were divided by those of the negative baseline control, and the resulting fold values were used as the relative Hsp60 levels for each sample.
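A small sketch of that ratio, with invented per-cell readings standing in for the 525 nm fluorescence data:

```python
import numpy as np

# Hypothetical per-cell fluorescence for ~1000 cells per group.
rng = np.random.default_rng(0)
positive = rng.normal(180.0, 30.0, 1000)  # primary + secondary antibody
baseline = rng.normal(100.0, 25.0, 1000)  # secondary antibody only

relative_hsp60 = positive.mean() / baseline.mean()  # fold over baseline
print(f"relative Hsp60 level: {relative_hsp60:.2f}-fold")
```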
Statistical analysis
Data were expressed as mean values ± S.D. based on three separate experiments. Statistical analysis was carried out by Student's t-test. The asterisks denote statistical significance when compared with the control: *p < 0.05; **p < 0.01. | Results
Sequence analysis
Degenerate primers based on a conserved region of hsp60 were used to amplify the cDNAs derived from the haemocytes of C. suppressalis. A PCR product of 596 bp was obtained. This 596 bp fragment was highly homologous to hsp60 and was used to obtain the 5′- and 3′-flanking sequences by RACE. 5′- and 3′-RACE of hsp60 produced fragments of 456 and 1142 bp, respectively. After assembly of these sequences, a 2142 bp full-length cDNA sequence was obtained ( Figure 1 ). This sequence contained a 158 bp 5′UTR, a 265 bp 3′UTR with a canonical polyadenylation signal sequence (AATAAA) and a poly (A) tail, as well as a single 1719 bp open reading frame (ORF) encoding a polypeptide of 572 amino acids with a molecular mass of 61014 Da and a pI of 5.69 (GenBank accession number: GQ265913).
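The reported size, mass and pI can be recomputed from the deduced protein; a hedged sketch with Biopython follows, where the sequence shown is only a truncated stand-in for the real 572-residue protein (GenBank GQ265913):

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Truncated stand-in; the full deduced sequence yields 572 aa, 61014 Da, pI 5.69.
hsp60_protein = "MAKDVKFGNDARVKMLRGVNVLADAVKVTLGPKGRNVVLDKSFGAP"
analysis = ProteinAnalysis(hsp60_protein)

print(len(hsp60_protein))                      # residue count
print(round(analysis.molecular_weight(), 1))   # molecular mass in Da
print(round(analysis.isoelectric_point(), 2))  # theoretical pI
```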
The deduced amino acid sequence of Hsp60 of C. suppressalis was highly similar to that of other insects including: Culicoides variipennis (86%), Pteromalus puparum and Nasonia vitripennis (85%), Anopheles gambiae, Apis mellifera and Culex quinquefasciatus (83%), Drosophila melanogaster , Liriomyza sativae , Lucilia cuprina and Aedes aegypti (82%), Liriomyza huidobrensis (81%), Tribolium castaneum (79%), Myzus persicae (78%), Acyrthosiphon pisum (77%). Based on the amino acid sequences of Hsp60, a phylogenetic tree was constructed using Clustal X and MEGA 3.1 ( Figure 2 ).
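Pairwise identity of the kind quoted above is simply the fraction of matching residues over aligned (non-gap) positions; a minimal sketch with placeholder sequences:

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over aligned, non-gap columns of two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be pre-aligned"
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

print(percent_identity("GGMDAKE-V", "GGMEAKEQV"))  # 87.5 for these toy strings
```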
Real-time quantitative analysis of hsp60 expression
To determine whether hsp60 responds to heat treatments, fifth-instar larvae of C. suppressalis were kept at a target temperature (31, 33, 36 or 39°C) for 2 h, and expression levels were compared with those observed in non-heat-treated individuals at 28°C. As shown in Figure 3 , hsp60 mRNA was expressed at extremely low levels in the untreated groups (28°C). Under thermal stress, the levels of hsp60 mRNA were found to vary. A significant induction of hsp60 was shown at 33, 36 and 39°C, reaching 5.61-, 9.13- and 8.67-fold versus the control, respectively, and there was no significant difference in hsp60 expression from 28 to 31°C. Interestingly, the level of hsp60 reached a maximum at 36°C and then dropped at 39°C.
Hsp60 verification at protein levels
To study the heat induction of Hsp60 at the translational level, the expression of Hsp60 was determined using a flow cytometer. Figure 4 shows that thermal stress significantly elevated Hsp60 synthesis in larval haemocytes. Compared with the control (28°C), the relative levels of Hsp60 increased to 1.40-, 1.47-, 1.88- and 1.74-fold at 31, 33, 36, and 39°C, respectively. These results revealed that the expression profiles of Hsp60 at the mRNA and protein levels are in close agreement with each other from 33 to 39°C. | Discussion
Using a combination of RT-PCR and RACE techniques, the full-length hsp60 cDNA was cloned from haemocytes of C. suppressalis . The C-terminal repeats (GGM)n, which are characteristic of mitochondrial Hsp60 ( Tsugeki et al. 1992 ), are present in C. suppressalis Hsp60, indicating that the isolated gene is a mitochondrial hsp60 . In addition, the lengths of the cDNA and the ORF, as well as the predicted protein size, are similar to those of other Hsp60s. An ATP-binding motif, which is highly conserved ( Wong et al. 2004 ), was found in the deduced Hsp60 amino acid sequence. The great similarity in this region may indicate that, among Hsp60s, the mechanism of coupling ATP hydrolysis to the substrate-refolding process is similar. Using the BLASTX program of the NCBI website, the deduced amino acid sequence of Hsp60 of C. suppressalis showed high identity and similarity with known Hsp60s of other insect species (more than 77% similarity in all the matches). A phylogenetic tree was constructed based on the full amino acid sequences of Hsp60 of the fifteen insect species in this study.
To find the temperatures for maximal induction of hsp60 expression, the relative mRNA levels of hsp60 in larval haemocytes of C. suppressalis were quantified by real-time quantitative PCR at temperatures from 31 to 39°C. It was found that hsp60 mRNA in larval haemocytes was expressed at extremely low levels in the control groups (under normal conditions). The levels of hsp60 mRNA were found to vary under heat stress. The results revealed that the hsp60 gene in larval haemocytes was significantly up-regulated with increasing temperatures, reached a maximum after 2 h of exposure to a 36°C heat shock and then dropped at 39°C. After exposure at 39°C for 2 h, hsp60 mRNA levels exhibited a reduction in expression, indicating that transcription had decreased. Huang and Kang ( 2007 ) observed a similar response in the induction of heat shock in L. sativae hsp60 using real-time quantitative PCR methods, and the expression of L. sativae hsp60 was inhibited when temperatures were higher than 42.5°C for 1 h, which exceeds the tolerance limit of L. sativae . However, in the majority of previous studies, induction of hsp60 expression by stress factors was virtually undetectable. For instance, the expression levels of mitochondrial hsp60 are not influenced by heat or cold in Trichinella spiralis ( Wong et al. 2004 ). Furthermore, the expression of hsp60 does not respond to various types of stresses, such as H2O2 ( Martinez et al. 2002 ) or acidic and oxidative stress ( Wong et al. 2004 ). In those studies, mRNA expression was monitored using Northern blotting or semi-quantitative RT-PCR, which are less sensitive than real-time quantitative PCR.
Hsp60 is known to function as a molecular chaperone in many species and is absolutely essential for the proper functioning of cells under normal and stress conditions ( Lindquist 1986 ; Hemmingsen et al. 1988 ; Goloubinoff et al. 1989 ; Hartl 1996 ). In this study, we detected up-regulation of Hsp60 in haemocytes of C. suppressalis in adaptation to thermal stress. Hsp60 protein in haemocytes was found to be increased at 31°C, whereas hsp60 mRNA was not. The results indicate that the thermal responses of Hsp60 in the haemocytes at the mRNA and protein levels are in close agreement with each other from 33 to 39°C. Our findings also agree with those of Wheelock et al. ( 1999 ) for B. plicatilis, in which the Hsp60 response increased up to 3–4-fold upon heat exposure. Rios-Arana et al. ( 2005 ) also reported that Hsp60 was induced 2–4-fold in P. patulus exposed to heat. Other arthropod studies have compared expression at the gene and protein levels. For example, protein levels of Hsp70 followed thermotolerance and reached the highest levels 49 h after heat hardening in adult Orchesella cincta , while the expression of hsp70 messenger RNA reached a peak within the first 2 h and then sharply decreased after 6 h ( Bahrndorff et al. 2009 ).
Mitochondria are essential eukaryotic organelles that serve as a site for many vital metabolic pathways and supply the cell with oxidative energy. Hsp60 plays a central role in the folding of newly imported and stress-denatured proteins ( Martin et al. 1992 ; Martinus et al. 1995 ). Accordingly, it has been demonstrated that yeast cells containing mutated mt-Hsp60 do not grow at elevated temperatures ( Cheng et al. 1989 ; Dubaquie et al. 1998 ) and show irreversible aggregation of a large number of newly imported proteins ( Dubaquie et al. 1998 ). The higher level of Hsp60 expression induced by heat stress strengthens the idea that this protein has a significant role in adaptation to various environmental conditions.
Some studies indicated that the induction of Hsp60 expression is tissue-specific. Lakhotia and Singh ( 1996 ) reported heat-induced expression of Hsp60 in D. melanogaster larval Malpighian tubules following heat shock. A tissue-specific variation in the heat-induced expression of Hsp60 was also reported in the grasshopper ( Spathosternum prasiniferum ), the cockroach ( Periplanata americana ) and the gram pest ( Heliothis armigera ) ( Singh and Lakhotia 2000 ). The level of Hsp60 in L. cuprina was significantly enhanced upon heat shock in some tissues ( Sunita et al. 2006 ). Moreover, over-expression of Hsp may cause some negative effects on growth, development, survival and fecundity ( Krebs and Feder 1997 ; Krebs and Feder 1998 ; Huang et al. 2007 ), suggesting that the expression of Hsp may relate to physiological processes ( Huang et al. 2007 ). Interestingly, in the present study the expression levels of Hsp60 in the haemocytes of C. suppressalis reached a maximum at 36°C and then declined at 39°C. Such a drop in Hsp60 may be due to the approach of a threshold limit in the cell. A similar scenario was observed in an earlier study in which Hsp60 synthesis in L. huidobrensis was found to increase to a maximum at 42.5°C and then dropped when heat stress was enhanced ( Huang and Kang 2007 ). Kristensen et al. ( 2002 ) also reported that inbred larvae of Drosophila buzzatii expressed more Hsp70 at high temperatures, except at very high temperatures close to the physiological limit. This further supports the fact that Hsp plays a major protective role against cellular damage during high temperature exposure and yet may not be able to protect the cells beyond the threshold limit. | Associate Editor : Megha Parajulee was editor of this paper.
Heat shock protein 60 is an important chaperonin. In this paper, hsp60 of the stem borer, Chilo suppressalis (Walker) (Lepidoptera: Pyralidae), was cloned by RT-PCR and rapid amplification of cDNA ends (RACE) reactions. The full-length cDNA of hsp60 consisted of 2142 bp, with an ORF of 1719 bp encoding 572 amino acid residues, a 5'UTR of 158 bp and a 3'UTR of 265 bp. Cluster analysis confirmed that the deduced amino acid sequence shared high identity with the reported sequences from other insects (77%–86%). To investigate whether hsp60 in C. suppressalis responds to thermal stress, the expression levels of hsp60 mRNA in larval haemocytes across temperature gradients from 31 to 39°C were analysed by real-time quantitative PCR. There was no significant difference in hsp60 expression from 28 to 31°C. The temperature for maximal induction of hsp60 expression in haemocytes was close to 36°C. Hsp60 protein expression was examined using flow cytometry. These results revealed that thermal stress significantly induced hsp60 expression and Hsp60 synthesis in larval haemocytes, and the expression profiles of Hsp60 at the mRNA and protein levels were in close agreement with each other from 33 to 39°C.
Keywords | Acknowledgments
This work was supported by the National Basic Research and Development Program of China (2006CB102002).
Abbreviations
Hsp, heat shock protein | CC BY | no | 2022-01-12 16:13:43 | J Insect Sci. 2010 Jul 9; 10:100 | oa_package/64/f9/PMC3016842.tar.gz
||
PMC3016857 | 21073345 | Introduction
In recent years, the diamondback moth, Plutella xylostella (Lepidoptera: Plutellidae), has become the most destructive insect pest of cruciferous plants throughout the world, and the annual cost of its management is estimated at US $1 billion ( Talekar 1993 ). To develop management solutions, differentially expressed genes from this insect should be identified, cloned, and studied. In this regard, the study of the chemosensory proteins of P. xylostella will be helpful in providing critical information about its behavioral characteristics and related physiological processes.
Insect chemosensory proteins (CSPs) and odorant-binding proteins (OBPs) are believed to be involved in chemical communication and perception, and these two soluble protein types belong to different classes. OBPs are approximately 150 amino acid residues long, with six highly conserved cysteines paired to form three disulfide bridges. It has been experimentally demonstrated that OBPs are involved in the binding of pheromones and odorant molecules ( Vogt 1981 ; Kruse 2003 ; Andronopoulou 2006 ). CSPs are small proteins of about 110 amino acids that contain four cysteines forming two disulfide bridges ( McKenna 1994 ; Pikielny 1994 ; Jansen 2007 ). In contrast to OBPs, which are reported specifically in olfactory sensilla ( Vogt and Riddiford 1981 ; Steinbrecht 1998 ), CSPs are expressed more broadly in various insect tissues such as the antennae, head, thorax, legs, wings, epithelium, testes, ovaries, pheromone glands, wing disks, and compound eyes, suggesting that CSPs are crucial for multiple physiological functions of insects ( Gong 2007 ). Likewise, the study of gene expression across developmental stages can reveal the extent to which these genes are active in the physiology of each stage.
In the last two decades, insect chemosensory proteins have been studied extensively for their structural properties, physiological functions, affinity for small molecular ligands, expression patterns, and subcellular localization, but little research has been reported on the 5′-regulatory sequences of chemosensory protein genes. In this study, the full-length cDNAs of two chemosensory protein genes ( Pxyl-CSP3 and Pxyl-CSP4 ) in P. xylostella were cloned using rapid amplification of cDNA ends (RACE). This was followed by the genome walking method to obtain the 5′-upstream regulatory sequences of Pxyl-CSP3 and Pxyl-CSP4 . The results revealed not only the core promoter sequences (TATA-box), but also several transcriptional elements (BR-C Z4, Hb, Dfd, CF2-II, etc.). | Materials and Methods
Insects
P. xylostella pupae were collected from an insecticide-free cabbage field and taken to the laboratory for rearing. Larvae were allowed to feed on cabbage leaves in the insect growth room with conditions set at 25 ± 1° C, 16:8 L:D, and 70–85% RH until pupation.
RNA preparation and synthesis of first-strand cDNA
Total RNA was extracted from adults of P. xylostella using the Trizol reagent (Invitrogen, www.invitrogen.com ) according to the protocol provided by the manufacturer. First-strand cDNA was synthesized from the total RNA with AMV reverse transcriptase and an oligo(dT)18 primer (TaKaRa, www.takara-bio.com ). 5′- and 3′-RACE-ready cDNA were prepared according to the instructions of the GeneRacer Kit protocol (Catalog #: L1500-01, Invitrogen).
Cloning of Pxyl-CSP3 and Pxyl-CSP4
Two degenerate primers were designed by alignment of published CSP-like transcripts from distantly related species. The 3′ RACE forward primers for Pxyl-CSP3 and Pxyl-CSP4 are 5′-(C/T)AC(A/G)GA(T/C)AA(A/G)CA(C/G)GAA(A/G)C(C/A)(A/T)GCCGTGA-3′ and 5′-GAA(A/G)ACCA(C/T)C(C/T)GCGGCAA(G/C/A)TGCA-3′, respectively, and oligo(dT)18 was used as the reverse primer. The PCR reaction was performed with the following conditions: one cycle (94° C, 2 min); 35 cycles (94° C, 1 min; 55° C, 1 min; 72° C, 1 min); and a final cycle of 72° C for 10 min. The PCR product was then cloned into a pMD-20-T vector (TaKaRa), and positive clones were sequenced.
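As an aside for readers reproducing such designs computationally, the slash notation above can be expanded into the concrete oligonucleotides a degenerate primer encodes. The short Python sketch below does this for the Pxyl-CSP4 forward primer (wrapping spaces removed); it is an illustration, not part of the original protocol.

from itertools import product

def expand_degenerate(primer):
    # Expand "(A/B)" slash notation into every concrete oligo it encodes.
    options, i = [], 0
    while i < len(primer):
        if primer[i] == "(":
            j = primer.index(")", i)
            options.append(primer[i + 1:j].split("/"))
            i = j + 1
        else:
            options.append([primer[i]])
            i += 1
    return ["".join(p) for p in product(*options)]

csp4_fwd = "GAA(A/G)ACCA(C/T)C(C/T)GCGGCAA(G/C/A)TGCA"
variants = expand_degenerate(csp4_fwd)
print(len(variants))  # degeneracy: 2 * 2 * 2 * 3 = 24 oligos
print(variants[0])    # GAAAACCACCCGCGGCAAGTGCA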
Based on the CSP-like transcript fragments amplified from P. xylostella with the 3′ RACE degenerate primers, 5′-RACE specific nested primers were designed and used to amplify the full-length cDNAs of Pxyl-CSP3 and Pxyl-CSP4. The 5′-RACE primer and 5′-RACE nested primer for Pxyl-CSP3 are 5′-CCTCCACTCCGCGGGCTTGTGGTTGAT-3′ and 5′-TACGCCTTGACAGCGCGCAGTTGGTCC-3′, respectively. The 5′-RACE primer and 5′-RACE nested primer for Pxyl-CSP4 are 5′-CTTGGCGAAGGAGTCCTTGTACTCTCC-3′ and 5′-TCAGAAGATGTCATCTAAGTTC-3′, respectively. The first PCR conditions were as follows: one cycle (94° C, 2 min); 5 cycles (94° C, 30 s; 72° C, 1 min); 5 cycles (94° C, 30 s; 72° C, 1 min); 25 cycles (94° C, 30 s; 66° C, 30 s; 70° C, 1 min); and a final cycle of 72° C for 10 min. The full-length cDNAs of Pxyl-CSP3 and Pxyl-CSP4 were obtained by overlapping the two cDNA fragments.
Genomic DNA isolation and DNA sequence amplification
Genomic DNA was extracted from P. xylostella according to the instructions of the TIANamp Genomic DNA kit protocol (Tiangen, www.tiangen.com ). Genomic DNA was dissolved in ddH 2 O, and agarose gel electrophoresis was carried out to determine its quality; it appeared as a single band. Specific primers were designed to amplify the genomic DNA corresponding to the cDNA coding regions of Pxyl-CSP3 and Pxyl-CSP4. To clone the genomic sequence of Pxyl-CSP3 , the sense primer was 5′-ATGAACTCCTTGGTACTAGTATGCCTTG-3′, and the antisense primer was 5′-TACGCCTTGACAGCGCGCAGTTGGTCC-3′. For Pxyl-CSP4 , the sense primer was 5′-ATGCAGACCGTGACTCTCCTATGCCTGT-3′, and the antisense primer was 5′-TTAATCAGATCCTTCGAGGAACTTGGCG-3′. The PCR reaction was performed with the following conditions: one cycle (94° C, 2 min); 35 cycles (94° C, 30 s; 68° C, 45 s; 72° C, 1 min); and a final cycle of 72° C for 10 min. The amplified DNA was sequenced.
Isolation of the genomic 5′-upstream regions of Pxyl-CSP3 and Pxyl-CSP4
Genomic DNA of P. xylostella was prepared as above. To obtain the 5′-upstream regulatory sequences of the chemosensory protein genes, the genome walking approach was performed according to the instructions of the kit (TaKaRa). The PCR principle of the genome walking approach is thermal asymmetric interlaced PCR (TAIL-PCR). Specific reverse primers were designed according to the 5′-terminal nucleotide sequences of Pxyl-CSP3 and Pxyl-CSP4 ( Table 1 ), and the forward primers were provided with the kit. The conditions for the PCR reaction were set according to the kit's instructions. The PCR fragments obtained through the genome walking approach were detected using 1.5% agarose gel electrophoresis and purified for sequencing using the SP3 specific primer.
RT-PCR analysis
RT-PCR was used to measure gene expression at different developmental stages. The cDNA samples from male and female adults, from all stages of larvae and from pre-pupae and pupae, were prepared using the plant RNA kit (Catalog #: R6827, Omega, www.omega.com ) and reverse transcriptase AMV (TaKaRa).
The gene-specific primers were designed from the cDNA sequences of Pxyl-CSP3 and Pxyl-CSP4 , named CSP3-sqPCR and CSP4-sqPCR, respectively ( Table 1 ). The 18S rRNA gene of P. xylostella was used as the reference with the following primers: 18S-F: 5′-CCGATTGAATGATTTAGTGAGGTCTT-3′; 18S-R: 5′-TCCCCTACGGAAACCTTGTTACGACTT-3′. The cDNA (1–2 μl) was used for amplification, and the final volume of the reaction mixture was 50 μl. The PCR amplification was performed using the following thermal cycle conditions: one cycle (94° C, 2 min); 27 cycles (94° C, 30 s; 60° C, 45 s; 72° C, 1 min); and a final cycle of 72° C for 10 min. PCR products were detected by 1.5% agarose gel electrophoresis.
Bioinformatics analysis
Amino acid sequences of CSPs ( n = 27) were retrieved from an NCBI protein search using the keywords “chemosensory protein” and “lepidopteran”. Molecular mass and isoelectric point were predicted using ExPASy ( http://www.expasy.ch/ ). Multiple sequence alignment was carried out with the online service at http://bioinfo.genotoul.fr/multalin/multalin.html ( Corpet 1988 ). Promoter prediction and characterization were carried out using the Neural Network Promoter Prediction (NNPP) server ( http://www.fruitfly.org/seq_tools/promoter.html ) ( Reese 2001 ). Sequence analysis seeking transcriptional regulation response elements was carried out with TFSEARCH ( http://www.cbrc.jp/research/db/TFSEARCH.html ) ( Heinemeyer 1998 ). The signal peptide was predicted using SignalP 3.0 ( Nielsen 1997 ) at http://www.cbs.dtu.dk/services/SignalP-3.0/ . The phylogenetic tree was constructed in MEGA 3.0 ( Kumar 2004 ) with the neighbour-joining method and 1000 bootstrap replicates.
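For readers who prefer a scriptable route, the mass and isoelectric point predictions made with ExPASy above can be approximated locally with Biopython's ProtParam module. The sketch below uses a short hypothetical placeholder sequence, not the actual deduced Pxyl-CSP3 protein.

from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence for illustration only (standard residues).
deduced_protein = "MNSLVLVCLLAVAVAYTTKYDNIDLDEILKSDRLLKNYINCLL"
pa = ProteinAnalysis(deduced_protein)
print("molecular mass: %.1f Da" % pa.molecular_weight())
print("isoelectric point: %.2f" % pa.isoelectric_point())

| Results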
Gene cloning of Pxyl-CSP3 and Pxyl-CSP4
A 526 bp cDNA of Pxyl-CSP3 ( Figure 1B ) was obtained by RACE-PCR using the degenerate primers. The cDNA included a 62 bp 5′ untranslated region (UTR), a 108 bp 3′ UTR with an AATAAA box and a 25 bp poly(A) tail, and a 381 bp open reading frame (ORF) encoding 126 amino acids. It exhibited significant similarity to CSP5 of Bombyx mori (59%), CSP3 of Bombyx mandarina (58%), and CSP3 of Mamestra brassicae (58%), as revealed by BLAST database searches. The deduced protein has a computed molecular mass of 14.1 kDa and a predicted isoelectric point of 8.79.
An 864 bp cDNA of Pxyl-CSP4 ( Figure 2 ) was obtained by RACE-PCR using the degenerate primers. The cDNA included a 54 bp 5′ untranslated region (UTR), a 429 bp 3′ UTR with an AATAAA box and a 23 bp poly(A) tail, and a 381 bp open reading frame (ORF) encoding 126 amino acids. It exhibited significant similarity to CSP6 of Papilio xuthus (68%), CSP8 of Bombyx mori (52%), and CSP4 of Choristoneura fumiferana (46%), as revealed by BLAST database searches. The deduced protein has a computed molecular mass of 14.0 kDa and a predicted isoelectric point of 8.25.
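The sequence bookkeeping behind these figures is easy to verify; a minimal Python check using the Pxyl-CSP4 numbers reported above (the ORF length includes the stop codon, so the encoded protein has one residue fewer than length/3):

utr5, orf, utr3 = 54, 381, 429
assert utr5 + orf + utr3 == 864           # full-length cDNA (bp)
assert orf % 3 == 0                       # ORF is a whole number of codons
print("encoded residues:", orf // 3 - 1)  # 126 amino acids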
Genomic characterization of Pxyl-CSP3 and Pxyl-CSP4
PCR amplification of genomic DNA with primers designed to match the cDNAs of Pxyl-CSP3 and Pxyl-CSP4 resulted in products of about 1452 bp and 1268 bp, respectively. Comparison of the genomic and cDNA sequences showed that Pxyl-CSP3 and Pxyl-CSP4 each contained one intron; the introns began with GT, ended with AG, and were 926 bp and 404 bp long, respectively. The sequences of the exon/intron splice junctions of Pxyl-CSP3 and Pxyl-CSP4 are shown in Figure 1B and Figure 2 , respectively.
5′ upstream regulatory region analysis of Pxyl-CSP3 and Pxyl-CSP4
Using the genome walking approach, the 5′ regulatory regions of Pxyl-CSP3 and Pxyl-CSP4 were isolated; they were 2242 bp and 533 bp long, with GenBank accession numbers FJ948816 and FJ948817, respectively. Nucleotide sequence alignment of the isolated genomic sequence with the full-length Pxyl-CSP4 cDNA showed that a 264 bp nucleotide sequence was isolated from the 5′ UTR of Pxyl-CSP4 , including part of the intron sequence.
Nucleotide sequence alignment of the isolated genomic clone with the full-length Pxyl-CSP3 cDNA revealed that the 5′ UTR ( Figure 1A ) was interrupted by an intron of 323 bp and thus split into two exons of 61 and 75 bp, respectively. This intron also follows the GT-AG rule. The 1921 bp Pxyl-CSP3 5′ upstream region was analyzed to predict transcription factor binding sites using the online TFSEARCH server. The results in Figure 1A show that the 5′ upstream region of Pxyl-CSP3 included not only the core promoter sequence (TATA-box), but also several transcriptional elements (BR-C Z4, Hb, Dfd, CF2-II, etc.).
Expression profile of Pxyl-CSP3 and Pxyl-CSP4
RT-PCR was used to investigate expression at different developmental stages. The results showed that Pxyl-CSP3 and Pxyl-CSP4 have different expression patterns across the examined developmental stages. Pxyl-CSP3 ( Figure 3A ) was highly expressed in the first instar larva, second instar larva, third instar larva, fourth instar larva ♀, fifth instar larva ♂, pre-pupa ♀, and pre-pupa ♂, but no expression was detected in pupa ♀ or pupa ♂. Lower expression was observed in adult ♀ and adult ♂. In the case of Pxyl-CSP4 ( Figure 3B ), higher expression was found in the first instar larva, second instar larva, third instar larva, fourth instar larva ♀, and fifth instar larva ♂, while pre-pupa ♀, pre-pupa ♂, adult ♀, and adult ♂ showed lower expression, and no expression was found in pupa ♀ or pupa ♂.
Homology and phylogenetic analysis
The evolutionary relationships among the two P. xylostella CSPs and the 25 lepidopteran insect homologs reported so far were investigated. An unrooted neighbor-joining tree ( Figure 4 ) was constructed to represent the relationships among the selected CSPs, with one CSP of Daphnia pulex used as the outgroup. The phylogenetic analysis showed that the lepidopteran CSPs fell into three branches, with Pxyl-CSP3 and Pxyl-CSP4 belonging to different branches. This provides clues about the diversification of these proteins in this insect order.
Amino acid sequence alignment of selected lepidopteran CSPs revealed a conserved Cys spacing pattern of CX6CX18CX2C, the common spacing pattern within the CSP family. Pxyl-CSP3 and Pxyl-CSP4 share only 38% similarity. Pxyl-CSP3 showed high similarity to CSP3 of Mamestra brassicae (56%), whereas Pxyl-CSP4 showed higher similarity to a CSP of Papilio xuthus (69%), suggesting that the CSPs of P. xylostella are more similar to CSPs from other species than to each other.
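The conserved spacing pattern reported above translates directly into a regular expression, which is a convenient way to screen candidate sequences; the sketch below tests a hypothetical sequence, not an actual P. xylostella CSP.

import re

# C-X6-C-X18-C-X2-C, the conserved CSP cysteine spacing.
cys_pattern = re.compile(r"C.{6}C.{18}C.{2}C")

seq = "KDNYTTC" + "A" * 6 + "C" + "A" * 18 + "C" + "AA" + "CEDKL"  # hypothetical
print(bool(cys_pattern.search(seq)))  # True: shows the CSP-type Cys spacing

| Discussion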
Insect chemosensory proteins (CSPs) are thought to transport chemical stimuli from the air to olfactory receptors. However, CSPs are expressed in various insect tissues including non-sensory tissues, suggesting that these proteins are also vital for other physiological processes. In this study, two full-length cDNAs coding for chemosensory proteins of P. xylostella (Pxyl-CSP3 and Pxyl-CSP4) were obtained by RACE-PCR; the GenBank accession numbers are ABM92663 and ABM92664, respectively.
The majority of CSP genes in insects have an intron; only three Anopheles gambiae and four Drosophila CSP genes lack introns. The intron splice site is always located one nucleotide after a conserved lysine (Lys) codon, and its position is indicated by a dark circle ( Figure 5 ). These results are consistent with the findings of Wanner ( 2004 ), as the intron splice sites of Pxyl-CSP3 and Pxyl-CSP4 fall after the nucleotides AAA (Lys) T and AAA (Lys) C, respectively. This conserved splice site is considered a general characteristic of the CSP gene family, so it is evident that these clones belong to this family.
Insect CSP genes are not only expressed in the olfactory tissues but also in non-olfactory tissues, including the antennae, head, thorax, legs, wings, epithelium, testes, ovaries, and pheromone glands ( Gong 2007 ; Lu 2007 ).
This wide tissue expression pattern may indicate that CSPs have both olfactory and non-olfactory functions. The data here show that Pxyl-CSP3 and Pxyl-CSP4 have different expression profiles across developmental stages and that both were expressed throughout the larval stage. This suggests that Pxyl-CSP3 and Pxyl-CSP4 have important functions in the early development of P. xylostella , although their detailed physiological roles are still unknown.
CSPs are widely distributed among insect species and so far have been identified in 10 insect orders, including Lepidoptera ( Maleszka 1997 ; Robertson 1999 ; Nagnan-L Meillour 2000 ; Picimbon 2000 ), Diptera ( McKenna 1994 ; Pikielny 1994 ), Hymenoptera ( Danty 1998 ; Briand 2002 ), Orthoptera ( Angeli 1999 ), Phasmatodea ( Tuccini 1996 ), Blattoidea ( Kitabayashi 1998 ), Hemiptera ( Jacobs 2005 ), Phthiraptera ( Zhou 2006 ), Trichoptera ( Zhou 2006 ), and Coleoptera ( Zhou 2006 ). A CSP-like protein has been reported in a non-insect arthropod, the brine shrimp Artemia franciscana , suggesting that CSPs might be present across the arthropods ( Pelosi 2006 ). CSPs form a conserved protein family, and CSPs in different insect orders share common characteristics: a conserved Cys residue spacing pattern; highly conserved aromatic residues at positions 27, 85, and 98; and a novel type of α-helical structure with six helices connected by α-α loops. The data here ( Figure 5 ) match those sequence and structural characteristics, as confirmed by multiple sequence alignment. Homology and phylogenetic tree analysis indicated that the CSPs of P. xylostella are more similar to CSPs from other species than to one another, suggesting evolutionary divergence among the CSPs of P. xylostella.
Gene promoter sequence and transcription factor recognition site analyses are important for understanding regulation and feedback mechanisms in specific physiological processes. This study succeeded in isolating the 5′ regulatory region of Pxyl-CSP3 and is the first report of the 5′ upstream regulatory sequence of an insect chemosensory protein gene. The data revealed that the 5′ regulatory region of Pxyl-CSP3 contains numerous specific transcription factor binding sites, including BR-C Z4, Hb, Dfd, CF2-II, etc. The BR-C Z4 binding site appears many times in this regulatory region and may play an important role in the expression of Pxyl-CSP3. It has been reported that BR-C Z4 directly mediates the action of the steroid hormone ecdysone in Drosophila melanogaster larval metamorphosis ( Kalm 1994 ). There is no direct evidence for a role of CSPs in insect metamorphosis, but CSPs have been reported to be expressed in the pheromone gland of M. brassicae and the ejaculatory duct of D. melanogaster ( Jacquin-Joly 2001 ; Sabatier 2003 ). A recent report also showed that a CSP homologue of Agrotis segetum is upregulated in the insect pheromone-binding domain; this CSP has also been reported to be identical to a juvenile hormone binding protein ( Strandh 2008 ). These findings are in line with the transcription factor binding site analysis, as well as the high expression in the larval stage, and may implicate Pxyl-CSP3 in steroid hormone production or transport during the larval stage of this insect. The association of chemosensory proteins with insect development has been confirmed by many researchers, especially in embryonic development. For example, CSP5 of Apis mellifera is an ectodermal gene involved in embryonic integument formation ( Maleszka 2007 ). In the cockroach Periplaneta americana , the CSP p10 increases transiently during limb regeneration at the larval stages ( Kitabayashi 1998 ). The transcription factor binding sites of Hb, Dfd, and CF2-II have been shown to be involved in developmental regulation; for instance, Hb regulates gene expression in the development of the thoracic region of Drosophila embryos ( McGregor 2001 ), and CF2 may potentially regulate distinct sets of target genes during development ( Gogos 1992 ). This study will provide clues to better understand the function of CSPs in insect development. | Associate Editor: Zhijian Jake Tu was editor of this paper.
Chemosensory proteins play an important role in transporting chemical compounds to their receptors on dendrite membranes. In this study, two full-length cDNAs coding for chemosensory proteins of Plutella xylostella (Lepidoptera: Plutellidae) were obtained by RACE-PCR. Pxyl-CSP3 and Pxyl-CSP4 , with GenBank accession numbers ABM92663 and ABM92664, respectively, were cloned and sequenced. The gene sequences both consisted of three exons and two introns. RT-PCR analysis showed that Pxyl-CSP3 and Pxyl-CSP4 had different expression patterns across the examined developmental stages, but were expressed in all larval stages. Phylogenetic analysis indicated that the lepidopteran CSPs form three branches, and Pxyl-CSP3 and Pxyl-CSP4 belong to different branches. The 5′ regulatory regions of Pxyl-CSP3 and Pxyl-CSP4 were isolated and analyzed, and they contain not only the core promoter sequences (TATA-box), but also several transcriptional elements (BR-C Z4, Hb, Dfd, CF2-II, etc.). This study provides clues to better understand the various physiological functions of CSPs in P. xylostella and other insects.
Keywords | Acknowledgments
The work was supported by grants from the China National Nature Science Foundation (No. 30671387 and 30770291) and the Foundation for the Author of National Excellent Doctoral Dissertation of P. R. China (FANEDD, No. 200461).
Abbreviations
Broad-Complex Z4;
Chemosensory protein;
Zinc finger domain;
Deformed;
Hunchback;
open reading frames;
Plutella xylostella;
rapid amplification of cDNA ends;
untranslated region. | CC BY | no | 2022-01-12 16:13:44 | J Insect Sci. 2010 Sep 10; 10:143 | oa_package/e8/06/PMC3016857.tar.gz |
||
PMC3016858 | 21067415 | Introduction
Lectins are proteins or glycoproteins with the ability to selectively and reversibly bind free or conjugated saccharides via two or more binding sites ( Sharon 1993 ), and they are ubiquitous in all forms of life, including bacteria, plants, and animals ( Rhodes 1999 ). They recognize sequences of two or more saccharides with specificity towards both inter-residue glycosidic linkages and anomeric configuration, and can thus exert anti-bacterial and anti-tumor effects by recognizing glycoconjugate residues on the surface of cells ( Glatz et al. 2004 ; Simone et al. 2006 ).
Apoptosis, a type of programmed cell death, is an active process and a normal component of the development and health of multicellular organisms. The study of apoptosis is an important field of biological inquiry, since a deficiency or an excess of apoptosis is one of the causes of cancers, autoimmune disorders, diabetes, Alzheimer's disease, organ and bone marrow transplant rejection, and many other diseases. Several pathways lead to apoptosis; the p53, Bcl-2, and Bad proteins play central roles in the mitochondrial pathway. p53 is a tumor suppressor protein that in humans is encoded by the TP53 gene ( Matlashewski et al. 1984 ; Isobe et al. 1986 ; Kern et al. 1991 ). In a normal cell, p53 is inactivated by its negative regulator. Upon DNA damage or other stresses, various pathways lead to the dissociation of p53 from this regulator. Once activated, p53 induces cell cycle arrest to allow either repair and survival of the cell, or apoptosis to discard the damaged cell. As such, p53 has been described as “the guardian of the genome”. Bcl-2 family proteins regulate and contribute to programmed cell death. This is a large protein family, and all members contain at least one of four Bcl-2 homology domains. Certain members are pro-apoptotic (BAD, Bax, Bak, and Bok, among others) and others anti-apoptotic (including Bcl-2 proper, Bcl-xL, and Bcl-w).
Recently, extensive studies have revealed that a number of plant lectins can be used for the prevention and/or treatment of cancer. For example, mistletoe lectins have been shown to have therapeutically active anti-cancer effects ( Franz 1985 ). Lectins from mushroom species including Agaricus bisporus, Boletus satanus, Flammulina velutipes, and Ganoderma lucidum have antitumor activities ( Wang et al. 1998 ). In addition, some studies have shown that animal lectins, for example Drosophila C-type lectins ( Jingqun et al. 2007 ), a Sarcophaga lectin ( Itoh et al. 1986 ), and a sea urchin lectin ( Masao 2006 ), play important roles in controlling immune responses and antitumor activities in vitro and in vivo. However, relatively few studies have been conducted on lectins from Musca domestica L. (Diptera: Muscidae).
Although some studies have tested lectins for potential anti-tumor effects, the mode of action of lectins has not been elucidated in detail, and there is no report on the effect of MLL-2 purified from M. domestica larvae on human breast cancer MCF-7 cells. Thus, the aim of the present study was to evaluate the growth inhibition effects of MLL-2 purified from M. domestica larvae against MCF-7 cells and its apoptosis-inducing activity, with special emphasis on its mode of action. | Materials and Methods
Isolation and Purification
Musca domestica was obtained from the Tianjin Sanitation and Epidemic Prevention Station, Tianjin, P.R. China. Approximately 100 g of larvae were ground with a mortar at 4° C in 50 ml buffered insect saline (10 mM Tris/HCl, 130 mM NaCl, 5 mM KCl, pH 7.4) containing 1 g phenylthiocarbamide (Sigma, www.sigmaaldrich.com ). The homogenate was extracted for 30 minutes and centrifuged (8000 rpm, 20 min at 4° C); the resulting supernatant was the crude body extract.
The crude extract was mixed with 20 ml Sepharose-4B with slow stirring for at least 1.5 h at 4° C. The mixture was packed into a 1.5×22 cm column (Bio-Rad, www.bio-rad.com ) and washed with 200 ml saline at a flow rate of 1 ml min -1 until no protein was detected in the eluate by monitoring the absorbance at 280 nm. Bound proteins were then eluted with 0.2 M D-galactose.
The fractions were pooled in an ultrafiltration cell (8050, Millipore, www.waters.com ) and concentrated by ultrafiltration with a 3 kDa membrane (Polyethersulfone, Biomax PB, Millipore). Samples were freeze-dried before storage ( Cao et al. 2010 ).
Purification by HPLC
Lectin samples were applied to an HPLC column (TSK gel Super SW3000, 4.6 mm × 30 cm, Tosoh, www.tosoh.com ) at a lectin concentration of 1 mg ml -1 . Phosphate-buffered solution at pH 6.4 was used as the elution buffer at a flow rate of 1 ml min -1 , and the absorbance at 280 nm was monitored.
Haemagglutinating assay
To measure the haemagglutinating activity of the lectin, 25 μl of a serially diluted sample was mixed with 25 μl of 2.0% trypsinized rabbit red blood cells. The suspension, containing 1% (w/v) bovine serum albumin, was incubated for 1 h at 37° C in a well of a V-bottomed micro-titer plate. The haemagglutinating titer, the reciprocal of the highest dilution exhibiting haemagglutination, was defined as one haemagglutinating unit. The specific activity was calculated as the number of haemagglutinating units per mg protein ( Wang et al. 2000 ).
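The titer arithmetic described above reduces to a few lines of code; the sketch below uses hypothetical well readings (a two-fold dilution series) and a hypothetical protein amount.

dilutions = [2, 4, 8, 16, 32, 64, 128, 256]   # reciprocal dilutions
agglutinated = [True, True, True, True, True, True, False, False]

# Titer = reciprocal of the highest dilution still agglutinating.
titer = max(d for d, pos in zip(dilutions, agglutinated) if pos)  # 64
protein_mg = 0.5                               # mg protein assayed
print("specific activity: %.0f HU/mg" % (titer / protein_mg))  # 128 HU/mg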
Cell lines and culture
Human breast cancer (MCF-7) cells and human lung fibroblasts (HLF) cells were obtained from Tianjin Medical University. MCF-7 cells were cultured in RPMI-1640 medium (GIBCO, www.lifetech.com ) supplemented with 10 % heat inactivated (56° C, 30 min) fetal calf serum (Beijing Yuanheng Shengma Research Institution of Biotechnology, Beijing, China), 100 U ml -1 penicillin (GIBCO) and 100 μg ml -1 streptomycin (GIBCO). HLF cells were routinely grown in Dulbecco's Modified Eagle's Medium (GIBCO) containing 10% fetal bovine serum, 100 U ml -1 penicillin and 100 μg ml -1 streptomycin. All cells were cultivated at 37° C with 5% CO 2 .
Growth inhibition assay
Cytotoxic effects on the growth and viability of cells were determined using the tetrazolium dye (MTT) assay as previously described ( Horiuchi et al. 1988 ). Briefly, MCF-7 cells and HLF cells at a concentration of 1.5×10 5 cells ml -1 were seeded in 96-well plates (COSTAR, www.corning.com/lifesciences ). After 24 h, the cells were treated with various concentrations (6.5–250 μg ml -1 ) of MLL-1 or MLL-2 for 24, 36, and 48 h, respectively. MTT reagent (20 μl) was added to each of the 100 μl culture wells. The MTT reagent was prepared at 5 mg ml -1 in PBS, filter sterilized, and stored in the dark at 4° C for a maximum of 1 month. After incubation for 4 h at 37° C, the water-insoluble formazan dye formed was solubilized by adding 150 μl DMSO to the culture wells. The plates were shaken mildly for 10 min at room temperature, and the optical density of the wells was determined using an ELISA microplate reader at a test wavelength of 570 nm and a reference wavelength of 690 nm. Control cells were grown under the same conditions without addition of MLL-1 or MLL-2. Inhibition (%) was calculated as [(C − T)/C] × 100, where C is the average OD of the control group and T is the average OD of the MLL-1- or MLL-2-treated group.
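As a concrete illustration of the inhibition formula above, the following snippet applies it to hypothetical background-corrected OD readings (OD570 − OD690):

control_od = [0.82, 0.79, 0.85]   # untreated wells (hypothetical)
treated_od = [0.41, 0.44, 0.39]   # lectin-treated wells (hypothetical)

C = sum(control_od) / len(control_od)
T = sum(treated_od) / len(treated_od)
inhibition = (C - T) / C * 100
print("inhibition: %.1f%%" % inhibition)  # ~49.6% for these readings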
Observation of morphological changes
Acridine orange (AO) staining is a routine diagnostic technique for apoptotic cell morphology. MCF-7 cells were seeded on slides and placed in culture plates. After 24 h, the cells were treated with MLL-2 (100 μg ml -1 ) for 0, 24, 48, and 72 h, respectively. The cells were fixed with 70% ethanol, and the slides were washed three times with PBS. AO (8.5 μg ml -1 ) was added to each slide. The slides were incubated at room temperature under reduced light for 20 min. Fluorescence was detected by a fluorescence microplate reader using filter sets with excitation at 485 nm and emission at 530–640 nm ( Petit et al. 1994 ; Gallet et al. 1995 ).
Apoptosis TUNEL Assay
Terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick-end labeling (TUNEL) is an in situ method for detecting the 3′-OH ends of DNA exposed during the internucleosomal cleavage that occurs during apoptosis. Incorporation of biotinylated dUTP allows detection by immunohistochemical procedures. This technique consists of pretreatment of cells with protease and then incorporation of labeled dUTP into the DNA breaks with terminal deoxynucleotidyl transferase. Finally, the incorporated dUTP is visualized with peroxidase. The morphologic features were visualized by light microscopy. The TUNEL reaction is highly specific, and only apoptotic nuclei are stained ( Gavrieli et al. 1992 ).
Flow cytometric analysis of the cell cycle and ratio of apoptotic cells
The effect of MLL-2 on cell proliferation was evaluated by measuring the distribution of the cells across the phases of the cell cycle by flow cytometry. Cells were treated with 100 μg ml -1 MLL-2 for the indicated times and harvested by centrifugation at 1,000 rpm for 5 min at room temperature. Cell pellets were resuspended in 1 ml of cold PBS, and cells were fixed by adding cold 75% ethanol at 4° C for 18 h. Fixed cells were washed with PBS. Then 100 μl of 200 μg ml -1 DNase-free RNase was added, and the cells were incubated at 37° C for 30 min, resuspended in a staining solution containing propidium iodide (1 mg ml -1 ), and incubated at room temperature for 5–10 min in the dark. The cell suspensions were placed in 12 × 75 falcon tubes and analyzed on a fluorescence-activated cell sorter flow cytometer (Coulter Epics XL, Beckman Coulter, www.beckmancoulter.com ). Results shown are from three different experiments.
Measurement of intracellular free calcium [Ca 2+ ] i
Briefly, MCF-7 cells were plated on slides at a density of 2×10 6 cells ml -1 and incubated with 5% CO 2 at 37° C for 24 h. Cells were then stimulated with 100 μg ml -1 MLL-2 for 4, 8, or 12 h and loaded with Fluo-3/AM (5 μmol L -1 , Sigma) at 37° C under reduced light for 60 min. Intracellular free calcium [Ca 2+ ] i was measured by laser scanning confocal microscopy (LSCM, Leica, www.leicamicrosystems.com ).
Western blot analysis of p53, Bcl-2 and Bad expression
For Western blot analysis of p53, Bcl-2, and Bad, lysates of MCF-7 cells treated with 100 μg ml -1 MLL-2 for 4, 8, 12, 24, and 48 hours were mixed with an equal volume of 1 × SDS-PAGE sample buffer, heated at 100° C for 5 min, and loaded onto a 10% SDS polyacrylamide gel. The NC membrane was blocked with 5% non-fat milk in phosphate-buffered saline containing 0.05% Tween-20 prior to antibody treatments. A primary antibody (Santa Cruz Biotechnology, www.scbt.com ), which binds its specific protein, was then added, followed by a secondary antibody-enzyme conjugate that recognizes the primary antibody. The protein of interest was visualized with enhanced chemiluminescence reagents (ECL kit, Amersham Pharmacia Biotech, www.apbiotech.com ). The protein side of the membrane was exposed to X-ray film.
Measurement of caspase-3 activity
Cells were treated with 100 μg ml -1 MLL-2 for 12, 24, 36, or 48 hours. Cultured MCF-7 cells were lysed with a lysis buffer (50 mM Hepes, 100 mM NaCl, 0.1% CHAPS, 1 mM EDTA, 10 mM DTT, 10% glycerol, Sigma). The soluble fraction of the cell lysate was assayed for caspase-3 activity using the Ac-DEVD- p NA substrate (Sigma). After incubation for two hours at 37° C with 5% CO 2 , the intensity of the color reaction was measured using a microplate reader at 405 nm.
Statistical analysis
The experiments were repeated three times, and the mean values from treated and untreated groups were compared by a two-tailed unpaired t -test. P < 0.05 was considered statistically significant.
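A minimal sketch of this test in Python (scipy standing in for whatever package the authors used; the replicate values are hypothetical):

from scipy import stats

control = [0.82, 0.79, 0.85]   # hypothetical replicate means
treated = [0.41, 0.44, 0.39]
t, p = stats.ttest_ind(control, treated)   # two-tailed by default
print("t = %.2f, P = %.4f, significant: %s" % (t, p, p < 0.05))

| Results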
Purification results
After affinity chromatography on Sepharose-4B and further separation on a TSK gel Super SW3000 HPLC column, three peaks were observed, with molecular weights of 59 kDa (M), 46 kDa (MLL-1), and 38 kDa (MLL-2) ( Figure 1 ). The 59 kDa (M) peak did not exhibit haemagglutinating activity. However, the MLL-1 and MLL-2 peaks exhibited haemagglutinating activity, and the haemagglutinating activity of MLL-2 was higher than that of MLL-1.
Inhibitory effect of MLL-1, MLL-2 on MCF-7 cells growth
MLL-2 inhibited the viability of MCF-7 cells in a time- and dose-dependent manner ( Figure 2 ). HLF cells were used to examine the cytotoxic effects of MLL-1 and MLL-2 on normal human lung fibroblasts, and the viability of these cells was not significantly affected (<15% inhibition). Continuous exposure to different doses of MLL-1 or MLL-2 for 48 hours resulted in cessation of cell growth followed by significant cell death. For MCF-7 cells at 48 hours, the IC50 of MLL-1 was 200 μg ml -1 and the IC50 of MLL-2 was 100 μg ml -1 . MLL-2 had the stronger inhibitory effect on MCF-7 cell growth, so MLL-2 was used for the following experiments.
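One simple way to estimate an IC50 like those above is to interpolate percent inhibition on a log-dose scale; the dose-response values in this sketch are hypothetical, chosen so the estimate lands near the reported 100 μg ml -1 for MLL-2 at 48 h.

import numpy as np

doses = np.array([6.5, 12.5, 25, 50, 100, 200])   # ug/ml
inhibition = np.array([8, 15, 27, 41, 50, 68])    # percent (hypothetical)

log_ic50 = np.interp(50, inhibition, np.log10(doses))
print("IC50 ~ %.0f ug/ml" % 10 ** log_ic50)       # ~100 ug/ml here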
Morphological changes
AO staining was used to study the apoptotic morphology of MCF-7 cells treated with 100 μg ml -1 MLL-2. In controls, the cytoplasm of MCF-7 cells was fluorescent red and the nucleus fluorescent yellow ( Figure 3A ). After 24 hours, the cells were still intact and budding ( Figure 3B ). After 48 hours, the nuclei gradually disintegrated ( Figure 3C ). After 72 h, the nuclei were completely disintegrated, and green/yellow fluorescent AO staining indicated that the nuclei had fully condensed into small apoptotic bodies ( Figure 3D ).
Result of TUNEL staining in MCF-7 cells
To determine the extent of apoptosis of MCF-7 cells in the different groups, TUNEL staining was used. The results are shown in Figure 4 . The 24 hour group showed apoptosis similar to the control group; almost all cells were pink ( Figure 4A and Figure 4B ). In the 48 hour group, MLL-2 increased apoptosis of MCF-7 cells, as evidenced by more TUNEL-positive cells (blue-purple) ( Figure 4C ). After 72 h, the majority of cells were blue-purple (apoptotic).
Effect of MLL-2 on MCF-7 cells cycle and ratio of apoptotic cells
Flow cytometry was used to examine whether MLL-2 could interfere with the cell cycle of MCF-7 cells stained with propidium iodide. The results of the cell cycle distribution are presented in Table 1 . After MCF-7 cells were treated with 100 μg ml -1 of MLL-2 for various times, an arrest of the cell cycle at the G 2 /M phase (from 15.00±0.20% to 30.40±0.25%, P<0.01) and significantly increased apoptotic rates (from 5.38±0.04% to 17.70±2.24%, P<0.01) were observed.
Effect of MLL-2 on the expression of intracellular free calcium ([Ca 2+ ] i )
The production of [Ca 2+ ] i increased rapidly in MCF-7 cells after stimulation with 100 μg ml -1 MLL-2 for 4, 8, and 12 hours. The fluorescent intensity of Fluo-3/AM in the MCF-7 cells was 67.67 ( Figure 5B ), 128 ( Figure 5C ), and 223.38 ( Figure 5D ), while that of the control group was 28.33 ( Figure 5A ). Laser scanning confocal microscopy showed that, compared to the control group, the average intensity of the [Ca 2+ ] i fluorescent signal in MCF-7 cells treated with MLL-2 increased significantly (P <0.01). The fluorescent signal intensity was time-dependent ( Figure 6 ).
Effect of MLL-2 on the expression of p53, Bcl-2 and Bad
p53 has been found to be involved in apoptosis induced by a broad range of agents. Western blot analysis was used to determine whether MLL-2 has any effect on the expression of this pro-apoptotic protein in MCF-7 cells, using an antibody directed against wild-type p53.
The results in Figure 7 show that after treatment with 100 μg ml -1 MLL-2, p53 started increasing as early as 4 hours, reached a maximum level by 12 hours, and persisted up to 24 hours. By 48 hours of MLL-2 treatment, the level of p53 had decreased to control levels. The level of Bcl-2 was moderately low in MCF-7 cells and remained almost unaltered after MLL-2 treatment. On the other hand, the level of Bad increased significantly (P<0.01) after 8 hours and reached a peak at 24 hours after MLL-2 treatment.
Caspase-3 assay result
A specific caspase-3 substrate (Ac-DEVD- p NA) was used to estimate the activity of caspase-3 in lysates from MCF-7 cells. Relative caspase-3 activity was significantly increased in the lysates of MCF-7 cells treated with MLL-2 for 48 hours compared with controls (407.4 ± 3.0 vs. 1749.2 ± 6.0, P<0.01, Figure 8 ). Caspase-3 levels increased with treatment time.
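From the relative activities just reported, the fold increase at 48 h is a one-line computation:

control, treated = 407.4, 1749.2
print("fold increase: %.1fx" % (treated / control))  # ~4.3x over control

| Discussion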
Although some studies have tested lectins for potential anti-tumor effects ( Franz 1986 ; Kuttan et al. 1990 ; Kiss et al. 1997 ; Pryme et al. 1996 ; Ryder et al. 1998 ), no specific results have been reported on the effects of lectins purified from M. domestica larvae. In recent years, studies of the antitumor activities of M. domestica have been of particular interest. Studies by Hou's group provided the first evidence that crude extract from M. domestica exhibits antitumor activity in vitro ( Hou et al. 2007 ).
In this paper, M, MLL-1, and MLL-2 from Musca domestica larvae were purified using affinity chromatography and HPLC. M, MLL-1, and MLL-2 were examined using haemagglutinating and growth inhibition assays against human breast cancer MCF-7 cells, which showed that MLL-2 has significant haemagglutinating activity and an inhibitory effect on MCF-7 cell growth. In this study, MLL-2 inhibited MCF-7 cell proliferation in a dose- and time-dependent manner, yet appeared to be far less toxic to normal cells. Thus, MLL-2 is a promising candidate as an antitumor agent.
Apoptosis is an active, physiologic form of cell death that is mediated by the internal machinery of certain cells. It is a tightly regulated form of cell death, also called programmed cell death, that is morphologically and biochemically distinct ( Reed 2001 ). Our study demonstrated that MLL-2 induced morphological changes and DNA fragmentation, increased apoptotic rates, and arrested the cell cycle of MCF-7 cells, indicating that MLL-2 induced apoptosis in these cells. Morphologically, MCF-7 cells treated with MLL-2 were characterized by chromatin condensation and cell shrinkage shortly after treatment. The nucleus and cytoplasm then fragment, and the nucleus collapses into small intact fragments (apoptotic bodies) that can be engulfed by phagocytes. Many antitumor agents and DNA-damaging agents arrest the cell cycle and induce apoptotic cell death. The cell cycle checkpoints may ensure that cells have time for DNA repair, whereas apoptosis may eliminate the damaged cells ( Klucar et al. 1997 ; Dirsch et al. 2002 ; Gamet et al. 2000 ). Our data demonstrated that MLL-2 has antitumor activity, arresting MCF-7 cells in the G2/M phase and inducing cell apoptosis.
There are three different approaches for inducing apoptosis: the mitochondrial, death receptor, and endoplasmic reticulum pathways. However, the mechanism of MLL-2 action has not been studied in detail. The aim of the experiments was thus to identify the events which may ultimately initiate the apoptotic cascade leading to cancer cell death as a result of MLL-2 treatment.
The distribution of Ca 2+ within the cell is uneven. The content of [Ca 2+ ] i is higher in mitochondria and the endoplasmic reticulum than in the cytoplasm and nucleus ( Woo et al. 2002 ). Such a concentration gradient between organelles and cytoplasm is a precondition for [Ca 2+ ] i to act as an intracellular messenger. Slow accumulation of [Ca 2+ ] i in mitochondria causes [Ca 2+ ] i overloading that opens the permeability transition pore, leading to a prompt decrease of the membrane potential (Δ φ m), swelling of mitochondria, and finally apoptosis ( Anuradha et al. 2001 ). In our study, the [Ca 2+ ] i level of the experimental group was significantly higher than that of the control group. Since [Ca 2+ ] i is a messenger for the mitochondrial pathway, this indicates that the mitochondrial pathway plays an important role in the process of MCF-7 cell apoptosis.
One of the most interesting questions in the p53 field is how a cell decides to undergo growth arrest or apoptosis. It has been proposed that p53 may induce two sets of genes upon stress signals: one set that mainly functions in cell growth control, such as p21/Waf-1 and GADD45, and another that acts on apoptosis, such as Bcl-2 ( Agarwal et al. 1998 ). In our study, MLL-2 was observed to induce apoptosis in MCF-7 cells, in which expression of p53 can be induced; apoptosis was accompanied by an increase in p53 levels. It is well recognized that whether a cell becomes committed to apoptosis partly depends upon the balance between proteins that mediate cell death, e.g. Bad and Bax, and proteins that promote cell viability, e.g. Bcl-2 or Bcl-xL ( Miyashita et al. 1994 ; de Aguilar et al. 2000 ). Interestingly, p53 has been shown to be capable of both down-regulating the death suppressor Bcl-2 and up-regulating the death promoter Bad, thereby changing the Bcl-2/Bad ratio and predisposing the cell to programmed cell death. We selected MCF-7 cells, in which p53 is constitutively expressed. In these cells, MLL-2 induced apoptosis with an increase in the p53 level. Interestingly, the MLL-2-induced increase in p53 expression preceded that of Bad, leading us to hypothesize that p53 transactivates Bad expression. In these cells the Bcl-2 level remained almost unchanged, thereby shifting the Bcl-2/Bad ratio towards apoptosis. All these observations are evidence for the involvement of the p53 signaling pathway in MLL-2-induced tumor cell apoptosis and support the candidacy of Bad as the downstream effector in the mitochondrial pathway.
We examined the role of caspase-3 in MLL-2-induced MCF-7 cell apoptosis. Based on the caspase activity results, we conclude that caspase-3 played a key role in the post-mitochondrial apoptotic pathway. MLL-2 decreased Bcl-2 expression and activated the caspase-3 cascade. Further studies on whether the MLL-2-induced apoptotic signal goes through the caspase pathways in MCF-7 cells are warranted.
Over the years, cancer therapy has witnessed many exciting developments, but curing cancer remains as complex as the disease itself, since the mechanisms of tumor killing are still not fully understood. Identification of individual components of the signaling pathways leading to tumor cell death, as well as targeted alteration of those molecules, may be of immense help in selectively inducing apoptosis in cancer cells. In summary, our study demonstrates that MLL-2 induces MCF-7 cell apoptosis via the mitochondrial pathway. Knowledge acquired from this study will therefore lead us one step forward towards that goal. | Associate Editor: Susan Paskewitz was editor of this paper.
A new lectin (MLL-2, 38 kDa) was purified from larvae of the fly, Musca domestica L. (Diptera: Muscidae), using affinity chromatography and HPLC. The anti-tumor activity of MLL-2 was demonstrated by its inhibition of the proliferation of human breast cancer (MCF-7) cells in a time- and dose-dependent manner. The results of acridine orange staining indicated that MLL-2 caused apoptosis in MCF-7 cells, and DNA fragmentation in MCF-7 cells was detected by TUNEL. Flow cytometric analysis also demonstrated that MLL-2 caused dose-dependent apoptosis of MCF-7 cells through cell cycle arrest at the G2/M phase. MLL-2 induced a sustained increase in the concentration of intracellular free calcium. Western blots revealed that MLL-2-induced apoptosis in MCF-7 cells was associated with typical apoptosis proteins of the mitochondrial pathway. In addition, the caspase-3 activity in MCF-7 cells treated with MLL-2 for 48 hours was significantly increased compared to controls (407.4 ± 3.0 vs. 1749.2 ± 6.0, P <0.01). Since MLL-2 induced apoptosis in MCF-7 cells, the mitochondrial pathway may be the main pathway of its antitumor activity.
Keywords | Acknowledgements
We gratefully acknowledge the financial support by the National High Technology Research and Development Program of China (863 Program, No.2007AA10Z319), the Chinese National Natural Science Foundation (31000768) and Tianjin Natural Science Foundation (07JCZDJC02900).
Abbreviations
acridine orange staining;
human lung fibroblasts;
growth inhibition assay;
apoptosis assay | CC BY | no | 2022-01-12 16:13:44 | J Insect Sci. 2010 Sep 29; 10:164 | oa_package/28/68/PMC3016858.tar.gz |
||
PMC3016859 | 21062139 | Introduction
In tropical and sub-tropical regions of Mexico, where cattle are raised, the main ectoparasite of economic importance is Rhipicephalus ( Boophilus ) microplus (Canestrini) (Acari: Ixodidae) ( Murrell and Barker 2003 ), as it causes direct damage by blood feeding and transmitting babesiosis and anaplasmosis ( Bram et al. 2002 ). The control of this tick parasite is based on chemical products. However, R. microplus has developed resistance to almost all pesticides used including organophosphates, pyrethroids, and amidines, requiring higher doses or a mixture of several products for their effective control. These practices result in increased production costs and contamination of the environment ( Li et al. 2004 ; Miller et al. 2005 ).
An alternative is the use of biological control such as the use of predators, parasitoids, and entomopathogens, including fungi, bacteria, viruses, and nematodes. Within the bacterial group, the microorganism most widely used worldwide with the highest success in the control of several insect pests is the bacterium, Bacillus thuringiensis Berliner (Bacillales: Bacillaceae). B. thuringiensis has been shown to be useful for the control of different insect pests that affect plant crops, forest trees, or that are vectors of human diseases such as dengue and malaria ( Crickmore 2005 ; de Maagd et al. 2003 ; Schnepf et al. 1998 ). B. thuringiensis represents an important portion of the biopesticides market ( Porter et al. 1993 ), with annual sales around 140 million US dollars and with more than 40% of the sales in the United States ( National Academy of Sciences 2003 ). The use of B. thuringiensis is increasing rapidly because it is highly specific, significantly lowering the damage to other organisms compared to use of chemical insecticides, and also because it is biodegradable and is therefore accepted as an environmentally friendly alternative. In addition, B. thuringiensis has no adverse effects on humans. B. thuringiensis products can be combined with other pest control techniques and it is an essential component in Integrated Pest Management (IPM).
The use of B. thuringiensis for cattle tick control has been previously reported ( Ostfeld et al. 2006 ). Hassanain et al. ( 1997 ) evaluated the activity of three subspecies of B. thuringiensis ( kurstaki, israelensis, and thuringiensis ), spraying spore/crystal mixtures on the soft tick Argas persicus and the hard tick Hyalomma dromedarii. In another report, Samish and Rehacek ( 1999 ) mentioned 100% mortality using mixtures of B. thuringiensis spores and blood to feed Ornithodoros erraticus through an artificial membrane. Zhioua et al. ( 1999 ) evaluated a B. thuringiensis kurstaki strain against engorged larvae of Ixodes scapularis, achieving 96% mortality with a dose of 10 8 spores/ml.
In this work, the pathogenicity of native strains of B. thuringiensis against a R. microplus tick population resistant to chemical pesticides was evaluated. | Materials and Methods
The B. thuringiensis strains used in this study belong to the collection of the Vegetal Parasitology Laboratory at the Center of Biological Research at the University of Morelos, Mexico. The GP123, GP138, GP139, and GP140 B. thuringiensis strains were grown at the University of Morelos's facilities on solid Luria-Bertani (LB) medium until complete sporulation (72 h). Crystal inclusions were observed through an optical phase-contrast microscope. Spores and crystals produced by the B. thuringiensis strains were recovered using a bacteriological loop and suspended in 20 ml of sterile water. Finally, the protease inhibitor PMSF (0.1 mM) was added to avoid protein degradation. Total protein was quantified by the Bradford technique ( Bradford 1976 ).
A R. microplus strain resistant to organophosphates, pyrethroids, and amidines was maintained on Holstein steers (250 kg weight) at the facilities of INIFAP-CENID-Veterinary Parasitology, at Jiutepec, Morelos, Mexico, where the bioassays were performed. Two steers were artificially infested with 1 g of R. microplus larvae. Twenty-one days after infestation, fully engorged female ticks began to drop. Females weighing 0.2 to 0.4 g were collected for use in the bioassay. The adult immersion test developed by Drummond ( 1969 ) was used to determine the effect of the B. thuringiensis bacterium against R. microplus ticks. Engorged adult female ticks were immersed for 60 seconds in a 1.25 mg/ml aqueous suspension of B. thuringiensis. Ticks were then placed individually in 24-well polystyrene plates (Cell Wells, Corning Glass Works, http://www.corning.com/lifesciences ). Inhibition of individual oviposition and egg hatch were recorded during the bioassay. Control ticks were treated with distilled water. Incubation was performed in a humidity chamber (90–95% relative humidity) at 28° C. For each B. thuringiensis strain tested, 48 female R. microplus ticks were used. Ticks were analyzed under a stereoscope to confirm female tick mortality at 5, 10, 15, and 20 days after inoculation. To measure the effects of bacterial infection on tick fertility and fecundity, an efficiency index was quantified (egg weight/engorged female tick weight) ( Drummond 1969 ). At 10 days after inoculation, egg masses were separated from the females and weighed. The oviposition capacity of control ticks and of those surviving the bacterial infection was determined by the efficiency index.
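The efficiency index defined above, and the percent reduction in oviposition it implies relative to controls, reduce to simple arithmetic; the weights in this sketch are hypothetical.

def efficiency_index(egg_weight_g, female_weight_g):
    # Drummond's efficiency index: egg mass weight / engorged female weight.
    return egg_weight_g / female_weight_g

ei_control = efficiency_index(0.140, 0.300)   # hypothetical control tick
ei_treated = efficiency_index(0.050, 0.300)   # hypothetical treated tick
reduction = (ei_control - ei_treated) / ei_control * 100
print("oviposition reduction: %.1f%%" % reduction)  # ~64.3% here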
Mortality and egg hatch data were arcsine-transformed to normalize them before analysis of variance (α = 0.05) and mean separation by Tukey's test (α = 0.05), using the SAS 2001 statistical package. Data obtained from the egg weight assessments were not transformed.
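A minimal sketch of the same analysis in Python (scipy/statsmodels standing in for SAS), with the arcsine-square-root transformation applied to hypothetical mortality proportions:

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

mortality = {
    "control": [0.02, 0.04, 0.03],   # hypothetical replicate proportions
    "GP123":   [0.60, 0.65, 0.62],
    "GP138":   [0.80, 0.83, 0.81],
}
transformed = {k: np.arcsin(np.sqrt(v)) for k, v in mortality.items()}

print(f_oneway(*transformed.values()))           # one-way ANOVA
values = np.concatenate(list(transformed.values()))
groups = np.repeat(list(transformed.keys()), 3)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

| Results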
A R. microplus strain resistant to organophosphates, pyrethroids, and amidines was used for the assays. As an alternative for the control of this pest, the effectiveness of B. thuringiensis strains isolated from the bodies of different insects and arthropods collected in different regions of Mexico was analyzed. Sixty different native B. thuringiensis strains were tested, characterized only by the presence of a crystal inclusion during bacterial sporulation under phase-contrast optical microscope observation. Among these were four native strains that caused mortality in the adult immersion test. The mortality induced by strains GP123, GP138, GP139, and GP140 in R. microplus adult females was first determined with the immersion assay. All of these B. thuringiensis strains showed high mortality values statistically different (P< 0.0001) from the controls at all tested times after inoculation, and none of the strains was significantly different from the others. The data suggest that strain GP138 had an earlier effect than the other strains ( Table 1 ). The causal agent (the B. thuringiensis strains) was recovered from all dead ticks, confirming that B. thuringiensis bacteria were responsible for killing the resistant R. microplus strain.
The effect of the B. thuringiensis strains on R. microplus oviposition and egg hatch was also analyzed during the immersion trials. Strains GP138, GP139, and GP140 showed similar inhibitory effects without statistically significant differences among them (Tukey's test, α = 0.05) ( Table 2 ), but they were significantly different from the controls. | Discussion
B. thuringiensis is a Gram-positive bacterium able to produce proteins such as Cry, Cyt, Vip, and S-layer proteins, which have insecticidal properties with different modes of action. These proteins are toxic to insect species belonging to the orders Lepidoptera, Diptera, Coleoptera, and Hymenoptera, as well as to acari and nematodes ( Bravo et al. 2007 ; Schnepf et al. 1998 ; Peña et al. 2006 ). However, there is a great diversity of arthropod species, such as ticks, for which no specific insecticidal B. thuringiensis proteins have been found.
Previous reports on the toxicity of different B. thuringiensis strains against ticks are limited. Hassanain et al. ( 1997 ) reported that B. thuringiensis kurstaki produced 100% mortality in A. persicus engorged females after five days at a dose of 1 mg/ml; B. thuringiensis israelensis caused 100% mortality at a dose of 2.5 mg/ml, and B. thuringiensis thuringiensis at a 5 mg/ml dose induced 93.3% mortality. With H. dromedarii, none of the B. thuringiensis strains produced 100% mortality, even at doses as high as 10 mg/ml. In another report, it was shown that B. thuringiensis kurstaki spores (10 6 /ml) were toxic to engorged I. scapularis larvae, with an LC50 of 10 7 spores/ml ( Zhioua et al. 1999 ). In this work, a single dose (1.25 mg/ml) was used for immersion assays to characterize the B. thuringiensis strain collection (60 strains). The four selected B. thuringiensis strains GP123, GP138, GP139, and GP140 produced 62.5, 81.25, 64.58, and 77.08% mortality, respectively, by the fifth day. These data indicated that the GP138 strain was the most pathogenic. Analysis of the effect of B. thuringiensis strains on R. microplus with the immersion assay led us to infer that the B. thuringiensis strains can affect R. microplus through routes other than ingestion, probably by means of the spiracles or genital pore, as was previously proposed ( Zhioua et al. 1999 ).
It can be concluded that some B. thuringiensis strains had a toxic effect on R. microplus in the adult immersion assay. The acaricide-resistant R. microplus strain could be controlled with pathogenic B. thuringiensis strains; however, more studies are necessary to optimize the application of B. thuringiensis. The results indicate that immersion treatments are effective against R. microplus. | Associate Editor: Fernando Vega was editor of this paper
The pathogenicity of four native strains of Bacillus thuringiensis against Rhipicephalus ( Boophilus ) microplus (Canestrini) (Acari: Ixodidae) was evaluated. A R. microplus strain resistant to organophosphates, pyrethroids, and amidines was used in this study. Adult R. microplus females were bioassayed against 60 B. thuringiensis strains using the immersion test of Drummond. Four strains, GP123, GP138, GP139, and GP140, were found to be toxic. For the immersion test, the total protein concentration for each bacterial strain was 1.25 mg/ml. Mortality, oviposition, and egg hatch were recorded. All of the bacterial strains had significant effects compared to the controls, but no significant differences were seen between the 4 strains. It is evident that these B. thuringiensis strains have a considerable detrimental effect on the pesticide-resistant R. microplus strain.
Key words | CC BY | no | 2022-01-12 16:13:45 | J Insect Sci. 2010 Oct 22; 10:186 | oa_package/01/f4/PMC3016859.tar.gz |
|||
PMC3016860 | 20874389 | Introduction
Acetylcholinesterase (AChE) catalyses the hydrolysis of the neurotransmitter, acetylcholine, thereby stopping transmission of nerve impulses at synapses of cholinergic neurons in the central and peripheral nervous systems in both vertebrates and invertebrates ( Taylor 1991 ). Consequently, inhibition of AChE leads to paralysis and death. In addition, AChEs are expressed at other sites in animals, where they may act as regulators involved in cell growth and adhesion, probably unrelated to their catalytic properties ( Soreq and Seidman 2001 ). In insects, AChE is a target of organophosphorus and carbamate compounds, which remain widely used pesticides around the world ( Harel et al. 2000 ).
Since the first cloning of an insect AChE gene ( Ace ) from Drosophila melanogaster ( Hall and Spierer 1986 ), 602 AChE sequences from Arthropoda (551 in Hexapoda and 51 in Ixodidae) have been registered with databases ( http://www.uniprot.org/uniprot/?by=taxonomy&query=prosite+PS00941#35237 ). Biochemical characterizations of AChE have been carried out in more than 20 insect species ( Gao et al. 1998 ). Gene structures of AChEs from economically and medically important insect species have been characterized in detail, including Anopheles stephensi ( Hall and Malcolm 1991 ), Aedes aegypti ( Anthony et al. 1995 ; Mori et al. 2007 ), Leptinotarsa decemlineata ( Zhu and Clark 1995 ), Musca domestica ( Huang et al. 1997 ), Nephotettix cincticeps ( Tomita et al. 2000 ), Schizaphis graminum ( Gao et al. 2002 ), Nippostrongylus brasiliensis ( Hussein et al. 2002 ), Aphis gossypii ( Li and Han 2004 ; Toda et al. 2008 ), Culex tritaeniorhynchus ( Nabeshima et al. 2004 ), Blattella germanica ( Mizuno et al. 2007 ), and Alphitobius diaperinus ( Kozaki et al. 2008 ). These studies helped reveal the molecular structure of insect AChEs and the mechanisms of insecticide resistance in these important insect pests.
The brown planthopper, Nilaparvata lugens Stål (Hemiptera: Delphacidae), is one of the most important agricultural pests in rice-growing areas. It is a rice specialist feeder that often causes serious losses of rice yield by sucking sap from the phloem and by transmitting the stunt virus disease (Rubia-Sanchez et al. 1999). Insecticides are commonly used to control N. lugens in the field, but this often leads to insecticide resistance and resurgence of the pest (Sujatha and Regupathy 2003). An altered AChE has been verified in N. lugens as a common mechanism of resistance to organophosphates and carbamates (Yoo et al. 2002). However, the structure of the AChE gene from N. lugens remains to be elucidated. Cloning of the AChE cDNA is expected to lay a foundation for understanding the molecular properties of the AChE from N. lugens.
In this paper, data are presented on cDNA cloning and characterization, as well as the comparison of an AChE from methamidophos-sensitive and -insensitive N. lugens strains. The following aspects are reported: (1) the AChE cDNA nucleotide sequence and its deduced amino acid sequence; (2) characteristics of the cDNA-deduced AChE; (3) phylogenetic analysis of this AChE relative to those from other animals; (4) the AChE transcript size and expression level, as well as the gene copy number in the genome; and (5) detection of resistance-associated point mutations of the methamidophos-insensitive acetylcholinesterase in the resistant strain. | Materials and Methods
Experimental insects
The clone of the susceptible N. lugens was mass reared on rice plants (variety Taichung Native 1) at 25 ± 2° C and 80% relative humidity under a 16:8 L:D photoperiod. Adult insects were collected for genomic DNA isolation. Fourth instar larvae and adults were used for RNA isolation.
Resistant N. lugens was collected originally from Xianning district, Hubei Province, China, where methamidophos was widely used to control this pest. After screening with methamidophos (50% emulsion, technical grade, Hubei Sanongda Pesticide Co. Ltd.), a single colony was selected to establish field-resistant clones. The median lethal dose (LD50) of methamidophos for the resistant strain was 0.150 μg (volume converted to mass) per fourth instar larva, while that for the susceptible strain was 0.006 μg per larva. Thus the resistant strain showed a moderate level of resistance to methamidophos (resistance ratio: 0.150/0.006 = 25). The resistant strain for RNA isolation was reared as described above.
Cloning of AChE cDNA fragments
Total RNA was isolated from fourth instar larvae with TRIzol reagent (Invitrogen, www.invitrogen.com ). Poly(A)+ RNA was separated from total RNA (1 mg) using oligo(dT) coupled to paramagnetic beads (Promega, www.promega.com ). The first- and second-strand cDNAs were synthesized according to standard protocols (Sambrook et al. 1989). The double-stranded cDNA was purified and dissolved in Tris-EDTA buffer (10 mM Tris-HCl, 1.0 mM EDTA, pH 8.0).
A 278-bp homologous AChE cDNA fragment was generated using semi-nested PCR and degenerate primers (ACE-f1, ACE-fI, and ACE-f2) ( Table 1 ), as described by Zhu et al. (2000). The 278-bp cDNA fragment was used as a template to design antisense and sense gene-specific primers (GSP1 and GSP2) for the 5′ and 3′ RACE reactions. The sequences of the primers are listed in Table 1 .
ACE-f1, ACE-fI, and ACE-f2 indicate the forward primer, forward inner primer, and reverse primer, respectively. GSP1 and GSP2 indicate the gene-specific primers for 5′ and 3′ RACE, respectively. GSP-S and GSP-AS indicate the gene-specific primers for amplification of the inner cDNA fragment containing the complete coding region of the acetylcholinesterase.
5′ and 3′ rapid amplification of cDNA ends (RACE)
The 5′ and 3′ RACE reactions were carried out according to the instruction manual of the SMART RACE cDNA Amplification Kit (BD Bioscience Clontech Company, www.clontech.com ). cDNAs were synthesized using the primers 5′-CDS, 3′-CDS, and SMART II A Oligo provided with the kit. The 5′ end of the cDNA was amplified using GSP1 and UMP, and the 3′ end of the cDNA was amplified using GSP2 and UMP. A touchdown amplification profile was used as follows: 94° C for 30 s and 72° C for 3 min for 5 cycles; then 94° C for 30 s, 70° C for 30 s, and 72° C for 3 min for 5 cycles; then 94° C for 30 s, 68° C for 30 s, and 72° C for 3 min for the remaining 28 cycles. Amplified fragments were routinely cloned into pGEM-T vectors (Promega) and sequenced from both ends using the M13 and M13(-) universal primers. More than four independent clones of each of the 5′ and 3′ cDNA ends were sequenced to eliminate possible PCR mutations.
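Purely as an illustration, the touchdown program above can be transcribed as structured data so the stage boundaries are explicit (the representation is my own, not from the kit manual):

```python
# Touchdown PCR profile transcribed from the text.
# Each stage: (cycles, [(temperature_C, seconds), ...]); the first stage is two-step.
TOUCHDOWN_PROFILE = [
    (5,  [(94, 30), (72, 180)]),
    (5,  [(94, 30), (70, 30), (72, 180)]),
    (28, [(94, 30), (68, 30), (72, 180)]),
]

total_cycles = sum(cycles for cycles, _ in TOUCHDOWN_PROFILE)
print(f"total cycles: {total_cycles}")  # 38
```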
Sequencing and computer-assisted analysis of AChE cDNA
Molecular mass and isoelectric point were predicted with the Compute pI / Mw tool ( http://us.expasy.org/tools/pi_tool.html ). The signal peptide was predicted with the SignalP 3.0 Server ( http://www.cbs.dtu.dk/services/SignalP ). A molecular phylogenetic tree was constructed with PAUP 4.0 software using the bootstrap N-J tree option (1000 bootstrap trials). The tree was viewed using TreeView (v. 1.6.6). Potential N-linked glycosylation sites were predicted using the NetNglyc program (Nielsen et al. 1997).
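A scriptable alternative to the web tools is Biopython's ProtParam module, which computes the same two quantities; this is a minimal sketch with a hypothetical placeholder sequence (values may differ slightly from the ExPASy tool):

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Hypothetical stand-in; substitute the 616-residue mature AChE sequence.
mature_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEK"

pa = ProteinAnalysis(mature_seq)
print(f"predicted Mw: {pa.molecular_weight():.2f} Da")
print(f"predicted pI: {pa.isoelectric_point():.2f}")
```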
Southern and Northern blot analysis
A DNA fragment (1056 bp, positions 36–1092 in the full cDNA) was generated by digestion of the 5′ end cDNA with Xho I and Nhe I (Promega). The DNA fragment was used as a probe for the Southern and Northern blot analyses. The probe was labeled by random priming with [α-32P]dCTP (Perkin Elmer Life Sciences, www.perkinelmer.com ).
Poly(A)+ RNA was separated from total RNA of adult N. lugens and fourth instar larvae and analyzed by electrophoresis on 1.5% denaturing formaldehyde agarose gels (3 μg per lane). An outer lane containing RNA markers was excised from the gel prior to blotting, stained with ethidium bromide, and used for size estimation. The RNA gel was blotted onto a Hybond-N+ nylon membrane (Amersham, USA), which was then denatured by alkali and baked at 80° C for 2 h. The filter was prehybridized for 6 h at 65° C, hybridized overnight at 65° C, washed in 1×SSC, 0.2% (weight/volume) SDS at 65° C for 15 min and then in 0.5×SSC, 0.1% (weight/volume) SDS at 65° C for another 15 min, and exposed to X-ray film (FUJI Film, www.fujifilm.com ) for one week at -80° C.
Genomic DNA was isolated from adult N. lugens according to Sambrook et al. (1989). Aliquots containing 15 μg genomic DNA were digested with Eco RI, Eco RV, Hind III, and Dra I (Promega), and the resulting fragments were separated by electrophoresis in a 1.5% agarose gel and transferred to an NC membrane (Amersham), which was then baked at 80° C for 2 h. DNA markers were handled as described for the Northern blot procedure. The filter was prehybridized for 6 h at 65° C and then hybridized overnight at 65° C with the labeled probe. The membrane was washed, and autoradiography was performed as above.
Detection of mutations in AChE possibly associated with methamidophos resistance
To compare the nucleotide sequences of the AChE cDNA between the resistant and susceptible strains, and to find mutations potentially involved in methamidophos resistance, total RNA was isolated separately from 5 larvae of each strain using an RNA isolation kit (Takara, www.takara-bio.com ). The cDNA reverse transcribed from total RNA was used as a template, and an inner AChE2 cDNA fragment was amplified by long-distance PCR with the gene-specific primers GSP-S and GSP-AS ( Table 1 ). The following amplification profile was used: 94° C for 2 min; 35 cycles of 94° C for 30 s, 55° C for 30 s, and 72° C for 3 min; then 72° C for 10 min. All amplification reactions were performed in a PE-9700 PCR machine (Perkin Elmer). Amplified fragments were cloned and sequenced. | Results
Cloning and characterization of AChE cDNA from N. lugens
From the PCR on N. lugens cDNA using degenerate primers, a 278-bp cDNA fragment with deduced amino acid sequences that matched AChEs in GenBank was generated. A 2467-bp full-length cDNA sequence was obtained by RACE reactions based on the 278-bp cDNA fragment. The full sequence consisted of a 5′ untranslated region (UTR) of 403 bp, an open reading frame of 1938 bp, and a 3′ UTR of 123 bp including a poly(A) tail of 29 bp. The 3′ UTR possessed a typical polyadenylation signal (AATAAA) 14 bp upstream of the poly(A) tail ( Figure 1 ).
A putative preproenzyme of 646 amino acid residues was encoded by the open reading frame of the cloned AChE cDNA. The predicted preproenzyme comprised an N-terminal signal peptide of 30 amino acid residues, predicted by the SignalP 3.0 Server, and a mature enzyme of 616 amino acids ( Figure 1 ). The predicted molecular mass and isoelectric point of the mature enzyme were 69,418.57 Da and 5.21, respectively, close to those of the AChEs from N. cincticeps (Mw / pI: 73,764.15 / 5.30; Ac: AF145235-1) and A. gossypii (Mw / pI: 70,541.51 / 4.98; Ac: AF502081-1). There were four potential N-glycosylation sites (Asn-X-Ser or Asn-X-Thr; von Heijne 1987) in the amino acid sequence of the deduced mature AChE, located at positions 113–115 (Asn-Leu-Ser), 407–409 (Asn-Met-Thr), 498–500 (Asn-Met-Ser), and 605–607 (Asn-Met-Thr) ( Figure 1 ).
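Scanning a sequence for such sequons is easily scripted; the sketch below uses a regular expression for Asn-X-Ser/Thr, adding the common refinement that X is not Pro (an assumption beyond the motif as stated above), on a made-up fragment:

```python
import re

# Hypothetical fragment; substitute the real mature AChE sequence.
mature_seq = "GNLSAAKTWVNMTQPLRDNMSERKNMTG"

# Zero-width lookahead so overlapping sequons are also reported.
for m in re.finditer(r"(?=(N[^P][ST]))", mature_seq):
    print(f"sequon {m.group(1)} at Asn position {m.start() + 1}")
```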
Characterization of the cDNA-deduced AChE
In its primary structure, the N. lugens AChE exhibits all of the major conserved features revealed by the AChE of Torpedo californica (Ac: P04058) (Sussman et al. 1991) ( Figure 2 ). These features are as follows (AChE amino acids are numbered from the start of the mature protein, with the corresponding T. californica residues in parentheses for reference): (1) the conserved active-site triad: S242 (200), E371 (327), and H485 (440); (2) a choline binding site: W108 (84); (3) three pairs of cysteines putatively forming intrachain disulfide bonds (C91 (67)-C118 (94), C296 (254)-C311 (265), and C447 (402)-C563 (521)); (4) the sequence FGESAG, flanking S242 (200), conserved in all cholinesterases; (5) a typical invertebrate acyl pocket (Sutherland et al. 1997) that contains only one conserved aromatic site, F334 (290); and (6) 10 of the 14 aromatic amino acid residues lining the catalytic gorge of the electric ray AChE are also present in the N. lugens AChE (W108, W151, Y167, W275, W326, F334, Y374, F375, Y378, and W477), as in N. cincticeps (W144, W187, Y203, W311, W362, F370, Y410, F411, Y414, and W513) and A. gossypii (W123, W163, Y179, W287, W338, F346, Y386, F387, Y390, and W491); the remaining four are not aromatic in the N. lugens AChE: glutamic acid 94 (tyrosine 70), methionine 158 (tyrosine 121), serine 336 (phenylalanine 292), and aspartic acid 489 (tyrosine 442).
Homology analysis of amino acid sequences revealed that the cDNA-deduced N. lugens AChE has 83% amino acid identity with that of N. cincticeps (accession number: AF145235-1), 78% with L. decemlineata (Q27677), 74% with B. germanica (ABB89947), 73% with Helicoverpa armigera (AAN37403), 72% with Plutella xylostella (AAL33820), 70% with Bombyx mori (NP_001108113), and 68% with Cydia pomonella (ABB76665). The relationship of the predicted AChE with 44 AChEs from various species was analyzed. The phylogenetic tree indicated that the 45 AChEs assort into three lineages. N. lugens AChE and 21 insect AChEs formed one lineage, within which N. lugens AChE was most closely related to the AChE of N. cincticeps, forming an independent cluster, suggesting that they may share the same ancestor. In this lineage, a gene loss occurred in the higher Diptera, which have lost their AChE-1 version (Russell et al. 2004). Four additional nematode AChEs (-3 and -4) from Caenorhabditis elegans and Caenorhabditis briggsae form another independent cluster with a bootstrap value of 966. The vertebrate and invertebrate AChEs or AChE-1s belong to the same lineage, which also contains one copy each of the human and chicken butyrylcholinesterases (BuChEs). The structure of the phylogenetic tree suggests that major diversifications occurred among vertebrates and invertebrates during the evolution of this enzyme.
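Percent identity figures such as these are computed over an alignment; assuming the two sequences have already been aligned elsewhere (e.g., with ClustalW), a minimal sketch of the calculation is:

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Identity over aligned columns, ignoring positions where either
    sequence has a gap ('-'). The alignment itself is assumed done."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must have equal length")
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b)
             if a != "-" and b != "-"]
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

print(percent_identity("MKT-LLVAI", "MKTQLL-AL"))  # toy example: ~85.7
```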
Southern and Northern blot analyses
The cloned AChE cDNA contains one internal Hind III restriction site in the coding region (positions 2040–2045), and the blot showed strong hybridization signals at approximately 5.0- and 6.0-kb fragments, suggesting there are no other Hind III sites in the internal sequence of this cDNA. The cDNA does not contain restriction sites for Eco RI, Eco RV, or Dra I. In Eco RV- and Dra I-digested DNA, strong hybridization was seen at 7.0- and 8.5-kb fragments, respectively. When the probe was hybridized to the Eco RI-digested fragments, the blot showed strong hybridization at approximately 16- and 3.8-kb fragments ( Figure 4 ). The additional hybridization fragment in the Eco RI digest can be explained by the presence of Eco RI sites in the introns of the AChE gene; there are nine introns in the AChE gene of D. melanogaster (Fournier et al. 1989), and because the probe corresponds to a region containing most of the exons, multiple bands would be expected if introns carrying Eco RI sites intervene. Southern blot analysis therefore suggested that there is likely a single copy of this AChE gene in N. lugens.
This gene exhibited a very low level of mRNA expression, because no hybridization signal could be detected in Northern blot analysis using total RNA. When poly(A)+ RNA was used for Northern blot analysis, a distinct 2.6-kb transcript was revealed in adult insects and larvae ( Figure 5 ). This transcript size matches well with the 2.5-kb N. lugens AChE cDNA.
Detection of mutations in the acetylcholinesterase from the resistant strain
A 2065-bp inner AChE2 cDNA fragment (positions 320–2384 in the full cDNA sequence) containing the complete coding sequence was amplified from each of the 10 individual larvae. The 10 fragments were genotyped by direct sequencing. The cDNA sequences were homozygous, and no genetic polymorphisms were observed among individuals of the same strain. However, there were three base substitutions in the resistant-strain cDNA (accession FM866396) compared with the susceptible strain. Two of the three substitutions occurred in the same codon and resulted in a nonsynonymous amino acid replacement (GGG to AGC, base positions 856–858, resulting in Gly185Ser (Gly118 in T. californica )). The third substitution, at position 2134, is synonymous, changing the codon TTC to TTT, both of which code for Phe; this change is therefore likely irrelevant to the resistance. | Discussion
In this study, a full cDNA encoding AChE was isolated from N. lugens and characterized in detail. The N. lugens AChE has the highest amino acid identity (83%) with that from N. cincticeps. Phylogenetic analysis also revealed that the N. lugens AChE is most closely related to that of N. cincticeps and belongs to the AChE2 subgroup ( Figure 3 ). The deduced N. lugens AChE has the major conserved features found in the AChE of T. californica (Sussman et al. 1991). Fourteen aromatic residues in T. californica form the gorge that directs acetylcholine to the active-site serine, and most of these residues are present in the AChEs of other organisms studied. In this work, four of the 14 residues were not conserved as aromatic in the deduced AChE from N. lugens ( Figure 2 ), and the other thirteen insect AChEs examined also lacked these four aromatic residues, suggesting that conservation of these four residues is not required for enzyme activity in some insects. There were four predicted N-glycosylation sites in the mature AChE of N. lugens. These sites are likely to be necessary for the function of this enzyme, perhaps for glycophospholipid attachment, which is the main form of membrane attachment of AChEs in invertebrates (Gagney et al. 1987).
It is known that vertebrates and invertebrates have different forms of AChE genes (Grauso et al. 1998). Vertebrates have a variety of AChE forms encoded by a single gene. These forms differ in the number of subunits and the way they are linked to cell membranes; they contain the same catalytic domain and catalytic activity but are translated from different mRNAs generated by alternative splicing of the single gene (Li et al. 1991). Invertebrates also have different forms of AChE; some nematodes have AChEs encoded by more than one gene, and the different forms have different catalytic activities (Gao et al. 2002). Previous studies revealed that some insects have two different AChEs, which are either orthologous or paralogous to Drosophila Ace (Li and Han 2002; Nabeshima et al. 2003, 2004; Chen and Han 2006; Shang et al. 2007; Badiou et al. 2007). In fact, four gene duplication events and at least one gene deletion event occurred in the evolution of AChEs from nematodes to humans (Russell et al. 2004). The loss of the AChE-1 gene took place in insects, specifically in the higher Diptera. Because AChE-1 processes acetylcholine in the majority of insects and arthropods, the higher Diptera, lacking AChE-1, must rely on their single AChE enzyme (derived from the ancestral ace-2) to execute these functions (Harel et al. 2000; Weill et al. 2002). In this study, Southern and Northern blot analyses revealed that there is probably one copy of the AChE gene in the N. lugens genome and one AChE transcript in the transcriptome. However, these data are too limited to conclude that there is truly a single AChE gene in N. lugens. Whether there is a second gene encoding an AChE paralogous to Drosophila AChE is an interesting question currently under exploration.
Many agricultural and medical pests have developed resistance to insecticides through decreased sensitivity of AChE. Specific amino acid substitutions at several positions in AChE were shown to cause decreased sensitivity of AChEs to insecticides in some insects, pointing to the importance of AChE primary structure in insecticide resistance (Li and Han 2004; Nabeshima et al. 2003, 2004; Hsu et al. 2006, 2008; Alout et al. 2007; Kakani et al. 2008; Magaña et al. 2008). The absence of protein polymorphism attributable to insecticide insensitivity was reported in N. cincticeps (Tomita et al. 2000). Independent duplications of the AChE gene confer insecticide resistance in the mosquito Culex pipiens (Labbé et al. 2007). In this work, the altered AChE predicted from the cDNA cloned from resistant N. lugens contained an amino acid replacement, Gly185Ser. Gly185 (Gly118) is an important residue that forms the oxyanion hole with Ala273 (Ala201) and Gly186 (Gly119) in the active site of AChE. The oxyanion hole, formed by the peptidic NH groups of these three residues, forms hydrogen bonds with the carbonyl oxygen of the substrate or inhibitor and stabilizes the negative charge on the anionic moiety of the ligand (Zhang et al. 2002). A mutation at this site is therefore likely to change the affinity of AChE for its substrates and inhibitors. A previous study revealed that the corresponding replacement, Gly221Ser in A. gossypii numbering (Gly119), found in AP-AChE from OP-resistant Cx. pipiens and Anopheles gambiae mosquitoes, replaces another of the three residues forming the oxyanion hole (Weill et al. 2003). Subsequently, the third of the three residues, Ala302 (Ala201), was found to be replaced by Ser in A. gossypii AChE, resulting in reduced susceptibility of the H-16 strain to two organophosphorus insecticides, fenitrothion and malathion (Toda et al. 2004). The resistant N. lugens strain studied here exhibited a 25-fold decrease in methamidophos sensitivity. The replacement Gly185Ser in the altered AChE likely confers this insecticide insensitivity.
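As a quick mechanical check of the substitution classes reported in the Results (GGG to AGC nonsynonymous; TTC to TTT synonymous), the codons can be translated with Biopython's standard codon table; a minimal sketch:

```python
from Bio.Seq import Seq

def classify(codon_ref: str, codon_alt: str) -> str:
    aa_ref = str(Seq(codon_ref).translate())
    aa_alt = str(Seq(codon_alt).translate())
    kind = "synonymous" if aa_ref == aa_alt else "nonsynonymous"
    return f"{codon_ref} ({aa_ref}) -> {codon_alt} ({aa_alt}): {kind}"

print(classify("GGG", "AGC"))  # Gly -> Ser: nonsynonymous (Gly185Ser)
print(classify("TTC", "TTT"))  # Phe -> Phe: synonymous
```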
N. lugens was once controlled with insecticides, but this strategy no longer works effectively. Continuous use of insecticides has reduced the biological regulatory function of natural enemies, resulting in resurgence and insecticide resistance in this pest (Sujatha and Regupathy 2003). To minimize these problems, it is necessary to reduce insecticide use and to find more environmentally friendly control strategies. The AChE cDNA cloned in this study revealed the molecular properties of the AChE from N. lugens. The altered AChE with the amino acid substitution Gly185Ser might confer methamidophos insensitivity in the resistant strain. Further investigation is needed to elucidate the mechanism of this mutation in resistant N. lugens, which would be expected to help in the effective control and resistance management of this pest. | Associate Editor: Mariana Wolfner was editor of this paper
A full cDNA encoding an acetylcholinesterase (AChE, EC 3.1.1.7) was cloned and characterized from the brown planthopper, Nilaparvata lugens Stål (Hemiptera: Delphacidae). The complete cDNA (2467 bp) contains a 1938-bp open reading frame encoding 646 amino acid residues. The amino acid sequence of the AChE deduced from the cDNA consists of 30 residues for a putative signal peptide and 616 residues for the mature protein, with a predicted molecular weight of 69,418. The three residues (Ser242, Glu371, and His485) that putatively form the catalytic triad and the six Cys that form intra-subunit disulfide bonds are completely conserved, and 10 of the 14 aromatic residues lining the active-site gorge of the AChE are also conserved. Northern blot analysis of poly(A)+ RNA showed an approximately 2.6-kb transcript, and Southern blot analysis revealed that there is likely just a single copy of this gene in N. lugens. The deduced protein sequence is most similar to the AChE of Nephotettix cincticeps, with 83% amino acid identity. A phylogenetic analysis with 45 AChEs from 30 species showed that the deduced N. lugens AChE forms a cluster with the other 8 insect AChE2s. Additionally, the hypervariable region and amino acids specific to insect AChE2 also exist in the AChE of N. lugens. The results revealed that the AChE cDNA cloned in this work belongs to the insect AChE2 subgroup, which is orthologous to Drosophila AChE. Comparison of the AChEs between the susceptible and resistant strains revealed that a point mutation, Gly185Ser, is likely responsible for the insensitivity of the AChE to methamidophos in the resistant strain.
Keywords | Acknowledgements
This research was supported by grants from the National Natural Science Foundation of China (30500328).
Abbreviations
AChE, acetylcholinesterase;
GSP, gene specific primer
||
PMC3016861 | 20879913 | Introduction
The family Drosophilidae (Diptera) comprises more than 3,500 described species that occur in a wide range of ecosystems around the world (Bachli 1998). Most genera are found in tropical regions. The genus Drosophila is the most abundant, comprising around 53% of the total species. Many species are endemic to particular regions, and a few are cosmopolitan, dispersed mostly in association with human activity. Studies of Drosophila have contributed to our understanding of principles of basic genetics, molecular biology, population genetics, and evolution. Drosophila is also used for the study of population fluctuations, as these flies are highly sensitive to slight environmental modifications, which are reflected in the size, structure, and ecology of natural populations. It is known that changes in temperature and rainfall affect viability, fertility, developmental time, and other factors that influence the rate of population growth and survival (Torres and Madi-Ravazzi 2006). Rainfall and light intensity also influence the supply of resources, principally in relation to the periods of flowering and fruiting of the various plant resources that provide most of the sites for oviposition and feeding (Brncic et al. 1985). In addition to these physical factors, biotic factors also influence the diversity and abundance of natural populations of Drosophila, including intra- and interspecific relationships such as population density, population age, distribution, competition, and the relationships between drosophilids and their hosts and predators. The number of individuals of a species in a locality is significantly influenced by the presence or absence of another species, especially those that are ecologically related (Putman 1995; Begon 1996). The ability to colonize multiple niches is an indication of the biological success of many species (Torres and Madi-Ravazzi 2006).
Thus the presence or absence of a species in an ecological niche, and its richness or abundance in that area, is an indicator of both the biological and the ecological diversity of that ecosystem. In addition to physical and biotic factors, topography and season also affect animal distribution. Elevation is one important aspect of topography, and animal distribution should be examined from that perspective as well. A few attempts have been made to collect Drosophila at different altitudes, but those data were not analyzed from an ecological perspective (Reddy and Krishnamurthy 1977). Reddy and Krishnamurthy (1977) also held that physical and biotic factors are the sole determinants of animal communities; if that is so, elevation and season should have no influence on animal distribution. In the present study we verify the effect of elevation and season on the Drosophila community.
Furthermore, in the theory of competitive exclusion, Gause suggested that two related species competing for the same resources cannot coexist in the same ecological niche. Laboratory experiments have questioned the validity of the Gause principle (Ayala 1969). The presence of taxonomically or phylogenetically related species in an ecological niche indicates their coexistence, and the absence of such related species suggests competitive exclusion. One aim of the present study is to investigate whether taxonomically or phylogenetically related Drosophila species coexist in nature.
The present analysis of the Drosophila community was done at different altitudes of Chamundi hill, Mysore (India). It is a small mountain (11°36′ N latitude and 76°55′ E longitude) with scrubby forest that was uninhabited about forty years ago, with only a small temple at the hilltop. Over the past thirty years, however, the hill has become a famous tourist spot of Mysore (Karnataka, India), with a small township of about 2,000 inhabitants built at the top and an inflow of many tourists. | Materials and Methods
The altitudinal and seasonal fluctuation in Drosophila fauna was studied in four wild localities of Chamundi hill, Mysore. For this purpose, monthly collections of flies were made at altitudes of 680 m, 780 m, 880 m, and 980 m from February 2005 to January 2006. Both bottle trapping and net sweeping methods were used. For bottle trapping, milk bottles of 250 ml capacity containing smashed ripe banana sprayed with yeast were tied to twigs underneath small bushes at a height of three to five feet above the ground. Five traps were kept at each altitude. The following day the mouth of each bottle was plugged with cotton and removed from the bushes. The flies collected in the bottles were transferred to fresh bottles containing wheat cream agar medium (100 g wheat powder, 120 g raw sugar, 10 g agar agar, and 7 ml propionic acid boiled in 1000 ml water and cooled; Hegde et al. 2001) as food. Net sweeping was done on naturally rotting fruits if available, or on fruits placed beneath shaded areas of the bushes one day before the collection. After each sweep, flies were transferred to bottles containing fresh food. Five sweeps were made at each place to maintain uniformity of collection across localities. The flies were brought to the laboratory, isolated, identified, and sexed. The collected Drosophila flies were assigned to taxonomic groups using several keys (Sturtevant 1927; Patterson and Stone 1952; Throckmorton 1962; Bock 1971). To study seasonal variation, the year was divided into three seasons: premonsoon extending from February to May, monsoon from June to September, and post monsoon from October to January.
Vegetation of the collection sites
At 680 m: The foot of the hill was surrounded by mango orchards along with trees such as Acacia concinna, Acacia catechu, Anacardium occidentale, Bombax ceiba, Breynea restusa, Cassia spectabilis, Celastrus paniculata, Cipadessa baccifera, Clematis trifolia, Dalbergia paniculata, Dioscorea pentaphylla, Ficus religiosa, Ficus bengalensis, Glyrecidia species, Gymnima sylvestres, Hibiscus malva, Ichnocarpus frutescens, Lantana camera, Pongamia glabra, Phyllanthus species, Tamarindus indica, Thunbergia species, Tectona grandis, Sida retusa, and many shrubs including cactus.
The vegetation at 780 m and at 880 m was the same. The major plants found in these localities were Albizzia amara, Andrographis serpellifolia, Argyria species, Bignonia species, Breynea restusa, Bridalia species, Cassia fistula, Cassine glauca, Eucalyptus grandis, Garcinia species, Lantana camera, Phyllanthus microphylla, Sida rhombifolia, Terminalia paniculata, Terminalia tomentosa, Vitex negundo, Zizipus oenoplea, and Zizipus jujuba.
The vegetation at the top of the hill (980 m) includes, Acacia catechu, Anacardium occidentale, Autocarpus integrifolia, Jasminum species, Jatropa curcus, Lantana camera, Leus aspera, Mallotus philippensis, Murraya paniculata, Tamarindus indica, Zizipus jujuba .
Data Analyses
The relationships between altitude, temperature, rainfall, and fly density were assessed through linear regression analysis, with density as the dependent variable and temperature, altitude, and rainfall as independent variables. Seasonal differences in population densities were tested by one-way analysis of variance (ANOVA) using SPSS 10.5. To assess the occurrence of a species qualitatively, the occurrence constancy method (Dijoz 1983) was used. The constancy value (c) was obtained by dividing the number of collections in which a species occurred by the total number of collections, and multiplying the result by 100. Species with index c ≥ 50 were considered constant; accessory species were those with 25 ≤ c < 50; accidental species had c < 25. Species that occurred in only one area were considered exclusive. Cluster analysis as described by Mateus et al. (2006) and Giri et al. (2007) was used to design, analyze, and compare the different Drosophila populations on the hill. In the cluster study, Euclidean distance was chosen to measure the similarity between species, and Ward's strategy (Giri et al. 2007) was followed to unite two clusters; a feature of Euclidean distance is that it is a weighted measurement: the higher the absolute value of the variable, the higher its weight. Drosophila communities were analyzed using ecological indices including the Simpson, Berger-Parker, and Shannon-Wiener indices (Mateus et al. 2006).
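For the hierarchical clustering described above (Euclidean distance united by Ward's strategy), SciPy offers a direct equivalent; this is a sketch with made-up densities, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical matrix: rows = species, columns = densities at 680-980 m.
densities = np.array([
    [310.0, 250.0, 190.0, 120.0],   # a common species
    [290.0, 240.0, 180.0, 110.0],   # another common species
    [ 12.0,   8.0,  25.0,  30.0],   # a rare, high-altitude species
])

# Ward's method requires (and here uses) Euclidean distances.
Z = linkage(densities, method="ward", metric="euclidean")
print(Z)                                       # linkage matrix; dendrogram(Z) draws the tree
print(fcluster(Z, t=2, criterion="maxclust"))  # assign species to 2 clusters
```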
The relationship between the abundance, richness, and diversity of all groups of flies collected throughout the year was assessed by the Simpson (D) and Berger-Parker (1/d) indices (Mateus et al. 2006). The Shannon-Wiener index was also calculated, but the result was the same as for the Berger-Parker index and is not included here. The Simpson index (D), which measures the probability that two individuals randomly selected from a sample belong to the same species, was calculated using the formula D = Σ n(n − 1) / [N(N − 1)], where n = the total number of organisms of a particular species and N = the total number of organisms of all species.
The Berger-Parker index (1/d), which reflects relative abundance, was calculated as 1/d = N / Nmax, where N = the number of individuals of all species and Nmax = the number of individuals in the most common species. | Results
The distribution pattern of Drosophila species at the four altitudes of Chamundi hill is shown in Table 1 . A total of twenty species were encountered on the hill, belonging to 4 subgenera: Sophophora, Drosophila, Dorsilopha, and Scaptodrosophila. Most of the species belonged to the D. melanogaster species group; D. buskii was the only species of the subgenus Dorsilopha. The total number of flies captured throughout the year was 16,671, and the number of species collected was 20. At 680 m, the number of flies collected was the highest (5,464) of all the altitudes, and the fewest were collected at 980 m. D. nasuta, D. neonasuta, D. malerkotliana, D. rajasekari, D. jambulina, and D. bipectinata were the most common species, found at all altitudes, compared to species such as D. anomelani, D. coonorensis, D. punjabiensis, D. mysorensis, and D. gangotrii. D. kikkawii, D. takahashii, D. suzukii, D. repleta, D. immigrons, D. buskii, D. brindavani, D. nigra, and D. mundagensis were not found at all altitudes ( Table 1 ).
The constancy value (c), absolute number (A), and relative abundance (r) of all species at all altitudes are presented in Table 2 . Constant species (c ≥ 50) represented 75% of the collected species (15 out of 20); three species were considered accessory (15%) and two accidental (10%). D. gangotrii and D. coonorensis, considered accessory species, were found at 880 and 980 m but not at 780 m and 680 m. All subgenera had constant species, and the subgenus Sophophora had the most constant species ( Table 2 ). The values of the Simpson and Berger-Parker indices, which indicate the abundance, richness, and diversity of Drosophila flies at the different altitudes of the hill, are given in Table 3 . At the lowest altitude (680 m), Simpson = 0.129 and Berger-Parker = 1.05; at the highest altitude (980 m), Simpson = 0.15 and Berger-Parker = 1.1.
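Index values like these follow directly from the per-species totals; a minimal sketch of both computations (the counts below are invented, not the Table 3 data):

```python
def simpson_D(counts):
    """Simpson's index D = sum n(n-1) / [N(N-1)]; lower D means higher diversity."""
    N = sum(counts)
    return sum(n * (n - 1) for n in counts) / (N * (N - 1))

def berger_parker(counts):
    """Reciprocal Berger-Parker index 1/d = N / Nmax; higher means more diverse."""
    return sum(counts) / max(counts)

counts = [5200, 3100, 1800, 900, 400, 150, 60]  # hypothetical per-species totals
print(f"Simpson D = {simpson_D(counts):.3f}")
print(f"Berger-Parker 1/d = {berger_parker(counts):.2f}")
```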
The number of Drosophila flies decreased with increasing altitude ( Figure 1 ). Application of Student's t-test to altitude and number of flies indicated a significant difference in the population density of Drosophila at different altitudes. The seasonal variation in the population density of Drosophila is depicted in Figure 2 : density was low in the pre-monsoon season, increased during the monsoon, and decreased again in the post-monsoon period. The analysis of variance comparing pre-monsoon, monsoon, and post-monsoon seasons showed significant differences between them (F = 11.20; df = 2, 9; P < 0.004). Table 4 shows the linear regression analysis for temperature (r² = 0.057; P = 0.1; F = 2.79), altitude (r² = 0.025; P = 0.28; F = 1.18), and rainfall (r² = 0.333; P = 0.001; F = 23.0). Density was negatively correlated with altitude and temperature and positively correlated with rainfall.
The cluster analysis performed on the basis of the densities of the different species yielded two clusters ( Figure 3 ). In the first cluster, D. kikkawii, D. coonorensis, D. gangotrii, D. takahashii, D. anomelani, D. punjabiensis, D. mundagensis, and D. mysorensis belong to the montium subgroup, while D. suzukii belongs to the suzukii subgroup; both subgroups belong to the melanogaster species group of the subgenus Sophophora. D. repleta, D. buskii, and D. immigrons of the same cluster belong to the subgenus Drosophila, while D. nigra belongs to the subgenus Scaptodrosophila. D. jambulina (montium subgroup) and D. bipectinata (ananassae subgroup) are linked with this first cluster. In the second cluster, D. rajasekari belongs to the suzukii subgroup of the melanogaster species group of the subgenus Sophophora, while D. neonasuta belongs to the subgenus Drosophila. The D. malerkotliana - D. brindavani subcluster, which joins with D. rajasekari and D. neonasuta, contains species of two different taxonomic categories: D. malerkotliana belongs to the subgenus Sophophora and D. brindavani to the subgenus Scaptodrosophila. D. nasuta, the lone third-tier species joining the second cluster, belongs to the subgenus Drosophila and is taxonomically most closely related to D. neonasuta, a first-tier species of this cluster. Thus most of the species of the first cluster have closer taxonomic relationships than those of the second. | Discussion
In the present study, the density of Drosophila at different altitudes of Chamundi hill decreased with increasing altitude ( Table 1 ); density was highest at 680 m and lowest at 980 m ( Figure 1 ). These results indicate that the Drosophila community is affected by elevation. Wakahama (1962) reported similar altitudinal variation in the distribution of Drosophila on Mt. Dakesan in Japan, finding that total density decreases with increasing altitude. Reddy and Krishnamurthy (1977) also noticed altitudinal variation in Drosophila populations in the Jogimatti hills of Karnataka.
The regression analysis showed negative correlations with temperature and altitude and a positive correlation with rainfall ( Table 4 ), suggesting that rainfall is one of the factors affecting Drosophila population density. The available reports on the density of Drosophila are contradictory (Carson 1965; Reddy and Krishnamurty 1977): some suggest that higher elevations are congenial and others that lower elevations are. The present study, however, demonstrates that altitude and other biotic and abiotic factors, such as rainfall, together determine the Drosophila community in a given ecosystem. The ecological conditions of Chamundi hill change with altitude: the lower altitudes are comparatively cooler and drier, with less rain, while temperature and rainfall increase with increasing altitude except at the top of the hill.
According to Hegde et al. (2000), the growth and size of a population depend on several environmental factors in addition to its genetic structure. Several earlier workers collected more flies of D. nasuta and D. immigrons at high altitudes than at low altitudes. These two species belong to the subgenus Drosophila, and in the present study 1,774 individuals of this subgenus were collected at 880 m.
The fluctuation in the population size of Drosophila through the different seasons reflects the close relationship between population density and the wet and dry seasons. Dobzhansky and Pavan (1950) showed that rainfall appears to have a greater influence on the abundance of Drosophila than temperature. In our study, density was lowest during the pre-monsoon, the hot season, compared to the monsoon season, when rainfall increases. Population density declined from the middle of the post-monsoon period, when cold and dry weather prevail. A number of factors may influence the species richness of a community. They may be classified as (1) geographical (e.g. latitude and longitude); (2) environmental (an environment with a greater variety of niches can host a greater variety of species); and (3) biological (relationships of predation, competition, population density, etc.). These factors may have important consequences for the number of species in a given ecosystem. Changes in the natural environment caused by the alternation of seasons result in changes in the relative frequencies of different species from season to season ( Figure 2 ). In tropical areas, especially in Brazil, changes in the environment are caused by the alternation between the dry and rainy seasons (Dobzhansky and Pavan 1950). It should be emphasized that the months with higher species richness occur during the rainy season. These differences suggest that the capacity to support Drosophila species varies among altitudes. Thus the existence of seasonal variation in Drosophila species is quite evident from the greater number of species collected in the monsoon compared to the pre- and post-monsoon periods. In temperate regions, by contrast, population densities decline to extremely low levels during the cold winter months, indicating the influence of temperature on the regulation of population size, as is true for several Drosophila species inhabiting temperate regions (Patterson 1943; Dobzhansky and Pavan 1950; William and Miller 1952; Wakahama 1961). It is thus evident that Drosophila community structure is affected by physical and biotic factors in addition to physiographic factors.
Table 2 shows that D. anomelani, D. punjabiensis, D. repleta, D. immigrons, D. nigra, D. mundagensis, D. mysorensis, D. buskii, D. jambulina, D. bipectinata, D. nasuta, D. malerkotliana, D. rajasekari, D. brindavani, and D. neonasuta are constant species that are common on the hill. D. coonorensis, D. gangotrii, and D. takahashii are accessory species, while D. kikkawii and D. suzukii are accidental species. In the cluster analysis, both the accidental and the accessory species occupy the first cluster ( Figure 3 ). Furthermore, in the first cluster, all species except D. immigrons, D. buskii, D. nigra, and D. repleta are morphologically and phylogenetically related and hence are classified in the single subgenus Sophophora. The study therefore indicates the coexistence of species having similar ecological preferences, supporting the view of Ayala (1969). In the second cluster, species belonging to different taxa occupy different subclusters but join the main cluster at different tiers.
In the Simpson index (D), 0 represents infinite diversity and 1 no diversity; i.e., the greater the value of D, the lower the diversity, whereas the reverse is true for the Berger-Parker and Shannon-Wiener indices (Ludwig and Reynold 1988; Mateus et al. 2006). Applying these indices to the flies at different altitudes of Chamundi hill shows that the lower altitude of 680 m has a lower value of D and a higher value of 1/d, indicating more biodiversity compared to the higher altitude of 980 m ( Table 3 ). Although these indices revealed greater diversity at 680 m, more species were collected at 980 m. The reason for this is easily understood by examining the quantity and dominance of each species at each altitude, since such an index combines two functions: the number of species and uniformity, i.e. the number of individuals present in each species (Ludwig and Reynold 1988; Torres and Madi-Ravazzi 2006). Again, this may be correlated with the vegetation and flowering plants at different altitudes. Thus, the present ecodistributional analysis of Drosophila on Chamundi hill makes clear that the distributional pattern of a species or related group of species is uneven in space and time. D. malerkotliana and D. nasuta could be considered dominant species, as they were recorded at all altitudes in high numbers. | Associate Editor: Megha Parajulee was editor of this paper.
A year-long study was conducted to analyze the altitudinal and seasonal variation in populations of Drosophila (Diptera: Drosophilidae) on Chamundi hill of Mysore, Karnataka State, India. A total of 16,671 Drosophila flies belonging to 20 species of 4 subgenera were collected at altitudes of 680 m, 780 m, 880 m, and 980 m. The subgenus Sophophora was predominant with 14 species, and the subgenus Dorsilopha was least represented, with only a single species. Cluster analysis and constancy methods were used to analyze species occurrence qualitatively. Altitudinal changes in population density and the relative abundance of the different species in different seasons were also studied. The diversity of the Drosophila community was assessed by applying the Simpson and Berger-Parker indices: at 680 m the Simpson index was low (0.129), while at 980 m the Berger-Parker index was high (1.1). Linear regression showed that the Drosophila community was positively correlated with rainfall but not with elevation. Furthermore, the density of Drosophila changed significantly with season (F = 11.20; df = 2, 9; P < 0.004). The distributional pattern of a species or related group of species was uneven in space and time. D. malerkotliana and D. nasuta were found at all altitudes and can be considered dominant species.
Keywords | Acknowledgments
We thank the Chairman, Department of Studies in Zoology, Manasagangotri, Mysore, India, for facilities and University Grants Commission, New Delhi for Financial support under Departmental Special Assistance Programme. | CC BY | no | 2022-01-12 16:13:44 | J Insect Sci. 2010 Aug 3; 10:123 | oa_package/f6/82/PMC3016861.tar.gz |
||
PMC3016862 | 21073344 | Introduction
The soybean aphid, Aphis glycines Matsumura (Hemiptera: Aphididae), was first discovered in the United States in 2000 and has spread throughout the soybean, Glycine max L. (Fabales: Fabaceae), growing regions of the North Central United States ( Venette and Ragsdale 2004 ). By 2004, soybean aphid was present in 21 states and two Canadian provinces, encompassing 80% of the soybean production area in North America. The economic threshold of the soybean aphid was estimated to be 273 aphids per plant, assuming a 7 day lead time to reach the economic injury level (674 aphids per plant) ( Ragsdale et al. 2007 ). The soybean aphid has caused significant yield losses in northern soybean-producing states including Illinois ( NSRL 2001 ), Iowa ( Rice et al. 2004 ), Michigan ( DiFonzo and Hines 2002 ) and Minnesota ( MacRae and Glogoza 2005 ).
Observations from Asia indicate that soybean aphid populations were extremely low in environments similar to the North Central United States (Fox et al. 2004). Soybean aphid populations in Asia are believed to be under the control of a number of natural enemies (Van Den Berg et al. 1997; Rongcai et al. 1994; Miao et al. 2007; Han 1997; Liu et al. 2004; Chang et al. 1994; Ma et al. 1986). In China, Wang and Ba (1998) identified coccinellids as principal agents of soybean aphid suppression owing to their high predation rates and high populations.
Studies conducted in the Midwest identified key predators of the soybean aphid, including the insidious flower bug, Orius insidiosus Say (Hemiptera: Anthocoridae), and the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), which can account for over 85% of all predators in some environments (Rutledge et al. 2004; Fox et al. 2004). Harwood et al. (2007) found little intraguild predation between O. insidiosus and H. axyridis. The presence of predatory insects may prevent soybean aphid population growth and also reduce established populations (Van Den Berg et al. 1997; Brown et al. 2003; Fox et al. 2004; Rutledge and O'Neil 2005; Costamagna and Landis 2006). Predatory insects that respond early in the season, and in large numbers, may be more successful in this regard (Fox et al. 2005; Brosius et al. 2007; Yoo and O'Neil 2009). In some Midwestern states, ambient levels of predatory insects are capable of controlling soybean aphid populations (Costamagna et al. 2007a). Orius insidiosus is the most common predaceous insect in Missouri soybean (Barry 1973; Marston et al. 1979) and may be responsible for suppressing soybean aphid populations below economic levels.
Soybean thrips, Neohydatothrips variabilis (Beach) (Thysanoptera: Thripidae), are an important food source for O. insidiosus, along with the soybean aphid (Harwood et al. 2007; Butler and O'Neil 2008). Before the arrival of the soybean aphid, it was generally accepted that the soybean thrips was the primary prey species of O. insidiosus (Marston et al. 1979). Thrips arrive early in the season (unifoliate stage, V1) in both early- and late-planted soybean, reproduce rapidly, and are abundant by the time O. insidiosus arrives (V5–V8 for May plantings; V2–V4 for June plantings) (Isenhour and Marston 1981b). This relationship may have changed with the introduction of the soybean aphid. The soybean aphid is an adequate prey item for O. insidiosus, and a combination of soybean aphid and thrips resulted in increased survival, development, and fecundity of O. insidiosus relative to thrips alone (Butler and O'Neil 2007a; Butler and O'Neil 2007b). However, the presence of thrips has been shown to decrease predation by O. insidiosus on soybean aphid (Desneux and O'Neil 2008).
Along with predation, plant properties affect soybean aphid populations (i.e. bottom-up control of aphid numbers). Potassium deficient soybeans have higher soybean aphid populations, possibly due to an increase in free nitrogen in plant phloem or a change in the composition of amino acids in the phloem ( Myers and Gratton 2006 ; Walter and DiFonzo 2007 ). Plant phenology may also significantly impact soybean aphid population growth, as was seen with Myzus persicae and Aphis fabae ( Williams et al. 1999 ; Van Den Berg et al. 1997 ; Kift et al. 1998 ; Costamagna et al. 2007b ).
The exclusion of predators by physical barriers, followed by observations of the prey population, is a method commonly used to assess the importance of predators on a population (i.e. top-down control of aphid numbers) ( Luck et al. 1988 ). Several exclusion studies have been conducted to evaluate the role of predators in the establishment and spread of soybean aphid ( Van Den Berg et al. 1997 ; Liu et al. 2004 ; Fox et al. 2004 ; Fox et al. 2005 ; Desneux et al. 2006 ; Costamagna and Landis 2006 ; Miao et al. 2007 ; Gardiner and Landis 2007 ; Costamagna et al. 2008 ; Chacón et al. 2008 ). All of these studies indicated that predators play a role in suppression of soybean aphid populations. Whenever resident predators are capable of suppressing soybean aphid populations below threshold, insecticide applications can be avoided.
Despite the presence of soybean aphid in southern soybean producing states such as Missouri, yield losses have been limited. Some speculate that soybean aphid rarely reaches economic threshold in Missouri because high summer temperatures negatively affect aphid development. However, this speculation was not supported by preliminary research, as soybean aphid reached outbreak levels in exclusion cages in central Missouri during the summers of 2001 and 2002. Within a three-week period, soybean aphid populations increased from 5–10 per plant to more than 5,000 per plant (T.L.C., unpublished data). These data suggest that temperature was not the primary reason populations remain low in Missouri. It is more likely that resident predators are responsible, as ambient levels of predatory insects are capable of controlling soybean aphids in some Midwestern states ( Costamagna et al. 2007a ). The purpose of this research was to evaluate the predator complex inhabiting central Missouri soybean fields and to determine their impact on soybean aphid populations at different plant growth stages. This design encompasses top-down (predator exclusion) and bottom-up (plant phenology, i.e. nutritional quality) factors affecting soybean aphid populations. | Materials and Methods
Experimental Design
The study was conducted at the University of Missouri South Farms in the summer of 2004. South Farms (92° 17′ W, ≈38.9° N; elevation ≈ 272 m) is located approximately 5.8 km southeast of the University of Missouri campus. Cages were 1.5 m apart, and replications were 6 m apart within the soybean field. Fields were cultivated using reduced primary tillage (disc), cages were placed, and soybean variety DKB 38–52 (Asgrow® Roundup Ready®, Monsanto Company, www.monsanto.com ) was planted at six seeds per cage on 22 June 2004. A non-standard planting density was used to facilitate sampling by observers. Cages and nearby plots were kept weed free by application of Roundup WeatherMAX® (glyphosate) at a rate of 864 g (AI)/ha (Monsanto) on 17 July and 13 August. The experiment was set up as a randomized complete block split-plot design in a 4 × 3 (infestation date × mesh size) factorial arrangement replicated four times, with mesh as the main plot and infestation date as the subplot ( Figure 1 ). A no-mesh treatment was included as a control; however, due to herbivory this treatment was dropped from the analyses. In addition, cages were sampled over time, requiring a repeated measures analysis.
Predator Exclusion Trials
Aphidophagous predators (Coccinellidae, Syrphidae, Chrysopidae, and Anthocoridae) and soybean aphid densities were monitored throughout the season. Cage frames were constructed of PVC pipe and fittings (1.3 cm outside diameter; Lasco Fittings, Inc., www.lascofittings.com ). Cages were 1 m³, with approximately 10 cm buried in the soil and secured with 10 cm wire landscape staples ( Figure 2 ). Three sizes of mesh were used: Econet S (300 squares per cm), Econet L (140 squares per cm) (LS Climate Control Pty Ltd., www.svensson.com.au ), and mosquito netting (6 squares per cm) (Econet specifications: http://insect-screen.usgr.com/econet-insectscreen.html ). Mesh was sewn to fit the cage frame with excess material at the bottom to allow burial; the mesh was buried in the soil and secured with 10 cm wire landscape staples. Access was provided by Velcro® closures along the top and side of one panel.
Mesh sizes were chosen based on predator size. Small mesh (Econet S) was selected to exclude all arthropods, even mites. Medium mesh (Econet L) was selected to exclude all insects larger than thrips and whiteflies. Large mesh (mosquito netting) was selected to exclude all insects larger than O. insidiosus. However, in all exclusion cages, predators that should have been excluded were sometimes present. This occurred because adult insects (particularly Coccinellidae, Chrysopidae, and Syrphidae) laid eggs on the outside of the mesh and neonate larvae crawled through. Whenever this occurred, the number of predators was recorded and they were removed from the cage.
Aphid Infestation
Each exclusion cage was infested with 15 apterous soybean aphid nymphs < 48 h old, obtained using the following procedure: alate soybean aphids were placed on excised soybean leaves in Petri dishes with moist filter paper for 48 hours; the alates were then removed, and the remaining nymphs were transferred with a camel's hair brush to infest the exclusion cages. This was done to ensure an even age of nymphs and also to mimic an alate's behavior of depositing nymphs and then moving to another plant, as suggested by Liu et al. (2004). Cages were infested at three different plant growth stages: vegetative (V5), beginning bloom (R1), and beginning pod set (R3). Infestation times were selected to simulate different arrival times of migrant soybean aphids. Nymphs were dispersed among the six plants by placing them onto the top expanded trifoliates.
Data were collected at approximately seven-day intervals from 28 July until 29 September. On each sample date, temperature and relative humidity inside each cage were measured at canopy height by inserting a probe (EasyView 20; Extech Instruments, www.extech.com ) through the Velcro® before opening the cage. The number of thrips per leaf was estimated on a scale of zero to four: 0 = 0 thrips per leaf, 1 = 1–10, 2 = 11–25, 3 = 26–75, and 4 = >75 thrips per leaf. Early in the season, soybean aphids were counted directly. Once populations became large, soybean aphid numbers were estimated by sampling several leaves, averaging the number of aphids per leaf, and multiplying by the number of leaves on the plant. The method of McCornack et al. (2008), although slightly different from ours, was found to be highly correlated with whole-plant soybean aphid numbers. Predatory insects were counted directly; predators that should not have been present were then removed. Additionally, the height of each plant in the cage was measured, and plant development was recorded using the method of Fehr et al. (1971).
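The whole-plant estimate described above is simple arithmetic; a one-function sketch with invented counts:

```python
def whole_plant_estimate(aphids_on_sampled_leaves, leaves_on_plant):
    """Mean aphids per sampled leaf multiplied by the plant's leaf count."""
    mean = sum(aphids_on_sampled_leaves) / len(aphids_on_sampled_leaves)
    return mean * leaves_on_plant

# e.g. three sampled leaves with 42, 35, and 51 aphids on an 18-leaf plant
print(whole_plant_estimate([42, 35, 51], 18))  # 768.0
```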
Statistical Analysis
Soybean aphid and predator counts were square-root transformed (√(x + 1)) prior to analysis to meet the model's assumptions (Snedecor and Cochran 1989). Data were analyzed using repeated measures PROC MIXED (SAS 2001), as outlined by Littell et al. (1998). The ANOVA was a randomized complete block split plot in space and time, as outlined by Steel and Torrie (1980). Blocks represented field position, the main plot was mesh, and the subplot was infestation date. The repeated measure was sampling over time in each cage. Rep within mesh × infestation was used as the denominator of F for testing infestation and mesh × infestation. Rep × weeks after infestation (WAI) was used as the denominator of F for testing WAI. All other interactions were tested using the residual. Differences between means were determined using Fisher's least significant difference test. Because of differences in the number of sampling dates between infestation times (V5, 10; R1, 8; R3, 4), two separate analyses were performed ( Table 1 ). One analysis included all four infestations (V5, R1, R3, and uninfested control) and the first four WAI. Another analysis included three infestations (V5, R1, and uninfested control) and weeks 5–8 WAI. Samples from dates 9 and 10 WAI were not included because only comparisons between the V5 infestation and the uninfested control were possible. For treatments that exceeded the economic threshold, time to threshold was compared using PROC MIXED. Analyses of temperature, relative humidity, and plant height were performed similarly; however, all sample dates were used, and the only treatments considered were mesh type and WAI.
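For readers without SAS, the transform and a rough analogue of the mixed model can be sketched with pandas and statsmodels; note that a single random intercept per cage only approximates the full split-plot error structure of PROC MIXED, and the file and column names here are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per cage per sampling week.
df = pd.read_csv("aphid_counts.csv")  # columns: aphids, mesh, infestation, week, cage
df["sqrt_aphids"] = np.sqrt(df["aphids"] + 1)  # variance-stabilizing transform

# Random intercept per cage stands in for the repeated-measures correlation.
model = smf.mixedlm("sqrt_aphids ~ C(mesh) * C(infestation) * C(week)",
                    data=df, groups=df["cage"])
print(model.fit().summary())
```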
The rate of increase of soybean aphid populations in cages of different mesh sizes was analyzed using a program created by MR Ellersieck (available on request, [email protected]). Slopes from initial infestation to peak population were determined and compared. Peak dates for V5, R1, R3, and uninfested control were 1 September, 29 September, 22 September, and 22 September, respectively. One-degree-of-freedom polynomial contrasts were conducted to test for differences between soybean aphid population slopes (P ≤ 0.05).
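The slope-comparison program is available only on request, so as an illustration the idea can be approximated as below: fit a per-cage rate of increase from infestation to peak on a log scale, then compare mean slopes between treatments (a t-test here stands in for the one-degree-of-freedom contrast). The data are invented.

    # Hypothetical sketch of the slope comparison described above.
    import numpy as np
    from scipy import stats

    def rate_of_increase(weeks, counts):
        """Slope of log(aphids + 1) on time, infestation to peak."""
        peak = int(np.argmax(counts))
        y = np.log(np.asarray(counts[:peak + 1], dtype=float) + 1.0)
        slope, _intercept = np.polyfit(weeks[:peak + 1], y, 1)
        return slope

    weeks = np.arange(6)
    small_mesh = [[15, 60, 240, 900, 2800, 2500],
                  [15, 70, 300, 1100, 3200, 3000]]
    large_mesh = [[15, 30, 70, 160, 400, 380],
                  [15, 25, 80, 180, 450, 430]]

    s = [rate_of_increase(weeks, c) for c in small_mesh]
    l = [rate_of_increase(weeks, c) for c in large_mesh]
    t, p = stats.ttest_ind(s, l)
    print(f"t = {t:.2f}, p = {p:.4f}")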
A stepwise regression was also performed to predict O. insidiosus populations as they relate to thrips and soybean aphid populations. As before, two separate analyses were performed. One analysis included all four infestations (V5, R1, R3, and uninfested control) and the first four WAI (Table 1). Another analysis included three infestations (V5, R1, and uninfested control) and WAI 5–8. Sample dates 9 and 10 WAI were not included because only comparisons between the V5 infestation and uninfested control were possible. Small, medium, and large mesh treatments were included.

Results
The rate of increase for soybean aphid populations differed significantly with treatment and infestation date (Table 2). Among cages infested at V5, aphid populations in cages with small mesh (excluding all predators) had a significantly higher (P ≤ 0.05) rate of increase than aphid populations in cages with medium or large mesh. Among cages infested at R1, aphid populations in cages with small and medium mesh had significantly higher (P ≤ 0.05) rates of increase compared to aphid populations in cages with large mesh. Cages infested at R3 maintained very low populations of soybean aphid despite infestation, as did uninfested cages. Uninfested cages with large and medium mesh had higher aphid populations than cages with small mesh; however, some aphids were observed in uninfested small mesh exclusion cages. Cages were 1.5 m apart, blocks were 6 m apart, and all areas between cages were maintained weed-free, so it is likely that stray aphids were accidentally introduced by the observer from other cages.
Predator exclusion significantly affected ( P < 0.05) the length of time from aphid infestation until economic threshold (250 aphids/plant or ∼1500 aphids/cage) was reached for the V5 and R1 infestations ( Figure 3 ). Among cages infested at V5, economically significant populations of soybean aphid were established two, three, and four and a half weeks after infestation of small, medium and large mesh cages, respectively. Among cages infested at R1, economically significant populations of soybean aphid were established five and six weeks after infestation of small and medium mesh cages. No cages infested at R3 or uninfested cages reached the economic threshold.
Throughout WAI 1–4, O. insidiosus numbers were variable and no clear pattern was discernible. In WAI 5–8, more O. insidiosus were found in cages infested at V5 than in any other cage type (F = 3.89; df = 2, 28; P = 0.0395) (Figure 4). The most abundant predators observed during the study were O. insidiosus and several coccinellid species (Table 3). Orius insidiosus adults and immatures comprised 39.5% of observed predators, while coccinellid adults and immatures comprised 37.4% (Figure 5). Harmonia axyridis (Pallas) was the most prevalent coccinellid species observed, whereas Coccinella septempunctata (L.) was observed rarely. Syrphidae adults and immatures (9.6%) and Chrysopidae adults and immatures (4.2%) were also observed, but to a lesser extent.
During WAI 1–4, thrips numbers were a better predictor of O. insidiosus numbers than soybean aphid numbers (O. insidiosus = 1.15 + 0.378 × thrips; R² = 0.2185). In WAI 5–8, both thrips and soybean aphid numbers were important in predicting the number of O. insidiosus (O. insidiosus = 1.25 + 0.244 × thrips − 0.049 × aphids; R² = 0.1781).
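The fitted equations above can be applied directly to sanity-check predictions; the coefficients below are taken from the text, while the function names are ours.

    # Predicted O. insidiosus per plant from the regressions reported above.
    def orius_early(thrips):                  # WAI 1-4: thrips only
        return 1.15 + 0.378 * thrips

    def orius_late(thrips, aphids):           # WAI 5-8: thrips and aphids
        return 1.25 + 0.244 * thrips - 0.049 * aphids

    print(orius_early(10))       # ~4.9
    print(orius_late(10, 50))    # ~1.2; the aphid term enters negatively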
Cage Effects
Temperature differed significantly between mesh types over the sampling period (F = 24.29; df = 27, 282; P < 0.0001) (Table 1); mean temperature varied among mesh treatments by ±1.3° C on average (Figure 6). Relative humidity also differed significantly throughout the sampling period (F = 27.08; df = 27, 282; P < 0.0001) (Table 1), varying among mesh treatments by ±3.2% on average. Plant height also differed significantly over the sampling period (F = 79.72; df = 27, 270; P < 0.0001) (Figure 7, Table 1).

Discussion
Thrips were the primary food source of O. insidiosus before the arrival of soybean aphid in the United States ( Isenhour and Marston 1981a ; Isenhour and Yeargan 1981 ). Research by Yoo and O'Neil ( 2009 ) suggests that thrips may serve as a food source for O. insidiosus early in the season, before the arrival of soybean aphid, thus assuring that O. insidiosus is present when soybean aphid is becoming established. Our research supports this theory, as thrips numbers were a much better predictor of O. insidiosus numbers early in the infestation (WAI 1–4). Later, as soybean aphid became established, both aphids and thrips were important in predicting O. insidiosus numbers.
Both top-down (predation) and bottom-up (plant stage) effects were found to impact soybean aphid population growth; predatory insects and increasing plant maturity decreased the rate of soybean aphid population growth ( Figure 3 , Table 2 ). Similar results were found by previous researchers, validating the importance of these effects on soybean aphid population growth ( Fox et al. 2004 ; Fox et al. 2005 ; Desneux et al. 2006 ; Costamagna and Landis 2006 ; Costamagna et al. 2007a ; Brosius et al. 2007 ; Gardiner et al. 2009 ).
Venette and Ragsdale (2004) suggested that Missouri would provide a suitable climate for soybean aphid, but economic populations have not occurred in the state. However, in total predator exclusion (small mesh) cages, soybean aphid populations exceeded the economic threshold (Figure 3), suggesting that no intrinsic differences between the environments of Missouri and other Midwestern states limit economic populations. Researchers such as Fox et al. (2004, 2005) and Rutledge et al. (2004) determined that predation had a significant impact on soybean aphid establishment and population growth. Our results concur with theirs and indicate that when smaller predators (mainly O. insidiosus) were allowed access to soybean aphid populations, as in large mesh cages, aphid populations were delayed from reaching the economic threshold (Figure 3). The role of resident predatory insects should be considered when making management decisions. Like other aphid species, the soybean aphid has been shown to increase rapidly in numbers following the elimination of predacious insects by insecticide application (Sun et al. 2000; Myers et al. 2005). Both O. insidiosus and coccinellids were present throughout the experiment and acted to suppress soybean aphid population growth.
Field experiments are commonly less than perfect due to environmental uncertainties. One problem encountered during this experiment was the presence of predatory insects in cages from which they should have been excluded. This occurred because adult predators would lay eggs on the outside of the mesh and the immatures were able to crawl through, or adults simply entered through an unnoticed opening in the Velcro®. This was a particular problem with coccinellids in the V5 infestation at WAI 7–9 (Figure 5). R1, R3, and uninfested cages had very low numbers of coccinellids, as expected. There was no significant difference in the number of coccinellids between mesh types, indicating that cages were equally 'leaky'. Orius insidiosus was effectively kept out of the small mesh cages; however, there was no significant difference in the number of O. insidiosus between the large mesh (which allowed O. insidiosus) and medium mesh (which should have excluded it) cages.
Liu et al. (2004) proposed three hypotheses to explain the growth of aphid populations in exclusion cages: 1) microclimates may differ and thus affect aphid reproduction or survival; 2) cages may reduce aphid emigration; and 3) cages may reduce aphid mortality by excluding predators.
The plant growth stages used in this experiment may have affected soybean aphid establishment, survival, and subsequent reproduction. The effect of plant phenology on soybean aphid population growth has not been studied, and studies involving other aphid species are mixed on the impact of plant maturation on aphid population growth (Williams et al. 1999; Honek and Martinkova 2004). The decreasing nutritional value of maturing plants could explain why such low aphid populations were recorded for the late (R3) infestation (Figure 3); however, since different plant phenologies were not tested simultaneously (i.e., by different planting dates), it is impossible to rule out the possibility that seasonal effects (i.e., differences in day length or temperature) were partly responsible. The data do suggest that soybean aphids establishing late in the season are less likely to need to be controlled with insecticide applications.
Cage material characteristics may have affected soybean aphid population growth by altering the microclimate. Econet S and Econet L, used in cages with small and medium mesh, reduce available light and airflow. Econet S reduces airflow by 45% and available light by 9%, while Econet L reduces airflow by 5% and available light by 16% (U.S. Global Resources). These characteristics could reduce aphid mortality due to rain and wind compared to cages with large mesh. Heavy rainfall has been shown to be an important mortality factor in other aphid species (Shull 1925; Hughes 1963; Maelzer 1977; Singh 1982; Walker et al. 1984). During the experiment, the Bradford Research and Extension Center reported only three days with rainfall greater than 2.5 cm and seven days with rainfall greater than 1.25 cm. Only three days with rainfall greater than 1.25 cm and winds greater than 48 km/hr were recorded: August 4, August 24, and August 25. Thus, the impact of rain and wind seems minimal over the time of the experiment. However, the reduction in available light may have impacted the growth rate of the caged plants, though no difference in plant height was observed (Figure 7).
The optimum temperature range for soybean aphid development is reported to be between 22 and 27° C; above 32° C, developmental time increases and survival rate decreases (McCornack et al. 2004; Hirano et al. 1996). Temperatures inside the cages never rose above 32° C, and the cages with the highest temperatures also had the highest numbers of aphids, suggesting no negative effects of high temperature in this study. Given the small differences in temperature, relative humidity, and plant height among cages, it seems that cage environment had little effect on soybean aphid populations.
The soybean aphid is a competent flyer and will take flight under a wide range of environmental conditions (Zhang et al. 2008). Cages would have prevented soybean aphid emigration, potentially increasing soybean aphid populations inside cages. However, large numbers of alate aphids were not observed until late September, when plants were in R5 (beginning seed set). A similar pattern of alate production was observed by Hodgson et al. (2005). Because this was the last sampling date, it is unlikely that reinfestation of plants by alatae affected aphid populations during the course of the study.
Soybean aphid population growth is influenced by top-down (predation) and bottom-up (plant phenology) forces. Our research confirms that the presence of predatory insects decreases the rate of soybean aphid population increase. Often, this resulted in the soybean aphid population not reaching the economic threshold. Also, soybean aphid population growth was reduced on plants in later growth stages (reproductive vs. vegetative). These results suggest that predatory insect populations should be conserved (i.e. avoid insecticide application if possible) in young soybean fields to slow soybean aphid population growth, and that soybean aphid populations establishing at later plant growth stages would not need insecticide treatments.
Although soybean aphid, Aphis glycines Matsumura (Hemiptera: Aphididae), has caused economic damage in several Midwestern states, growers in Missouri have experienced relatively minor damage. To evaluate whether existing predatory insect populations are capable of suppressing or preventing soybean aphid population growth or establishment in Missouri, a predator exclusion study was conducted to gauge the efficacy of predator populations. Three levels of predator exclusion were used: one that excluded all insects (small mesh), one that excluded insects larger than thrips (medium mesh), and one that excluded insects larger than Orius insidiosus (Say) (Hemiptera: Anthocoridae), a principal predator (large mesh). Along with manipulating predator exposure, the timing of aphid arrival (infestation) was manipulated. Three infestation times were studied: vegetative (V5), beginning bloom (R1), and beginning pod set (R3). Timing of aphid and predator arrival in a soybean field may affect the soybean aphid's ability to establish and begin reproducing. Cages infested at V5 and with complete predator exclusion reached the economic threshold within two weeks, while cages with predators reached the economic threshold in four and a half weeks. Cages infested at R1 with complete predator exclusion reached the economic threshold within five weeks; cages with predators reached it within six weeks. Cages infested at R3 never reached the threshold (with or without predators). The predator population in Missouri seems robust, capable of depressing the growth of soybean aphid populations once established, and even of preventing establishment when the aphid arrives late in the field.
Acknowledgements
Many thanks to B. Hibbard for providing comments on the manuscript. Thanks to E. Lindroth, F. Lloyd, and C. Meinhardt for assistance in sampling.
Abbreviations
R1: beginning bloom;
R3: beginning pod set;
V5: vegetative;
WAI: weeks after infestation
Introduction
The beet armyworm, Spodoptera exigua (Hübner) (Lepidoptera: Noctuidae), is an important pest of numerous crops and causes economic damage in China. Historically, S. exigua has been managed as part of the general control of pests of cotton, Gossypium hirsutum L. (Malvales: Malvaceae), and it initially received little attention following the adoption of control procedures for cotton varieties in China. The failure of chemical measures to control this insect has shifted the emphasis toward effective implementation of an integrated pest management (IPM) program.
Cotton cultivars with high gossypol levels are considered resistant to herbivorous insects and have been adopted by cotton growers for the management of herbivorous insects (Gao et al. 2008). Gossypol, a phenolic sesquiterpenoid aldehyde, is an important allelochemical occurring in the gossypol glands of cotton cultivars. This allelochemical exhibits antibiosis to many pests and contributes to the host-plant resistance of cotton varieties with gossypol glands (Zhou 1991). Du et al. (2004) and Gao et al. (2008) indicated that high gossypol levels in the cotton plant had negative effects on Aphis gossypii and a positive effect on the growth and development of Propylaea japonica at the third trophic level. Chen et al. (1997a, 1997b) reported that wheat plants with high levels of resistance to aphids had high amounts of phenolics and tannin. Wu et al. (unpublished data) found that different cotton gossypol levels significantly affected enzyme activities in S. exigua. Other researchers have shown a relationship between gossypol level and the population abundance of herbivorous insects (Bottger et al. 1964; Meng and Li 1999).
Most published studies of the responses of herbivorous insects to botanical secondary substances have been short-term experiments (e.g., one generation, or a certain instar) measuring development and consumption rates (Du et al. 2004; Gao et al. 2008). However, experiments conducted over more than one generation have revealed differences in responses between generations and have also shown that the outcome can depend on the conditions of host-plant growth (Brooks and Whittaker 1998, 1999; Wu et al. 2006). Despite the well-recognized need for additional long-term studies on this topic (Lindroth et al. 1995; Wu et al. 2008), few studies have evaluated the development and food utilization of herbivorous insects over multiple generations (Wu et al. 2007a). Wu et al. (2006) reported that a significantly longer larval life-span for the third generation and lower pupal weights for all generations were observed in Helicoverpa armigera fed on milky grains of spring wheat grown under elevated CO2. The performance of the leaf beetle, Gastrophysa viridula, was only slightly affected by elevated CO2 after three consecutive generations fed on Rumex plants, despite measurable declines in indices of foliage quality and smaller eggs at the end of the second generation, which led to fewer and smaller larvae in the third generation (Brooks et al. 1998). Chen et al. (2007) reported increased cotton bollworm larval life-span, food consumption, relative consumption rate (RCR), and approximate digestibility (AD) on transgenic Bt cotton. Results from the above experiments show that further multigenerational studies are needed to predict the development and consumption rates of herbivorous insects at the individual and population levels.
As a serious pest of cotton, S. exigua was added to a list of outbreak insect pests in China in 2001 (Shi et al. 2006). Despite the recognized need for long-term studies, few studies published to date have been conducted for more than one generation to evaluate population dynamics (Wu et al. 2007a). In the present study, three successive generations of S. exigua fed on three cotton cultivars were reared, and the objective was to evaluate the cumulative effect of gossypol on the development, food utilization, and population performance of S. exigua over three successive generations.

Materials and Methods
Cotton variety and growth condition
The three cotton cultivars used in the study were ZMS13, HZ401, and M9101, with gossypol contents of 0.06%, 0.44%, and 1.12%, respectively (Du et al. 2004; Gao et al. 2008). The three cotton cultivars were planted in plastic pots (15 cm diameter, 13 cm height) in a climate-controlled chamber. The temperature was maintained at 28 ± 1° C, and relative humidity was maintained at 70–80%. For each cotton cultivar, 120 pots were randomly placed in the chamber and re-randomized once a week to minimize position effects. Soil pH was 7.1, organic matter 14.5%, available N 397.7 mg kg⁻¹ (hydrolytic N, 1 N NaOH hydrolysis), available P 269.0 mg kg⁻¹ (0.5 M NaHCO3 extraction), and available K 262.2 mg kg⁻¹ (1 N CH3COONH4 extraction). Water (200 ml) was added to each pot once every three days after cotton seedling emergence. No chemical fertilizers or insecticides were used throughout the experiment.
Insect feeding
The egg masses of S. exigua were obtained from the Insect Virology Laboratory, Institute of Zoology, Chinese Academy of Sciences, and hatched in a growth chamber (PRX-500D-30; Haishu Safe Apparatus, http://nbhssfsy.cn.china.cn/). The chamber was maintained at 75 ± 5% RH, 28 ± 0.5° C, and 14:10 L:D, with 30,000 lx of active radiation supplied by thirty-nine 26-W fluorescent lamps.
At the 8–10 leaf stage of the three cotton cultivars (≈ 40–50 d after planting), leaves were gathered and supplied as food to neonate larvae kept under the same temperature, RH, and photoperiod conditions detailed above. Twenty insects were reared individually in glass dishes (75 mm in diameter), with four replications per treatment. Immediately prior to the rearing trials, ten leaves were randomly selected, with three replicates for each cotton cultivar, and oven-dried at 80° C for 72 h to calculate the proportion of dry matter to water content. Fresh leaves with petioles were provided daily to S. exigua larvae. Each day, the frass and remaining portions of cotton leaves were collected and oven-dried at 80° C for 72 h. Fresh body weights were measured every other day. Larval development was calculated as the period from hatch to pupation. Pupal weight was measured ∼12 h after pupation, and the rate of pupation was recorded. For each treatment, survival rate was calculated as the number of moths emerged divided by the number of first instar larvae.
After emergence, adults were sexed and recorded to calculate the proportion of females among the total number of adults. The emerging moths were enclosed in a cage (30 × 30 × 40 cm) for two days to allow mating, and then paired (one female and one male) in a plastic cup (9 cm diameter, 15 cm height) with a net cover of absorbent cotton yarn for oviposition. The number of eggs laid in each cup was counted daily, and the cotton yarn was replaced daily. The eggs were tracked for each female and kept in artificial climate incubators (PRX-500D-30; Haishu Safe Apparatus, www.nbhssfsy.cn.china.cn) to record hatching. Eighty neonates from each generation were followed through the complete life cycle for three successive generations, following the same rearing protocol as the first generation.
Foliar chemical compositions assays of cotton plants
Comparable leaves to those in the feeding studies were randomly collected at the same time, placed in liquid nitrogen for 3 h, and then transferred to a -20° C refrigerator for later use in chemical composition assays. Five leaves with three replicates were taken for each cotton cultivar. Leaf water content, as a proportion of fresh weight, was calculated after drying at 80° C for 72 h. Protein, total amino acids, and free fatty acids were assayed according to manufacturer's instructions (Nanjing Jiancheng Ltd. Co., www.njjcbio.com ). Nitrogen content was measured using a CNH analyzer (Model ANCA-nt; Europa Elemental Instruments).
Development, consumption, and food utilization indices of S. exigua
Growth and development indices. Four indices were used to measure the growth and development of S. exigua: larval developmental time, pupal weight, survival rate, and fecundity.
Indices for larval consumption and utilization. The conventional, ratio-based nutritional indices, including relative growth rate (RGR, mg/g/day), relative consumption rate (RCR, mg/g /day), efficiency of conversion of ingested food (ECI, %), and approximate digestibility (AD), were determined gravimetrically following the methods of Waldbauer ( 1968 ) and Scriber and Slansky ( 1981 ). The amount (mg) of food ingested, frass produced, larval body weight, and weight gain were all calculated as dry weights. Formulae for calculation of the indices are shown in Chen et al. ( 2005 ).
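For reference, the standard gravimetric definitions (Waldbauer 1968) can be sketched as below. The exact formulae used in the study are those of Chen et al. (2005); this sketch assumes dry masses in mg, mean larval weight in g, and a feeding period in days.

    # Standard Waldbauer-type nutritional indices (our sketch, not the
    # authors' code); units follow the assumptions stated above.
    def nutritional_indices(ingested_mg, frass_mg, gain_mg, mean_wt_g, days):
        rgr = gain_mg / (mean_wt_g * days)                   # RGR, mg/g/day
        rcr = ingested_mg / (mean_wt_g * days)               # RCR, mg/g/day
        eci = 100.0 * gain_mg / ingested_mg                  # ECI, %
        ad = 100.0 * (ingested_mg - frass_mg) / ingested_mg  # AD, %
        ecd = 100.0 * gain_mg / (ingested_mg - frass_mg)     # ECD, %
        return rgr, rcr, eci, ad, ecd

    # Example: 600 mg ingested, 350 mg frass, 90 mg gained, 0.08 g mean
    # weight, 12 days -> RGR 93.8, RCR 625.0, ECI 15.0, AD 41.7, ECD 36.0
    print(nutritional_indices(600, 350, 90, 0.08, 12))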
Data Analysis
One-way ANOVAs (SAS 6.12, SAS Institute Inc. USA, 1996) were used to analyze the foliar chemical compositions of the three cotton cultivars. Population indices, consumption, and frass were analyzed using two-way ANOVAs with cotton cultivar and S. exigua generation as sources of variability, where cotton cultivar was the main factor and S. exigua generation was a sub-factor deployed in a split-plot design. The data for larval consumption and digestibility indices were analyzed using analysis of covariance (ANCOVA), with initial weight as a covariate for RCR, RGR, ECI, and AD. Food consumption was a covariate for ECI to correct for the effect of variation in growth and food assimilated on intake and growth (Raubenheimer and Simpson 1992), and food assimilated was also used as a covariate to analyze the efficiency of conversion of digested food (ECD) (Hägele and Martin 1999). The assumption of parallel slopes between covariate and dependent variables was satisfied for each analysis. Means were separated using the least significant difference (LSD) test.
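A minimal sketch of the ANCOVA described above, with initial larval weight as the covariate for RGR, could look as follows; the file and column names are assumptions, not the authors' SAS code.

    # Covariate-adjusted two-way analysis (ANCOVA) for a nutritional index.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("sexigua_indices.csv")    # hypothetical data file
    model = smf.ols("rgr ~ initial_weight + cultivar * generation",
                    data=df).fit()
    print(anova_lm(model, typ=2))              # effects adjusted for covariate

Results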
Foliar chemical composition assays of cotton plants
Significantly lower foliar nitrogen content (p < 0.001), protein content (p < 0.05), and free fatty acid content (p < 0.001) were observed in the high gossypol cultivar, M9101, compared with the two low gossypol cultivars, ZMS13 and HZ401. However, foliar water content (p > 0.05) and total amino acids (p > 0.05) were not significantly different between the three cotton cultivars ( Table 1 ).
Growth and development of three successive generations of S. exigua
Larval life-span and pupal weight. Cotton cultivar significantly affected larval life-span (p < 0.0001) and pupal weight (p < 0.0001). S. exigua generation also significantly influenced larval life-span (p < 0.01) and pupal weight (p < 0.01). The interaction between cotton cultivar and S. exigua generation significantly affected pupal weight (p < 0.01) ( Table 2 ).
There was a difference in larval life-span in the first (F = 4.87; df = 2, 9; p = 0.04), second (F = 27.56; df = 2, 9; p = 0.0001), and third (F = 19.35; df = 2, 9; p = 0.001) generations of S. exigua fed on M9101 compared with S. exigua fed on ZMS13 and HZ401. The larval life-span of the third generation was significantly longer than that of the previous two generations fed on the low gossypol cultivar, ZMS13 (F = 6.35; df = 2, 9; p = 0.02). There was a difference in pupal weight in the second (F = 15.11; df = 2, 9; p = 0.001) and third (F = 16.30; df = 2, 9; p = 0.001) generations of S. exigua fed on ZMS13 compared with S. exigua fed on M9101 and HZ401. The pupal weight of the first generation was significantly lower than that of the latter two generations fed on the low gossypol cultivar ZMS13 (F = 17.65; df = 2, 9; p = 0.001) (Table 3).
Survival rate and fecundity. Cotton cultivars significantly affected survival rate (p < 0.001) and fecundity (p < 0.0001). S. exigua generation also significantly influenced survival rate (p < 0.0001) and fecundity (p < 0.0001) ( Table 2 ).
There was a difference in the survival rate of the third generation (F = 9.33; df = 2, 9; p = 0.01) of S. exigua fed on ZMS13 compared with those fed on M9101 and HZ401. The survival rate was significantly different among the three successive generations of S. exigua fed on M9101 (F = 22.2; df = 2, 9; p = 0.0003), HZ401 (F = 36.3; df = 2, 9; p = 0.0001), and ZMS13 (F = 48.9; df = 2, 9; p = 0.0001). There was a difference in fecundity in the second (F = 7.43; df = 2, 9; p = 0.01) and third (F = 5.59; df = 2, 9; p = 0.03) generations of S. exigua fed on M9101 compared with those fed on ZMS13 and HZ401. Fecundity was significantly lower in the first generation than in the latter two generations fed on M9101 (F = 13.4; df = 2, 9; p = 0.002), HZ401 (F = 9.36; df = 2, 9; p = 0.01), and ZMS13 (F = 13.76; df = 2, 9; p = 0.002) (Table 3).
Consumption and food utilization of three successive generations of S. exigua
Consumption and frass. Cotton cultivar significantly affected consumption per larva (p < 0.001). Successive generation significantly influenced the consumption and frass per larva (p < 0.0001). The interaction between cotton cultivar and generation significantly affected consumption per larva (p < 0.001) ( Table 2 ).
There was a difference in consumption per larva in the third generation ( F = 57.43; df = 2, 9; p = 0.0001) of S. exigua fed on ZMS13 compared with those fed on M9101 and HZ401. The consumption per larva was significantly decreased in the first generation compared to that of the latter two generations fed on M9101 ( F = 8.35; df = 2, 9; p = 0.0089), HZ401 ( F = 11.73; df = 2, 9; p = 0.003), and ZMS13 ( F = 80.88; df = 2, 9; p = 0.0001) ( Figure 1A ). There was a difference in frass per larva in the first generation compared to that of the latter two generations fed on HZ401 ( F = 8.77; df = 2, 9; p = 0.0077) and ZMS13 ( F = 38.02; df = 2, 9; p = 0.0001) ( Figure 1B ).
Indices for larval utilization
RGR and RCR. Cotton cultivar significantly affected the relative growth rate (RGR, mg/g/day) and relative consumption rate (RCR, mg/g/day) of S. exigua (p < 0.0001). S. exigua generation significantly influenced the RCR (p < 0.0001). The interaction between cotton cultivar and generation significantly affected the RGR (p < 0.05) and RCR of S. exigua (p < 0.0001) ( Table 2 ).
There was a difference in RGR in the first ( F = 15.41; df = 2, 9; p = 0.001), second ( F = 15.19; df = 2, 9; p = 0.001), and third ( F = 53.07; df = 2, 9; p = 0.0001) generations of S. exigua fed on M9101 compared with those fed on ZMS13 and HZ401. However, significantly higher RGR was found in the third generation than that of the previous two generations when fed on ZMS13 ( F = 8.34; df = 2, 9; p = 0.009). There was a difference in RCR in the third generation ( F = 162.27; df = 2, 9; p = 0.0001) of S. exigua fed on M9101 compared with those fed on ZMS13 and HZ401. The RCR of S. exigua was significantly decreased in the first generation compared to that of the latter two generations fed on M9101 ( F = 7.15; df = 2, 9; p = 0.01), HZ401 ( F = 9.32; df = 2, 9; p = 0.01), and ZMS13 ( F = 108.56; df = 2, 9; p = 0.0001) ( Table 4 ).
ECI and AD. Cotton cultivar significantly affected the efficiency of conversion of ingested food (ECI, %) (p < 0.01). S. exigua generation significantly influenced the ECI (p < 0.0001). The interaction between cotton cultivar and S. exigua generation significantly affected the ECI (p < 0.01) ( Table 2 ).
There was a difference in ECI in the first (F = 7.83; df = 2, 9; p = 0.01) and third (F = 11.29; df = 2, 9; p = 0.004) generations of S. exigua fed on HZ401 compared with those fed on ZMS13 and M9101. The ECI of S. exigua was significantly increased in the first generation compared to that of the latter two generations fed on M9101 (F = 11.98; df = 2, 9; p = 0.003), HZ401 (F = 8.48; df = 2, 9; p = 0.01), and ZMS13 (F = 89.32; df = 2, 9; p = 0.0001). There was a difference in AD in the second generation compared with the first and third generations (F = 10.08; df = 2, 9; p = 0.005) fed on ZMS13 (Table 4).

Discussion
Secondary metabolic compounds of plants are an important biochemical basis for plant resistance to insects, and exploiting this resistance is a major method for controlling insect pests in modern integrated pest management (Cai et al. 2004; Wu et al. 2007b). Gossypol produced by cotton is one of the most important chemicals toxic to herbivorous insects and a major source for modern biological insecticides; it is considered one of the key insect resistance mechanisms. Du et al. (2004) reported that high gossypol in host cotton had a negative effect on A. gossypii and a positive effect on the growth and development of P. japonica at the third trophic level. Stipanovic et al. (2008) found that higher gossypol concentrations were required to reduce survival and pupal weights and to increase days-to-pupation for larvae of Heliothis virescens compared with the concentration needed to affect larvae of H. zea (Boddie). Most of these initial published studies focused on responses of herbivorous insects to gossypol in short-term experiments. Combining short- and long-term findings can provide a clearer picture of insect population dynamics in response to gossypol.
In the present experiments, larval developmental time was increased by 4.54% and 8.18% in the first generation, 3.09% and 9.69% in the second generation, and 4.27% and 14.73% in the third generation after feeding on the high gossypol cultivar M9101 compared with those fed on ZMS13 and HZ401. These results indicate a direct negative effect of plant secondary metabolites (gossypol in this study) on S. exigua. Pupal weight and survival rate were not significantly different in the first generation of S. exigua fed on the three cotton cultivars. However, pupal weight was significantly increased by 42.4% in the second generation and by 37.9% in the third generation after feeding on the low gossypol cultivar ZMS13 compared with those fed on the high gossypol cultivar M9101. Also, survival rate significantly increased, by 6.06% in the second generation and 8.96% in the third generation, after feeding on ZMS13 compared with those fed on M9101. These results show that S. exigua can develop significant resistance or tolerance to plant secondary metabolites (gossypol in this study) under continuous selection pressure over three successive generations.
Plant nitrogen is known to be an important element for insect success (Mattson 1980), and reduction in food protein and nitrogen content can often lead to poorer insect performance, or to behavioral or physiological adaptation (Scriber and Slansky 1981). Insects feeding on low-nitrogen foliage exhibit reduced larval growth (Roth et al. 1994, 1995), increased foliage consumption (Williams et al. 1994), and reduced fecundity (Traw et al. 1996). Most leaf-chewing insects exhibit compensatory increases in food consumption (Scriber 1982). These results are generally explained as a response of herbivorous insects to reduced forage quality, especially the reduction in forage nitrogen (Wu et al. 2006). Most initial published studies focused on compensatory feeding responses of herbivorous insects in short-term experiments (Wu et al. 2008). However, combining short- and long-term experiments has provided a clearer picture of these dynamics (Brooks et al. 1998).
In this study, significantly lower relative growth rates (RGR) were found in three successive generations of S. exigua fed on the high gossypol cultivar M9101 compared with those fed on the low gossypol cultivar ZMS13. However, the efficiency of conversion of ingested food (ECI) was significantly decreased only in the first generation fed on M9101 compared with those fed on ZMS13. It is likely that the reduction in RGR is due to cumulative effects of secondary metabolic compounds (presumably gossypol in this study) over three successive generations of S. exigua. The results showed that the effect of food quality on the diet-utilization efficiency of herbivorous insects differs with insect species and developmental stage; other published studies support this view (Wu et al. 2006). Brooks and Whittaker (1998) reported that the ECI of the leaf beetle, Gastrophysa viridula, was significantly reduced by elevated CO2, whereas RGR was significantly reduced in the second generation but increased by elevated CO2 in the third generation; the RGR of third instars was not affected by elevated CO2 in any generation. Wu et al. (2008) observed that RGR was significantly reduced in three successive generations of S. exigua fed on transgenic Bt cotton compared with those fed on non-transgenic cotton. However, the relative consumption rate (RCR) significantly decreased only in the first generation of S. exigua fed on transgenic Bt cotton compared with those fed on non-transgenic cotton.
The results of this experiment provide a much clearer understanding of the direct effects of the secondary metabolic compound gossypol on S. exigua. The experiment attempted to determine responses of S. exigua through different developmental stages and generations. Measuring the development and food utilization of S. exigua at the individual and population levels over more than one generation provides more meaningful predictions of long-term population dynamics. Development and implementation of multigenerational pest management tactics has become critical to ensuring the long-term efficacy of resistant cotton cultivars and to monitoring the field population dynamics of S. exigua.
The beet armyworm, Spodoptera exigua (Hübner) (Lepidoptera: Noctuidae), is an important pest of numerous crops, and it causes economic damage in China. Use of secondary metabolic compounds in plants is an important method used to control this insect as part of integrated pest management. In this study, the growth, development, and food utilization of three successive generations of S. exigua fed on three cotton gossypol cultivars were examined. Significantly longer larval life-spans were observed in S. exigua fed on the high gossypol cultivar M9101 compared with those fed on two low gossypol cultivars, ZMS13 and HZ401. The pupal weight of the first generation was significantly lower than that of the latter two generations fed on ZMS13. Significantly lower fecundity was observed in the second and third generations of S. exigua fed on M9101 compared with S. exigua fed on ZMS13 and HZ401. The efficiency of conversion of ingested food was significantly higher in the first and third generations fed on HZ401 compared with those fed on ZMS13 and M9101. A significantly lower relative growth rate was observed in the three successive generations fed on M9101 compared with those fed on ZMS13 and HZ401. Cotton cultivar significantly affected the growth, development, and food utilization indices of S. exigua, except for frass and approximate digestibility. Generation significantly affected relative consumption rate and efficiency of conversion of ingested food, but not relative growth rate or approximate digestibility, suggesting that diet-utilization efficiency differed with food quality and generation. Measuring the development and food utilization of S. exigua at the individual and population levels over more than one generation provided more meaningful predictions of long-term population dynamics.
Acknowledgments
This project was supported by the National Basic Research Program of China (973 Program) (No. 2006CB102004), the National Nature Science Fund of China (No. 30800724, 31071691) and the International Foundation for Sciences (C/4559-1).
Abbreviations
AD: approximate digestibility;
ECI: efficiency of conversion of ingested food (%);
RCR: relative consumption rate (mg/g/day);
RGR: relative growth rate (mg/g/day)
Introduction
In augmentative biological control programmes, detailed information concerning thermal requirements and thresholds is useful for selecting natural enemies that are best adapted to conditions favoring target pests ( Jervis and Copland 1996 ; Obrycki and Kring 1998 ). During the last decades, numerous linear and non-linear models have been introduced to estimate the thermal thresholds of different species (e.g. Campbell et al. 1974 ; Stinner et al. 1974 ; Logan et al. 1976 ; Sharpe and DeMichele 1977 ; Lactin et al. 1995 ; Brière et al. 1999 ). The linear approximation enables researchers to calculate two constants: the lower developmental threshold (LDT; the temperature below which development is arrested) and the thermal constant or the sum of effective temperatures (SET; the amount of heat needed for completing a developmental stage), within a limited temperature range (usually 15–30° C). However, the non-linear models describe the developmental rate over a wider range of temperatures and provide estimates of maximum and optimum temperatures for development. The weakness of the nonlinear approach is that estimation of SET cannot be achieved and some models do not yield estimates of LDT ( Jarošik et al. 2002 ; Kontodimas et al. 2004 ).
In order to model insect development as a function of temperature, Logan et al. (1976) developed two empirical models that have been widely used in biological control studies, the so-called Logan-6 and Logan-10 models (Roy et al. 2002). Lactin et al. (1995) proposed two modifications of the Logan-6 model, of which the Lactin-2 model allows estimation of the LDT. Brière et al. (1999) proposed two more models with a nonlinear part at low and high temperatures and a linear part at moderate temperatures. In contrast with most other models, the Brière-1 and Lactin-2 models enable estimation of all three critical temperatures (minimum, optimum, and maximum temperature thresholds). In a number of studies, the Brière-1 and Lactin-2 models proved superior at estimating critical temperatures for insects in temperate areas, including coccinellids (Roy et al. 2002; Kontodimas et al. 2004). Further, model application is facilitated when only a few fitted coefficients must be estimated. The Brière-1 and Lactin-2 models, with three and four coefficients, respectively, and the linear model, with two parameters, are among the models with the fewest coefficients.
Previous studies have demonstrated the thermal limits of coccinellid predators (e.g. Obrycki and Tauber 1978 , 1982 ; Roy et al. 2002 ; Mehrnejad and Jalali 2004 ; Kontodimas et al. 2004 ). The two spotted ladybird, Adalia bipunctata L. (Coleoptera: Coccinellidae) is one of the most common aphid predators occurring in arboreal habitats of Europe, Central Asia and North America ( Majerus 1994 ; Hodek and Honěk 1996 ). It has potential for the augmentative biological control of aphid pests in Europe ( De Clercq et al. 2005 ). There are a few earlier studies that addressed the thermal requirements of A. bipunctata ( Obrycki and Tauber 1981 ; Honěk and Kocourek 1988 ; Sakuratani et al. 2000 ). However, the values obtained in all of these studies are only based on the linear model.
The aim of this laboratory study was to estimate thermal requirements and limits for the development of A. bipunctata fed either a mixture of Ephestia kuehniella Zeller (Lepidoptera: Pyralidae) eggs and fresh bee pollen, a factitious food recommended for rearing A. bipunctata (De Clercq et al. 2005), or the natural prey Myzus persicae (Sulzer) (Hemiptera: Aphididae), using the linear, Brière-1, and Lactin-2 models. The findings of the current study may improve our understanding of the predator's ecology in the field and its phenology in different areas of its distribution. Further, the findings can be indicative of the value of the factitious food for immature development of A. bipunctata and as such may contribute to enhancing rearing procedures for this biological control agent.

Materials and methods
Predator Culture
Insects were taken from a laboratory colony that was started in August 2002 with eggs purchased from Biobest NV (www.biobest.be); thereafter, the colony was repeatedly infused with new individuals from the same commercial source. At this commercial facility, the ladybirds had originally been fed live pea aphids, Acyrthosiphon pisum (Harris), but during this study the stock colony was reared on an ad libitum supply of a 50–50 (w/w) mixture of frozen bee pollen and eggs of E. kuehniella (De Clercq et al. 2005). The frozen eggs of E. kuehniella and the pollen, consisting of pollen pellets collected by honeybees, were supplied by Koppert BV (www.koppert.com) and stored for no longer than one month at -18° C. The stock colony of the predator was maintained in a growth chamber at 23 ± 1° C, 65 ± 5% RH and a 16:8 L:D photoperiod.
Experiments
The study was carried out between September 2004 and March 2005. Experiments were conducted at six constant temperatures (15, 19, 23, 27, 30 and 35 ± 1° C), 65 ± 5% RH and a 16 h photoperiod and using two diets: a mixture of frozen bee pollen and eggs of E. kuehniella as a factitious food and live M. persicae aphids as a natural food. The aphids were reared on broad bean, Vicia faba L. var. thalia at 26 ± 2° C, 60 ± 20% RH and a 16:8 L:D photoperiod. For the experiments, a mixture of different nymphal stages of the aphid was provided.
Development
For the experiment with factitious food, clutches of A. bipunctata eggs (< 24 h old) were collected from the stock colony and distributed among the six temperature regimes. At each temperature, at least 50 first instar larvae (< 12 h old) were isolated in individual 9-cm petri dishes. Each dish contained two plastic cups (3.0 × 0.5 cm), one with an ad libitum supply of factitious food and another with a piece of soaked paper as a water source. Food and water were replenished every day (27, 30 and 35° C) or every two days (15, 19, and 23° C).
For the experiments with M. persicae , the predator was reared on the aphid prey for one generation at 23° C in order to adapt to the new food source. Eggs from the resulting females were placed in incubators set at 15, 19, 23, 27, 30 or 35° C. At each temperature, at least 50 first instars (<12 h old) were transferred to individual 14-cm petri dishes. Each petri dish was lined with filter paper and contained a 2-leaf seedling of broad bean infested with M. persicae . The seedling stalks were inserted in an Eppendorf tube containing water. Aphids were replenished after each daily observation. In each treatment, development and survival of larval and pupal stages were monitored once (at 15, 19 and 23° C) or twice a day (at 27, 30 and 35° C). For calculation purposes, events were assumed to have occurred at the midpoint between two consecutive observations.
Statistical analysis
Data were checked for normality using the Kolmogorov–Smirnov test (K–S test) and subsequently analysed by Student's t-test. Levene's test was also performed to assess equality of variances. Data were also submitted to two-way ANOVA at α = 0.05 to examine the significance of the main effects (food, temperature) and their interaction. Statistical analysis was performed using the SPSS version 15.0 (SPSS 2006) and JMP version 4.02 (SAS Institute 1989) statistical packages.
Mathematical Models
Three models were applied to estimate the temperature-dependent development of A. bipunctata on M. persicae and the factitious food.

(1) Linear model: R(T) = a + bT, where R = 1/D is the rate of development, D is the duration of development (days) at temperature T, a is the intercept, and b is the slope of the linear function (e.g. Campbell et al. 1974; Obrycki and Tauber 1982; De Clercq and Degheele 1992; Jarošik et al. 2002; Kontodimas et al. 2004; Mahdian et al. 2008).
(2) Brière-1 model: R(T) = αT(T − T0)√(TL − T), where T0 (t_min) is the lower temperature threshold, TL (t_max) is the lethal temperature (upper threshold), and α is an empirical constant (Brière et al. 1999; Roy et al. 2002; Kontodimas et al. 2004; Arbab et al. 2006, 2008).
(3) Lactin-2 model: R(T) = exp(ρT) − exp[ρTm − (Tm − T)/Δ] + λ, where ρ, Tm, Δ, and λ are fitted coefficients (Lactin et al. 1995; Lactin and Johnson 1995; Brière and Pracros 1998; Royer et al. 1999; Muniz and Nombela 2001; Tobin et al. 2001; Roy et al. 2002; Kontodimas et al. 2004; Arbab et al. 2006, 2008).
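The three models can be written down and fitted with standard nonlinear least squares. The sketch below is our implementation of the published equations, with invented illustrative rates rather than the experimental values; fitting the Lactin-2 model is sensitive to starting values, so only the Brière-1 fit is shown.

    # Developmental-rate models (our implementation of the equations above).
    import numpy as np
    from scipy.optimize import curve_fit

    def linear(T, a, b):                        # Eq. 1
        return a + b * T

    def briere1(T, alpha, T0, TL):              # Eq. 2
        return alpha * T * (T - T0) * np.sqrt(np.clip(TL - T, 0.0, None))

    def lactin2(T, rho, Tm, delta, lam):        # Eq. 3
        return np.exp(rho * T) - np.exp(rho * Tm - (Tm - T) / delta) + lam

    T = np.array([15.0, 19.0, 23.0, 27.0, 30.0])
    R = np.array([0.022, 0.041, 0.061, 0.082, 0.091])  # 1/days, illustrative

    popt, _ = curve_fit(briere1, T, R, p0=(1e-4, 9.0, 36.0), maxfev=20000)
    rss = float(np.sum((R - briere1(T, *popt)) ** 2))
    r2 = 1.0 - rss / float(np.sum((R - R.mean()) ** 2))
    print("Briere-1: alpha=%.2e, T0=%.2f, TL=%.2f" % tuple(popt))
    print("RSS=%.2e, R2=%.3f" % (rss, r2))

The optimum temperature t_opt can then be located numerically as the maximum of the fitted curve.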
The following indices were calculated, where applicable, for each of the three models:
The lower developmental threshold (t_min) is defined as the temperature below which there is no measurable development. This value can be estimated by both of the nonlinear models, and by the linear model as the intercept value on the temperature axis, t_min = −a/b. The standard error (SE) of t_min, when calculated from the linear model, is

SE(t_min) = (R̄/b) √[s²/(N R̄²) + (SE_b/b)²],

where s² is the residual mean square of R, R̄ is the sample mean of R, SE_b is the standard error of b (the slope of the linear function), and N is the sample size (Campbell et al. 1974; Kontodimas et al. 2004).
The upper developmental threshold (t_max) is defined as the temperature above which the rate of development is zero or life cannot be maintained for any significant period (Kontodimas et al. 2004). This value was estimated only by the nonlinear models, and the SE of t_max was calculated from the nonlinear regression.

The optimum temperature for development (t_opt) is defined as the temperature at which the rate of development is maximal. It can be estimated directly from the equations of the nonlinear models, and the SE of t_opt was calculated from the nonlinear regression.
The thermal constant (K) is defined as the amount of thermal energy (degree-days, DD) needed to complete development. It can be estimated only from the linear equation, as K = 1/b, and the SE of K is SE_K = SE_b/b² (Campbell et al. 1974; Kontodimas et al. 2004).
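A short sketch of the linear-model estimates (Campbell et al. 1974) follows; the rates are the same illustrative values as above, not the experimental data.

    # t_min, K, and their standard errors from the linear regression.
    import numpy as np
    from scipy import stats

    T = np.array([15.0, 19.0, 23.0, 27.0, 30.0])
    R = np.array([0.022, 0.041, 0.061, 0.082, 0.091])  # 1/days, illustrative

    fit = stats.linregress(T, R)
    a, b = fit.intercept, fit.slope
    t_min = -a / b                          # lower developmental threshold
    K = 1.0 / b                             # thermal constant, degree-days
    se_K = fit.stderr / b ** 2              # SE of K

    resid = R - (a + b * T)
    s2 = np.sum(resid ** 2) / (len(T) - 2)  # residual mean square
    r_bar = R.mean()
    se_tmin = (r_bar / b) * np.sqrt(s2 / (len(T) * r_bar ** 2)
                                    + (fit.stderr / b) ** 2)

    print(f"t_min = {t_min:.2f} +/- {se_tmin:.2f} C")
    print(f"K = {K:.1f} +/- {se_K:.1f} DD")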
Fit to data
The following statistical criteria were used to evaluate goodness-of-fit: the coefficient of determination (for the linear model; R²) or the coefficient of nonlinear regression (for the nonlinear models; R²) and the residual sum of squares (RSS). Higher values of R² and lower values of RSS indicate a better fit.
For each linear regression, the data points at 35° C, which deviated from the straight line through the other points, were excluded for correct calculation of the regression (Campbell et al. 1974; De Clercq and Degheele 1992).

Results
Development time
On both diets, developmental time decreased with increasing temperature from 15 to 30° C (Table 1). The developmental time of the egg stage of the ladybird was more affected by parental food than were the other developmental stages: egg developmental time differed significantly between diets across the range of tested temperatures (t-tests, p < 0.001). However, the duration of the larval stage (L1–L4) and of egg-adult development at 27° C, and of the pupal stage at 19 and 27° C, did not differ significantly when the ladybird was reared on either the factitious food or the aphid diet (t-tests, p > 0.05). A two-way ANOVA on the duration of development of the different immature stages and of total development (egg-adult), with food and temperature as factors, revealed a significant interaction of the two factors in all cases (p < 0.001).
Immature survivorship
On both diets, the ladybird successfully developed to the adult stage within the temperature range of 15–30° C. No eggs hatched at 35° C. The highest mortality always occurred in the egg stage and it ranged from 35.9 to 69.6% and from 16.5 to 51.8% on factitious food and aphids, respectively. Overall immature mortality tended to be lowest at low temperatures on the factitious diet, and lowest at intermediate temperatures on the aphid diet ( Table 2 ).
Thermal constants
Thermal requirements for development of the egg and pupal stages were higher when the predator was fed on factitious food than when fed on aphids but this was compensated by a lower thermal requirement for larval development. Thus, thermal constants for complete development of A. bipunctata did not differ between diets ( Tables 3 and 4 ).
Model evaluation
The adjusted R² and RSS values for the different developmental stages of A. bipunctata and for total development are presented in Tables 3 and 4. Although the three models yielded similar R² values, independent of diet, the Lactin model yielded a somewhat lower RSS than the linear and Brière equations. In many cases, the three models predicted the developmental rate of the predator more accurately when A. bipunctata was provided with M. persicae instead of the factitious food. For example, for total development time, the Lactin model gave a 4% higher R² and an 80% lower RSS value on the aphid diet than on the factitious food.
Although the models estimated similar thermal limit values for the total development time of A. bipunctata on M. persicae (Figure 1), the critical temperatures for pre-imaginal stages of the predator estimated by the Lactin equation were higher than those estimated by the two other models, on both diets (Tables 3 and 4). For instance, the lower threshold for the egg stage on factitious food was estimated at 8.87 and 9.18° C by the linear and Brière models, respectively, but at 14.79° C by the Lactin model. The optimum temperature varied between 29.40 and 33.02° C, depending on developmental stage, diet, and the type of nonlinear model. Although the experimental data showed that 35° C was lethal for all developmental stages (Table 2), the estimated upper developmental threshold was higher than this value, and the employed equations thus did not satisfy the criterion for high model accuracy (Tables 3 and 4).

Discussion
On both diets, the developmental rate of A. bipunctata showed a linear increase with temperature in the range of 15–30°C. Although the average optimum temperature for the different developmental stages was estimated to be in the range of 29.40 to 33.02° C, the mortality data suggested an optimum temperature in the range of 23–27° C. For example, at 30° C the total mortality was 30–45% higher than at 23° C on both diets. A similar result was reported for New York and Ontario populations of A. bipunctata by Obrycki and Tauber ( 1981 ). The latter authors found that the rate of development on A. pisum was highest at 29.4° C, but noted that the mean mortality was approximately 25% lower at 26.7° C than at 29.4° C.
All three models fitted the data of the current study well, as indicated by the high R 2 and low RSS values, but there were marked differences in the estimated values of t min and t max . A common method for evaluating the accuracy of estimating critical temperatures is based on their comparison with experimental data ( Kontodimas et al. 2004 ). In the current study, the ladybirds successfully developed at the lowest temperature examined (15° C) but failed to develop at 35° C, independent of diet. Therefore, we could not use this criterion but comparison with other studies indicates that this parameter was reliably predicted, especially by the Brière and linear models. In the current study, t min values of 10.06 and 9.39° C and thermal constants ( K ) of 267.90 and 266.27 DD were obtained from the linear model for the total development time of A. bipunctata on the aphid diet and the factitious food, respectively. The non-linear equations estimated t min to be 10.47 and 11.31° C (Brière model) and 8.85 and 7.92° C (Lactin model), on the respective diets. From other published studies, we calculated a t min of 8.5° C, and a K of 244.8 DD for egg-adult development of a Finnish population of the ladybird fed on M. persicae ( Ellingsen 1969 ) and a t min of 9.20° C and a K of 251.81 DD for a German population fed on Sitobion avenae (F.) ( Schüder et al. 2004 ). Also, a similar t min of 9.0° C and K of 262 DD for the total development of a Nearctic population of A. bipunctata on A. pisum was reported by Obrycki and Tauber ( 1981 ). In central Bohemia (Czech Republic), t min and K values of 10.5 and 8.9° C, and 41.7 and 78.7 DD, respectively, were calculated for development of the egg and pupal stages of the ladybird ( Honěk and Kocourek 1988 ). On the other hand, a lower t min of 6.3° C and higher K of 322.6 DD for the ladybird was obtained by Sakuratani et al. ( 2000 ) in Japan. The latter authors noted that, in this region, A. bipunctata is univoltine with adult aestivation and a hibernation diapause from mid June to March.
Although some earlier studies showed that the Lactin and Brière model may be superior for estimating temperature thresholds of ladybird beetles compared to other non-linear models ( Roy et al. 2002 ; Kontodimas et al. 2004 ), the current study indicates that these models overestimated the upper developmental thresholds of different developmental stages of A. bipunctata on the tested diets. In conclusion, the linear model fitted well to the experimental data and should be sufficient for describing temperature dependent development of the coccinellid. The linear model has the advantage of being easy to calculate and is the only model enabling the estimation of the thermal constant ( Kontodimas et al. 2004 ).
To predict the maximum number of generations per year of A. bipunctata in Western Europe, we calculated the thermal accumulation for the period of ladybird activity during the year, using the field observations of Hemptinne and Naisse (1987) and the average minimum-maximum temperature data for Brussels from 1971–2000 (Royal Meteorological Institute of Belgium, www.meteo.be). Based on our estimates of t_min, we predict that A. bipunctata starts its spring activity in April, when average temperatures are near 10° C, and migrates to its hibernation sites in October, when average temperatures again approach t_min. Based on the thermal requirements estimated in the current study, A. bipunctata would produce 3 generations per year. Likewise, Hemptinne and Naisse (1987) stated that in Belgium A. bipunctata has 3 to 4 generations through late spring and summer, and in autumn adults undergo reproductive diapause until the next spring. A similar phenology of A. bipunctata was reported by Bazzocchi et al. (2004) in northern Italy.
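The voltinism arithmetic above can be reproduced with a simple degree-day accumulation; the sketch below uses invented April-October temperatures rather than the Brussels normals, and the simple daily-average method rather than a sine-wave method.

    # Degree-day accumulation above t_min, divided by K, gives the expected
    # number of generations over the activity season.
    import random

    def degree_days(daily_min, daily_max, t_min):
        return sum(max(0.0, (lo + hi) / 2.0 - t_min)
                   for lo, hi in zip(daily_min, daily_max))

    random.seed(1)
    d_min = [random.uniform(6.0, 14.0) for _ in range(214)]   # ~Apr-Oct
    d_max = [lo + random.uniform(6.0, 12.0) for lo in d_min]

    dd = degree_days(d_min, d_max, t_min=10.06)  # t_min, aphid diet
    print(dd / 267.9)                            # generations at K = 267.9 DD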
Certain biotic factors such as food quality and quantity may change the actual number of annual generations of polyphagous predators like A. bipunctata in nature. Obrycki and Tauber ( 1981 ) proposed that temperature determines the maximum developmental rate of A. bipunctata , whereas food (availability and type) determines the actual number of generations produced. This is also true for other ladybird species inhabiting temperate regions ( Michaud and Qureshi 2006 ). Food can also directly affect the developmental rate ( Schüder et al. 2004 ), the length of the preoviposition period and fecundity ( El-Hariri 1966 ), as well as larval development and survival ( Blackman 1967 ) of this ladybird. In our study, food type did not substantially affect the thermal requirements of the predator, and larval development was equally successful on the factitious food and the natural prey, although in the range of 23–30° C egg hatch was lower on the factitious food.
Temperature is one of the main ecological factors affecting the generation time of a predator and, as such, determines its ability to track populations of its prey over several generations by its numerical response ( Roy et al. 2002 ). The developmental time of aphidophagous ladybirds like A. bipunctata often spans several aphid generations. Thus, the ratio of the generation time of these predators to that of their prey is large, and a ladybird's rate of increase depends not only on the present state of a patch of prey but also on the quality of the patch in the future ( Kindlmann and Dixon 1999 ; Dixon 2000 ; Kindlmann et al. 2007 ). An important aspect of insect predator-prey dynamics is the difference in the lower temperature thresholds of predator and prey ( Dixon 2000 ). When the predator's lower temperature threshold is substantially higher than the aphid's, the natural enemy is unlikely to have a significant impact on the aphid's abundance, because the predator always arrives too late to prevent aphid population build-up. A review of the literature reveals a much lower t min value for M. persicae than the estimates of t min for A. bipunctata . Cividanes and Souza ( 2003 ) noted that the t min and K values of M. persicae on Brassica oleracea L. were 2.2° C and 165.6 DD, respectively. On Brassica campestris ssp. chinensis , Liu and Meng ( 1999 ) calculated t min values of 3.9 and 4.3° C and K values of 119.8 and 133.0 DD for apterous and alate forms of M. persicae , respectively. The lower developmental thresholds and generation time ratios thus suggest a lack of synchrony between A. bipunctata and one of its major prey, M. persicae , and, in a classical biological control context, would be indicative of a low biocontrol potential of the ladybird. However, pessimistic views on the potential of aphidophagous ladybirds in a classical biological control context may not necessarily be valid for their role in other types of biocontrol ( Hodek and Michaud 2008 ). In augmentative biological control programs using A. bipunctata , larval stages (second to third instars) are usually released in relatively high numbers (e.g., Wyss et al. 1999 ; J. Vermeulen, BioBest NV, personal communication). In an inundative approach, the objective is usually short-term pest suppression by the released larvae and the resulting adults rather than long-term control of the aphid populations. In addition, A. bipunctata larvae can be released in aphid hot spots rather than over the entire crop surface.
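To make the threshold difference concrete: under the degree-day model, development time is D(T) = K / (T - t min). A rough comparison using the parameter values cited above, with development time as a crude proxy for generation time, is sketched below.

```python
# Development time (days) at temperature T under the degree-day model.
def dev_time(T, K, t_min):
    return K / (T - t_min)

T = 20.0  # deg C, an arbitrary spring temperature
# Predator: A. bipunctata on the aphid diet (this study);
# prey: M. persicae on B. oleracea (Cividanes and Souza 2003).
predator = dev_time(T, 267.9, 10.06)  # ~27 days
prey = dev_time(T, 165.6, 2.2)        # ~9 days
print(f"development-time ratio at {T} deg C: {predator / prey:.1f}")
```

At 20° C the predator needs roughly three times as long as its prey to complete development, illustrating why the ladybird tends to lag behind aphid population build-up.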
Our study shows that, irrespective of diet, the developmental rate of A. bipunctata increased linearly with temperature from 15 to 30° C, indicating its capacity for predation activity over a wide range of temperatures. Although non-linear models are increasingly used to predict the thermal limits of insect natural enemies, our laboratory study shows that a simple linear model may also yield reliable estimates of thermal requirements. Furthermore, the similar thermal constants of A. bipunctata on the natural prey and on a mixture of E. kuehniella eggs and pollen corroborate earlier studies indicating the value of the latter factitious food for mass production of A. bipunctata ( De Clercq et al. 2005 ). Further studies assessing the interaction between climatic factors (e.g., temperature, photoperiod) and food may be helpful in predicting the development of A. bipunctata populations in the field and may provide a better understanding of the potential of this predator for biological control of aphid pests. | Associate Editor: J. P. Michaud was editor of this paper.
The ability of a natural enemy to tolerate a wide temperature range is a critical factor in the evaluation of its suitability as a biological control agent. In the current study, temperature-dependent development of the two-spotted ladybeetle Adalia bipunctata (L.) (Coleoptera: Coccinellidae) was evaluated on Myzus persicae (Sulzer) (Hemiptera: Aphididae) and on a factitious food consisting of moist bee pollen and Ephestia kuehniella Zeller (Lepidoptera: Pyralidae) eggs at six constant temperatures ranging from 15 to 35° C. On both diets, the developmental rate of A. bipunctata showed a positive linear relationship with temperature in the range of 15–30° C, but the ladybird failed to develop to the adult stage at 35° C. Total immature mortality in the temperature range of 15–30° C ranged from 24.30 to 69.40% and from 40.47 to 76.15% on the aphid prey and the factitious food, respectively. One linear and two non-linear models were fitted to the data. The linear model successfully predicted the lower developmental thresholds and thermal constants of the predator. The non-linear models of Lactin and Brière overestimated the upper developmental thresholds of A. bipunctata on both diets. Furthermore, in some cases, there were marked differences among models in estimates of the lower developmental threshold ( t min ). Depending on the model, t min values for total development ranged from 10.06 to 10.47° C and from 9.39 to 11.31° C on M. persicae and the factitious food, respectively. Similar thermal constants of 267.9 DD (on the aphid diet) and 266.3 DD (on the factitious food) were calculated for the total development of A. bipunctata , indicating the nutritional value of the factitious food.
The authors wish to thank the Ministry of Science, Research and Technology of Iran for financial support to M.A. Jalali (Ph.D. grant no. 810086). The helpful comments from Associate Editor J.P. Michaud and two anonymous reviewers are gratefully acknowledged.
Abbreviations
DD: degree day;
K: thermal constant or the sum of effective temperatures (SET);
K–S test: Kolmogorov–Smirnov test;
L1–L4: 1st to 4th instar larvae;
LDT: lower developmental threshold ( t min or t 0 );
RSS: residual sum of squares;
SET: see K;
t 0 : see LDT;
t min : see LDT;
T L : upper development threshold;
t max : see T L ;
T opt : optimum temperature | CC BY | no | 2022-01-12 16:13:44 | J Insect Sci. 2010 Aug 3; 10:124 | oa_package/14/37/PMC3016865.tar.gz