Preliminary investigation of a diverse megafossil floral assemblage from the middle Miocene of southern Mississippi, USA
Article number: 22.2.40
Submission: 12 July 2018. Acceptance: 29 April 2019
Our understanding of Miocene floras in eastern North America is hampered by the rarity of megafossil sites. An early report from the middle Miocene Hattiesburg Formation in Mississippi included palms and Ulmus. A later report listed Taxodium, Salix, either Morus or Celtis, and monocot fragments. The floral assemblage described here was recently recovered from along the Bouie River in southern Mississippi. Ferns are represented by complete Salvinia specimens including attached sporocarps, Woodwardia, and Osmunda. Conifers are represented by branchlets of Taxodium. Angiosperms include leaves attributable to the Lauraceae. Platanus is known from leaves, stipules, and fruits. Leaflets of Sambucus are common. Cercis is recognized from leaves with palmate venation and pulvini. Leaves of Quercus sections Lobatae and Quercus have been recovered. The Juglandaceae include fruits of Juglans and two species of Carya. Morus, Populus, and Salix leaves have been recovered. Of particular biogeographical interest is a seed of Sargentodoxa, which is the first record from the southeastern coastal plain of this current Chinese endemic. Monocots include Cyperus and two types of palm, including one with armed petioles. The first vegetative fossils of Lemna from North America have been identified. Because most of the fossils are related to plants still found in the region today, the climate was probably similar to that of the modern central Gulf Coastal Plain. This flora is now one of the most extensively known in the Neogene of southeastern North America and helps to fill a major gap in our understanding of Miocene plant evolution.
Daniel M. McNair. School of Biological, Environmental, and Earth Sciences, University of Southern Mississippi, 118 College Drive #5018, Hattiesburg, Mississippi 39406, USA.
Debra Z. Stults. Biology Department, University of South Alabama, 5871 University Drive North, Room 124, Mobile, Alabama 36688, USA.
Brian Axsmith. Biology Department, University of South Alabama, 5871 University Drive North, Room 124, Mobile, Alabama 36688, USA.
Mac H. Alford. School of Biological, Environmental, and Earth Sciences, University of Southern Mississippi, 118 College Drive #5018, Hattiesburg, Mississippi 39406, USA.
James E. Starnes. RPG, Mississippi Department of Environmental Quality, Mississippi Office of Geology, 700 North State Street, Jackson, Mississippi 39202, USA.
Keywords: Gulf Coast; Hattiesburg Formation; Lemna; fossil palms; Salvinia; Sargentodoxa
McNair, Daniel M., Stults, Debra Z., Axsmith, Brian, Alford, Mac H., and Starnes, James E. 2019. Preliminary investigation of a diverse megafossil floral assemblage from the middle Miocene of southern Mississippi, USA. Palaeontologia Electronica 22.2.40A: 1-29. https://doi.org/10.26879/906
Copyright: July 2019 Paleontological Society.
This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0), which permits users to copy and redistribute the material in any medium or format, provided it is not used for commercial purposes and the original author and source are credited, with indications if any changes are made.
The Miocene Epoch is poorly represented in the megafossil plant record of the eastern United States, and the few scattered sites that have been described are generally not temporally well constrained. This is unfortunate, as the Miocene is widely considered a critical time in the evolution of modern plant communities, including such events as the rise to ecological dominance of C4 grasslands (Bouchenak-Khelladi et al., 2014), the extirpation of several “Asian” taxa from North America (Manchester, 1999), and continued post-Eocene climatic deterioration (Graham, 2010). The only substantial site in the northeastern USA is the stratigraphically isolated Brandon Lignite in Vermont of probable early Miocene age (Tiffney, 1994). In the Mid-Atlantic region, a flora from the Brandywine Formation of Maryland was briefly documented (McCartan et al., 1990), but detailed descriptions are still ongoing (Stults et al., 2011). In the southeast, sinkhole floras from the late Miocene to early Pliocene of eastern Tennessee are now comparatively well documented (Gong et al., 2010). On the Gulf Coastal Plain, work on the middle Miocene Alum Bluff flora of the Florida Panhandle has documented a small megafossil flora (Corbett, 2004), expanding on earlier work by Berry (1916a).
Within Berry’s (1916a) account of the Alum Bluff paleoflora is a brief description of an assemblage from Hattiesburg, Mississippi, that he asserted was coeval with Alum Bluff, and included a poorly preserved palm (Sabalites apalachicolensis) and leaves attributed to Ulmus. Another small flora from the Hattiesburg Formation was described from fossils recovered during a groundwater survey (Brown, 1944). The plants were identified by the paleobotanist Roland Brown as probable Taxodium, Salix, either Morus or Celtis, and unidentified monocot fragments. None of the plant fossils mentioned in these reports were figured or described in any detail, and subsequent attempts at finding more specimens from these localities have not been successful.
Recent discoveries of fossil plants from previously unexplored exposures of the Hattiesburg Formation along the Bouie River (also commonly known as the Bowie River) in southern Mississippi are providing important information on plant diversity from this otherwise poorly known interval (Figure 1, Figure 2, Figure 3). Such sites are critical for expanding our understanding of plant evolution in this biodiverse part of North America. The following account should be considered preliminary, as several of the taxa require additional study and separate detailed publications. However, most components of the flora are presented below in considerable detail.
The central Gulf Coastal Plain, including the Hattiesburg area, contains thick sequences of Neogene sediments, but their delineation has been historically controversial. As noted by Dockery and Thompson (2016), the sequence is mostly a monotonous series of fluvial and deltaic units, resulting in poorly constrained formational boundaries (Figure 3). In addition, extensive surface exposures are uncommon and are poor in diagnostic marine index fossils, making it difficult to place the Hattiesburg plant fossils in a precise stratigraphic and temporal context. Nevertheless, we propose that the site occurs in the uppermost Hattiesburg Formation and is of probable middle Miocene age based on several lines of evidence presented below.
The fossil site (MS. 18.001) occurs along both sides of the Bouie River, where the river makes a hard turn to the north just below the Glendale Avenue bridge on the northern border of the city of Hattiesburg, Mississippi (N 31.34969; W 89.30537) (Figure 2.2). Here, the river cut exposes about seven vertical meters of section along the eastern bank, with lower exposures present to the west (Figure 1.3). The site is attributed to the upper Hattiesburg Formation based on nearby well log data correlated with deeper well logs, as well as on the stratigraphic relationship to underlying units (Figure 2).
The Hattiesburg Formation consists of thick channel sands separated by silty to fine-sandy clays in the middle, alluvial phase of the Grand Gulf Group. This portion of the post-Vicksburg prograding deltaic wedge overlies the Catahoula Formation of late Oligocene to early Miocene age (Foster, 1941) (Figure 2.1, Figure 3). The Bouie River site is typical of the non-marine finer-grained sediments of the Grand Gulf Group. It consists of pyritic, gray-green, thinly-bedded to laminated alternating clays, silts, and fine-grained sands. This repetitive sequence is slightly carbonaceous to highly lignitic, with well-preserved fossil leaves and leaf hash along fissile partings and large, embedded, lignitized logs. The depositional setting is likely an interfluvial over-bank deposit in the lower distributary portion of a coastal river. The site is projected downdip to be not far below the gradational contact with the overlying Pascagoula Formation (Figure 3).
A middle Miocene age designation is proposed for the Bouie River site based on correlations of this section to marine equivalents offshore. Also supporting this designation is a sparse collection of vertebrate material reported from other nearby Hattiesburg Formation outcrops. A strongly age-diagnostic example is a Teleoceras medicornutum tibia that was found as float in association with a fossil llama proximal phalanx on a Hattiesburg Formation outcrop on the Middle Fork of the Homochitto River near Meadville in Franklin County, Mississippi (Dockery and Thompson, 2016). Teleoceras medicornutum is a Barstovian, middle Miocene rhinoceros previously known from the Fleming Formation, Burkeville fauna of the Gulf Coastal Plain of Texas (Prothero and Manning, 1987). Overlying the Hattiesburg Formation in Mississippi is the Pascagoula Formation, which contains the index fossil Rangia johnsoni, a late Miocene bivalve mollusk. The Pascagoula Formation also contains a sparse late Miocene vertebrate fauna in Stone County and Amite County, Mississippi, that may be correlative to the better documented Mauvilla Fauna of southern Alabama (Hulbert and Whitmore, 2006).
MATERIALS AND METHODS
All of the described and figured specimens were collected from the Bouie River site in Hattiesburg, Mississippi (N 31.34969; W 89.30537), and are curated in the Mississippi Museum of Natural Science (MMNS) Paleontology Collection in Jackson, Mississippi. The descriptions are primarily based upon the figured specimens and include MMNS PB-82, 83, 84, 85, 86; MMNS 87.1, 87.2; MMNS 88.1, 88.2, 88.3, 88.4, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105.1, 105.2, 106, 107, 108, and 109. Specimen numbers are also listed in the corresponding figure captions.
The fossils are mostly impressions and some compressions recovered by splitting the matrix in which they are contained. A few specimens are represented by relatively uncompressed, three-dimensional seeds and fruits that emerged from the matrix when split. When needed, an assortment of needles and brushes was used to fully expose the plant organs. Fragmentary cuticles, when recoverable from the compressions, were obtained by maceration of the organic matter using various dilute concentrations of sodium hypochlorite. Specimens of wood consistent with the identifications here were examined and will be presented in a future publication.
Larger specimens were photographed on a copy stand using a Nikon D3200 digital SLR camera, with close-ups using a Tokina 100 mm lens. Micrographs were taken with a Zeiss Stemi 2000-C dissecting microscope on a Diagnostic Instruments boom stand, and a Zeiss Axio Imager 2 system. Salvinia sporocarp contents were obtained by removing them from the matrix surface with needles followed by mounting on stubs for SEM study using an FEI Quanta 200 environmental scanning electron microscope. The sporocarp contents were not sputter coated.
The systematic identifications presented here are based primarily on herbarium reference material, living material, and various literature sources cited in the description section below. Classifications are based on Chase and Reveal (2009), along with the Angiosperm Phylogeny Group (APG) IV (2016), Smith et al. (2006), and Reveal (1992, 2012). However, for readability and accessibility, taxa are also organized under informal but widely-used groupings such as “ferns,” “conifers,” and “angiosperms.”
Leaf morphological descriptions are based as far as possible on the Manual of Leaf Architecture (Ellis et al., 2009). Nearly all the fossils are assignable to extant families and genera. In some cases where multiple lines of evidence support it, they are identified to the species level. A “cf.” designation preceding a name indicates that the fossil is a possible representative of the taxon, but more complete or better-preserved material is required for verification.
Subclass POLYPODIIDAE Cronquist, Takht. and Zimmerm., 1966
Order OSMUNDALES Bromhead, 1838
Family OSMUNDACEAE Martinov, 1820
Genus OSMUNDA Linnaeus, 1753
OSMUNDA sp. indet. cf. spectabilis Willdenow, 1810
Description. Two compressions and one impression fossil of sterile pinnules represent Osmunda. The pinnule apices are acute. A partial petiolule is visible at the basal end of the most complete pinnule (Figure 4.1). Approximately 10 teeth per cm occur along the thickened margin. From the pinnule mid-veins arise numerous, regularly-placed, closely-spaced lateral veins. Branching of the pinnule lateral veins occurs mostly near the mid-vein, less frequently midway to the margin, culminating in veinlets of the same gauge with parallel courses ending at the margin (Figure 4.2). The average distance between these parallel secondary veins is 332 μm.
Remarks. The fossil record of the Osmundaceae extends back to the Permian Period (Hewitson, 1962; Tidwell and Ash, 1994), and many fossil examples demonstrate remarkable morphological stasis over hundreds of millions of years (Miller, 1967; Phipps et al., 1998; Vavrek et al., 2006; Bomfleur, 2014). Currently, the family contains four genera: Leptopteris, Osmunda, Osmundastrum, and Todea (Metzgar et al., 2008). Within the genus Osmunda are three subgenera, Claytosmunda, Osmunda, and Plenasium. One character that differentiates subgenus Osmunda from Claytosmunda and Plenasium is the presence of bipinnate leaves, a feature recognized when the pinnule is connected to the costa by a slender petiolule. While a direct connection to the costa is not visible in the Hattiesburg Formation fossil, the presence of a partial slender petiolule and smoothly rounded petiole bases indicates a probable originally bipinnate frond like that of subgenus Osmunda species. Four species presently comprise subgenus Osmunda: O. regalis (European), O. japonica and O. lancea (southeastern Asia), and O. spectabilis (Western Hemisphere). Most authorities now accept that O. spectabilis is a separate species and not a variety or subspecies of O. regalis (Gray, 1856; Underwood, 1903; Löve and Löve, 1977; Arana and Ponce, 2015). Osmunda spectabilis occurs only in the Western Hemisphere, is phylogenetically distinct from the European O. regalis, and is contained within the clade including the Japanese species O. japonica and O. lancea (Metzgar et al., 2008; Arana and Ponce, 2015).
The Hattiesburg Formation fossils indicate affinity with the genus Osmunda (Cobb et al., 2005), and there are possible species-level features when compared with similar extant forms of subgenus Osmunda. The pinnule lateral veins of O. japonica and O. regalis branch several times; those of O. lancea and O. spectabilis branch much less frequently (Hewitson, 1962), in accord with the Hattiesburg fossils. Our observations of living and herbarium material of O. spectabilis indicate lateral veins that branch nearer the midvein and exhibit a more uniform parallel course toward the margin (as in the Hattiesburg Formation fossils, and O. lancea), and a narrower distance between veinlets (Imaichi and Kato, 1992). Because the fossils occur within the extant range of O. spectabilis and considering the extremely long temporal range of many Osmunda species, we suggest that it is a representative of that species rather than O. lancea or an extinct form. At this point, affinity with Telmatoblechnum serrulatum (Blechnaceae) cannot be entirely ruled out due to convergence in leaf architecture. More complete material will be required to resolve the exact affinities of this fern.
Order POLYPODIALES Mett. ex A.B. Frank, 1877
Family BLECHNACEAE Newman, 1844
Genus WOODWARDIA Smith, 1793
Woodwardia sp. indet. cf. virginica (Linnaeus) Smith, 1793
Description. Two specimens of incomplete, sterile, pinnatifid pinnae comparable to Woodwardia cf. virginica have been recovered. Fronds were at least 5.3 cm long x 2.3 cm wide and 6.0 cm long x 1.3 cm wide. The largest pinnule on the first frond is 1.3 cm x 0.4 cm, and the largest pinnule lobe visible on the second frond is 1.0 cm x 0.2 cm. The pinnule lobes become smaller distally (Figure 4.3). On either side of the pinnule midvein, a secondary vein connects consecutive pinnule lobes. The secondary veins form areoles that parallel both sides of the pinnule midvein. Except for the pinnule lobe midvein, veins within the lobes appear to be of the same gauge. Exmedial veins reach the margin unbranched or fork once before terminating at the margin. Minute teeth occur on the pinnule margin (Figure 4.3-4).
Remarks. Fossils of Woodwardia (and Woodwardites, a name sometimes used for fossil representatives) are common in Cenozoic deposits in Europe, North America, and Asia (Smith, 1938; Collinson, 2001; Pigg et al., 2006). Woodwardia virginica has been definitively identified from the Miocene Yakima Canyon locality in Washington based on excellent reproductive and vegetative specimens (Pigg and Rothwell, 2001). Although fragmentary, the fronds from the Hattiesburg Formation are also very similar to those of W. virginica. Common features include the overall morphology, and distinctive venation features like the areoles paralleling the pinnule midvein and the basal secondary vein connecting the pinnules. Nevertheless, we employ the “cf.” designation pending the discovery of more complete material. The current distribution of W. virginica in North America forms a semi-circular pattern along the southern borders of the Great Lakes (except Lake Superior), running along the Atlantic Coastal Plain (from southern New Brunswick, Canada to the Florida peninsula), and then southwestward along the Gulf of Mexico Coastal Plain to East Texas (Cranfill, 1993+). This is the first suggested occurrence of the genus and species in the fossil record from within its current range.
Family SALVINIACEAE Martinov, 1820
Genus SALVINIA Séguier, 1754
Description. Two specimens of nearly complete plants, including two sets of paired sterile leaves, connected filamentous leaves, and attached sporocarps have been found (Figure 4.7). Numerous floating leaves (often still characteristically paired on a rhizome fragment), several submerged filamentous leaves with attached sporocarps with characteristic elongate trichomes, and numerous unattached sporocarps have also been recovered. The floating leaves are obovate or oblong, measuring 8-10 x 6-8 mm, with round or slightly emarginate apices and round or slightly cordate bases (Figure 4.5). The mid-vein does not run an entire course to the apex but branches into looping veinlets. Areoles are present on both sides of the mid-vein for its entire course. Evenly spaced veinlets arise from these areoles and diverge at approximately 45°, then form long loops that further divide into smaller loops near the margin. Evenly spaced trichome bases are visible, but the typical “egg-beater” trichomes are no longer attached. Also, many of the leaves probably represent the abaxial surface which lacks the trichomes. Filamentous leaves branch into hair-like segments. Internal components of the sporocarps are often orange in color and often contain both megasporangia and microsporangia (Figure 4.6-7).
Remarks. These fossils occur frequently with other aquatic plants such as Lemna and Cyperus described below. They appear most similar in size and overall morphology to Salvinia minima, but more study is required to acquire a more complete set of comparative features. Of particular interest is the finding that many of the fossil sporocarps from the Hattiesburg locality contain both megasporangia and microsporangia (mixed) (Figure 4.6). Salvinia is widely considered to have sporocarps containing only one of the sporangium types. However, Bierhorst (1971) noted that there is a tendency for some mixed development in the modern forms.
There has been some debate about whether extant Salvinia is native to North America. Although it has been treated as native in several floras, the earliest well-documented collection was made from populations in Florida in the 1930s, and its expansion from there has been well documented (Jacono et al., 2001). Nearly complete plants lacking sporocarps were described from the Eocene of Tennessee (Berry, 1925). This, along with the report here, indicates that Salvinia was a natural component of the southeastern flora at least from the Eocene to the middle Miocene. Globally, spore records of Salvinia are common in the fossil record, but vegetative remains are known from the Upper Cretaceous of Mexico, the Cretaceous/Paleogene of India, the Eocene of France, United States, and China, and the Miocene of Poland and Bohemia/Czech Republic (Collinson, 1991, 1996, 2001; Collinson et al., 2001; Wang et al., 2014).
Subclass PINIDAE Cronquist, Takht. and Zimmerm., 1966
Order CUPRESSALES Bromhead, 1838
Family CUPRESSACEAE Richard ex Bartling, 1830
Genus TAXODIUM (Linnaeus) Richard, 1810
Description. Leaves are about 5-18 x 1-1.5 mm, alternately attached on deciduous branchlets (Figure 5.1). Leaf apices are acute. Leaves have a single vascular strand. Stomata are oriented transversely to the primary vein (Figure 5.2).
Remarks. Leaf fossils are common at the site, but intact cuticle was very difficult to recover. No cones or cone scales have been found, so we refrain from a species level identification. Taxodium was previously reported from the Tallahala Creek Locality of the Hattiesburg Formation, although no figures were provided or mention made of what plant organs were examined (Brown, 1944). Acute leaf apices and perpendicularly orientated stomata separate Taxodium from other cupressaceous genera (Farjon, 2005; Mai et al., 2013) and are present in the Bouie River material. Taxodium was widespread in Eurasia and North America during the Paleogene and Neogene (Knobloch and Mai, 1986; Aulenback and LePage, 1998; Kunzmann et al., 2009). The distribution of the genus is relictual, currently found only in the southeastern United States (T. distichum and T. ascendens) and Mexico (T. mucronatum) (Kunzmann et al., 2009). In the southeastern Neogene fossil record, it is a common component of the Miocene Brandywine flora of Maryland and the Pliocene Citronelle Formation flora in south Alabama (Stults et al., 2011).
Order LAURALES Perleb, 1826
Family LAURACEAE de Jussieu, 1789
Gen. et sp. indet. cf. Persea Miller, 1754
Description. Two entire-margined lauraceous leaves with possible affinities to Persea were recovered. One is obovate (7.8 cm x 4.0 cm) (Figure 5.3) and one elliptical (8.6 cm x 3.4 cm). Bases are decurrent in both specimens. The apex is incomplete in the obovate specimen, but obtuse and appears to have been acuminate. The apex in the elliptical specimen is acute, acuminate. Primary venation is pinnate (Figure 5.4, 6). Secondary venation is difficult to categorize—somewhat brochidodromous to somewhat eucamptodromous. Secondary veins are non-arcuate, irregularly spaced, with angles diverging from the primary vein fairly consistently at about 45°. Intersecondaries are few, but when visible appear more obvious toward the apex. The marginal vein is substantial and categorized as a secondary vein. Tertiary venation is irregular reticulate. Fourth and fifth order veins in the obovate specimen are regular reticulate (Figure 5.4). Stomata are paracytic (Figure 5.5). Spherical oil bodies are visible in the mesophyll of the obovate specimen under light microscopy (Figure 5.7) and under epifluorescence in the elliptical specimen.
Remarks. The features of these leaves clearly indicate affinity with the Lauraceae, especially the size and distribution of the oil bodies in the mesophyll (Watson and Dallwitz, 1992+). This feature is comparable to many extant species including Lindera benzoin, a common species of sandy riparian habitats of the southeastern United States. The paracytic stomatal complexes are also common for taxa within the Lauraceae (Watson and Dallwitz, 1992+; Christophel et al., 1996) (Figure 5.5). The regular reticulate fourth and fifth order venation is common in many taxa including Persea, as are the strong marginal secondary veins and apparent coriaceous texture. Overall, these fossils appear most similar to Persea, but until more information is available, we use the “cf.” designation.
Currently, there are about 50 genera and 2500-3000 species of Lauraceae, several genera of which (e.g., Lindera, Litsea, Persea, and Sassafras) show the classic disjunction between tropical/subtropical areas in southeastern Asia and North America (Chanderbali et al., 2001). However, Ocotea is the most speciose lauraceous genus in the neotropics (Chanderbali et al., 2001). Fossils of the family appear by the mid-Cretaceous, but the Asian/American disjunction of the Perseae-Laureae clade ensued with late Eocene cooling (Wolfe, 1975; Tiffney, 1985; Zheng, 1983; Drinnan et al., 1990; Eklund and Kvaček, 1998; Chanderbali et al., 2001). Lindera, Persea, and Sassafras have been reported from the Upper Pliocene of South Alabama (Stults and Axsmith, 2015).
Order DIPSACALES Juss. ex Bercht. and J. Presl, 1820
Family ADOXACEAE Meyer, 1839
Genus SAMBUCUS Linnaeus, 1753
Description. Leaflets are mostly ovate, although a few are elliptic. Laminar width is generally symmetrical, but slight asymmetry occurs in several leaflets at the petiolule insertion point (a common feature of the leaflets of extant Sambucus) (Figure 5.8). Most leaflets are incomplete; however, width ranges of the original leaflets are about 2.0-4.3 cm. The smallest complete leaflet length is 4.3 cm, and larger specimens were greater than 7.2 cm long. Margins are unlobed and serrate (Figure 5.8-9). Leaflet bases are acute and convex. When a petiolule portion is present, the base appears to be decurrent. However, most of the fossil leaflets appear to have been sessile, recognizable by the asymmetry at the basal end and lack of a petiolule. Apices are acute and straight to acuminate. Primary vein framework of the leaflets is pinnate. Secondary veins are mostly cladodromous, excurrent, irregularly spaced with inconsistent angles, and conspicuously arcuate. Intersecondaries are rare, and if present, observable only at the basal end. Third order veins are mixed percurrent with inconsistent angles. Fourth order venation is also mixed percurrent. Fifth order veins are sometimes observable, but difficult to characterize. The leaflet margins have one order of relatively large, serrated teeth, although an extraneous tooth is sometimes evident (a feature common in extant Sambucus). Teeth are regularly spaced with angular sinuses, straight to concave on the distal side and convex on the proximal side. Tooth apices are simple (more like those of S. racemosa than S. canadensis, which displays a setaceous tooth apex) (Figure 5.10). The number of teeth per cm is also more similar to that of S. racemosa.
Remarks. Twenty leaflets are assigned to Sambucus sp., making it the most common eudicot taxon in the collection. The genus Sambucus includes about 25 species of shrubs, small trees, and herbs distributed mainly in temperate and subtropical regions of the Northern Hemisphere, in both wet and dry soils (Little, 1977; Eriksson and Donoghue, 1997). Seven species occur in North America, three occurring in eastern portions of the continent. Sambucus canadensis (considered a subspecies of S. nigra by some) occurs mainly in the eastern United States and some areas west of the Mississippi River. Sambucus racemosa is widespread in North America but in the southeast is native only in Georgia, North Carolina, Virginia, Tennessee, Kentucky, and Arkansas. Sambucus ebulus inhabits only the northeastern portion of North America (Little, 1977; USDA GRIN online database, www.ars-grin.gov).
The fossil record of Sambucus is based mostly on endocarps found in the Paleocene through Holocene of Europe, Asia, and North America (see references in table 4 of Huang et al., 2012). Leaves/leaflets have been less reliably recorded, but a North American record of a compound leaf and additional leaflets has been verified for the Eocene of Florissant, Colorado (MacGinitie, 1953; Manchester, 2001). Leaflets of Sambucus are easily disassociated from the overall compound leaf, as noted in natural populations of S. canadensis in southern Alabama (based mainly on personal observations of an extensive population along the floodplain of Muddy Creek in the Muddy Creek Management Area of southern Mobile County, Alabama). These fossils are somewhat similar to those described as Ulmus floridana by Berry (1916a), which also occur at the Alum Bluff locality in Liberty County, Florida (Corbett, 2004). However, the leaves described here as Sambucus have prominent simple teeth. Additionally, the secondary veins are arched and the cladodromous organization appears variable and disorganized compared to secondary venation patterns typically seen in Ulmus or Prunus. Furthermore, these leaflets are different from Prunus leaves in lacking glands on the teeth or leaf base.
Order FABALES Bromhead, 1838
Family FABACEAE Lindley, 1836
Genus CERCIS Linnaeus, 1753
Description. One compression/impression fossil leaf is broadly ovate measuring 4.6 cm wide, but if assuming symmetry on both sides of the mid-vein, the original leaf would have been about 5.1 cm wide (Figure 5.11). The leaf is un-lobed with an entire margin. The base is symmetrical, cordate. Venation is actinodromous, consisting of five primary basal veins. Simple agrophic veins are exmedial to the outermost primary veins. One agrophic complex originates with a basal vein; therefore, six total basal veins are observable. Assuming symmetry, a seventh basal vein was probably present on the opposite agrophic complex. Major secondary venation is camptodromous to brochidodromous. Secondary interior veins are present between the major primary veins. Third order veins are mixed percurrent to irregular reticulate. Fourth and fifth order venation patterns are irregular reticulate (Figure 5.12). A 1.5 mm thickening of the petiole near the base is interpreted as a pulvinus (Figure 5.11).
Remarks. Observations of herbarium specimens of the four extant North American species of Cercis (C. canadensis, C. mexicana, C. reniformis, C. occidentalis) primarily show seven primary veins rather than five (as we describe for the Hattiesburg leaf fossil). However, examination of C. canadensis specimens in the USAM and USMS herbaria demonstrates that five to nine basal veins can be recognized, the determination of which to consider primary veins vs. non-primary basal veins possibly being dependent upon the size of the specimen. There are modern species in eastern Asia that appear predominantly 5-veined (e.g., C. racemosa, Chen et al., 2010). The well-described fossil North American Eocene species C. parvifolia (Florissant of Colorado) also usually shows five primaries. Apparently, this leaf venation character does not allow differentiation among the species of Cercis (Jia and Manchester, 2014). Agrophic veins appear in some herbarium specimens of C. canadensis but not in others. The only previous record of Cercis from eastern North America is identified as the extant species C. canadensis from the Pleistocene of North Carolina (Berry, 1926). The Bouie River site Cercis is significant in that it represents the oldest leaf fossil of the genus in eastern North America.
Order FAGALES Engl., 1892
Family FAGACEAE Dumortier, 1829
Genus QUERCUS Linnaeus, 1753
Section Lobatae Loudon, 1830
Quercus sp. 1
Description. Three Quercus section Lobatae compression/impression leaf fossils are identified. Lobes are shallow. Bristle tips approximately 2 mm long occur on the lobe apices. Primary venation is pinnate. Some secondaries end in the lobe bristle tips, while other secondaries are brochidodromous. Intersecondaries are present. Third order, fourth order, and fifth order veins are irregular reticulate. A fimbrial vein is present (Figure 6.1).
Section Quercus Hickel and Camus in Camus, 1938
Quercus sp. 2
Description. Leaves are obovate, ovate, or elliptical. Most have shallow lobes, but some have more definitive lobes. One mostly complete specimen is 6.6 cm long x 2.4 cm wide. Primary venation is pinnate. Some secondary veins terminate at the margin of the lobe apex, while other secondaries are brochidodromous or cladodromous. Secondary veins diverge at approximately 45°. Intersecondaries are present. Third and fourth order venation is irregular reticulate. The petiole of one of the attached leaves is 1 cm long. Several specimens have rounded lobes (Figure 6.3), while a few have pointed lobes with no evidence of bristles (Figure 6.2), so it appears that there may be at least two species of section Quercus in the collection.
Remarks. Six Quercus section Quercus leaf fossils are identified. Four are compression fossils (Figure 6.2), including one slab with two young leaves still attached to the stem (Figure 6.3). Two are impressions, one with a counterpart. Comparison of these leaves with Quercus species from the USAM and USMS herbaria suggests the possibility that oaks related to Quercus alba (those with rounded lobes) and Quercus lyrata (those with pointed lobes, but no bristles) may be a part of this flora, but this is uncertain based on the available material. More leaf specimens and some reproductive structures will be needed for confident identification.
Quercus is widely distributed throughout the Northern Hemisphere, with its greatest diversity in the southeastern United States, the highlands of Mexico, montane subtropical Eurasia, and East Asia (Nixon, 1993, 1997). The presence or absence of lobes, in conjunction with the presence or absence of bristles, provides leaf characters useful in subgenus determination (Daghlian and Crepet, 1983). Quercus section Lobatae species have stiff bristles at the end of lobes/teeth, or if entire, at the end of the leaf apex (Daghlian and Crepet, 1983). Quercus section Quercus species (if lobed) usually have lobes without bristles, although the ends of the lobes may be pointed and mucronate (Nixon and Muller, 1997), as is the case with Q. lyrata. Quercus section Protobalanus usually has spiny teeth without bristles and is found mainly in the western U.S. (Manos, 1997). Quercus fruit and leaf fossils have been identified in North America from the middle Eocene of the West, while Quercus leaf and fruit records from the eastern portion of the continent occur in the late Miocene Brandywine flora and the late Pliocene Citronelle Formation flora (MacGinitie, 1941; Wolfe, 1973; Crepet and Daghlian, 1980; McCartan et al., 1990; Manchester, 1994; Stults, 2003). Records of section Lobatae fossil leaves have been recovered from the Oligocene of eastern Texas and from the Citronelle Formation of Alabama (Daghlian and Crepet, 1983; Stults, 2003). The Hattiesburg Formation fossils help to fill a gap in the Miocene North American record of Quercus for both sections Lobatae and Quercus.
Family JUGLANDACEAE de Candolle in Perleb, 1818
Genus JUGLANS Linnaeus, 1753
Section RHYSOCARYON Dode, 1909a, 1909b
Description. Globose nut 3.1 cm long x 2.1 cm wide, but wider equatorially than axially. The nutshell is longitudinally grooved, and the surfaces between the grooves are slightly rugose to smooth (Figure 6.4 top). The nutshell is thick, measuring 3 mm at the equatorial section. The two-lobed kernel is preserved and visible within the nutshell (Figure 6.4 bottom).
Remarks. Juglans is divided into four sections by Manning (1978). The most recent molecular phylogeny by Aradhya et al. (2005) indicates that these sections are mostly monophyletic. However, section Trachycaryon, represented by one species (Juglans cinerea), is nested within section Rhysocaryon, which contains all of the New World Juglans. Aradhya et al. (2005) also found little resolution among Rhysocaryon species, even though the clade as a whole was well-supported. Our material shows typical features of section Rhysocaryon, but its fewer grooves and smooth, non-warty between-groove ridges are more like those of Juglans major or similar species extant in the western U.S. than those of the extant eastern species Juglans nigra. There is no evidence for abrasion or long transport, suggesting that this is a real feature and not a taphonomic artifact. There is also evidence for deep ridges in the shell cross-section (Figure 6.4 top). The fossil is also dissimilar in these features to Juglans cinerea, whose distribution is mainly east of the Mississippi River, but not reaching the Gulf of Mexico Coastal Plain. The precise relationships of this fossil will be considered in more detail in a forthcoming study. Juglans fruits have occasionally been found in western North American deposits dating back to the Eocene (Manchester, 1994). The fossils described here represent the first fossil record of Juglans fruit from eastern North America.
Genus CARYA Nuttall, 1818
Section APOCARYA de Candolle, 1864
Carya sp. 1 and 2
Description. Carya sp. 1 fruit is longer and thinner, measuring 2.7 x 1.1 cm. The husk is four-valvate, but the sutures are more obviously flanged than in Carya sp. 2. The husk is also much thinner at approximately 0.51 mm thick. Carya sp. 2 fruit is oblong, measuring 2.0 x 1.5 cm. The husk is four-valvate, has slightly flanged sutures, and is 1.04 mm thick. The nutshell shows the quadrangular shape corresponding to the lines of dehiscence of the husk (Manchester, 1987).
Remarks. Fossil evidence suggests Carya evolved in North America at the end of the Cretaceous, approximately 67 Ma (Zhang et al., 2013). Two sections of the genus are recognized, and their divergence apparently occurred about 22 Ma, with one section corresponding to the species in East Asia and the other to those of eastern North America (Zhang et al., 2013). The current center of diversity for Carya is eastern North America (Thompson et al., 1999). However, until recently, the majority of North American Carya megafossils were identified from western localities (as early as the Eocene), where the genus no longer exists (Manchester, 1987, 1999; Zhang et al., 2013). The contemporary center of diversity for Carya is now producing a much-improved fossil record, with megafossils identified from the late Miocene Brandywine locality (Maryland), the Miocene-Pliocene Gray Fossil site (eastern Tennessee), and the Citronelle Formation (southern Alabama) (Berry, 1916b; McCartan et al., 1990; Huang et al., 2014; Stults and Axsmith, 2015). The new Miocene records from the Hattiesburg Formation described here help fill a Miocene gap for the biogeography of Carya. So far, no leaves of Carya have been recovered from the Bouie River site, but pollen is common and will be described in a forthcoming publication.
Order RANUNCULALES Dumortier, 1829
Family LARDIZABALACEAE Brown, 1821
Genus SARGENTODOXA Rehder and Wilson, 1913
Description. A single globose seed 6 mm long with a truncate apex and round base has been found. Diagnostic features include a smooth surface, thick seed coat (up to about 300 µm) (Figure 6.7 top), a prominent hilar rim, and a central micropyle (Figure 6.7 bottom).
Remarks. Sargentodoxa is represented by a single species of vine (S. cuneata) found in parts of China, Laos, and Vietnam. It is recognizable by its distinctively smooth, shiny globose seed coat and prominent apical truncation (Tiffney, 1993). A review of the fossil record of the genus by Manchester et al. (2009) included seeds from the middle Eocene of Oregon, the late Eocene to late Oligocene of northwest Saxony, the early Miocene of Vermont, the middle Miocene of Germany, the uppermost Miocene-lower Pliocene of Alsace, and the Pliocene of Italy. There is also a brief mention of Sargentodoxa seeds from the latest Miocene-earliest Pliocene Gray Fossil site in northeastern Tennessee (Gong et al., 2010). The Hattiesburg Formation seed expands the fossil record of the genus to the Gulf Coastal Plain of North America during the middle Miocene.
Order ROSALES Bercht. and J. Presl, 1820
Family MORACEAE Gaudichaud-Beaupré, 1835
Genus MORUS Linnaeus, 1753
Description. The largest specimen is 4.5 cm long x 4.0 cm wide with three lobes; a smaller specimen is 4.2 cm long x 3.1 cm wide and not lobed. Leaf attachment is petiolate, a 0.8 cm petiole present on the smaller specimen. Leaf margins are toothed. Base angles are obtuse, base shape is asymmetrical in the large specimen, convex in the smaller fossils. Primary venation is palmate with three primary veins. Simple agrophic veins are present. Costal secondary veins are semicraspedodromous. Third order veins are usually straight percurrent or sinuous percurrent, but occasionally alternate percurrent. Fourth and fifth order venation is regular, reticulate. Teeth are regularly spaced, convex distally and proximally, approximately three teeth per centimeter, the principal tooth vein terminating at the non-glandular apex (Figure 6.8).
Remarks. Morus consists of 10-13 species. Four of these species (M. celtidifolia, M. insignis, M. microphylla, and M. rubra) are native to North America, but only M. rubra is native to eastern North America (Nepal and Ferguson, 2012). Morus celtidifolia and M. microphylla are sister taxa to M. rubra, all three being sister to the Asian species of Morus (Nepal and Ferguson, 2012). Two previously published records for Morus are from the Alaskan Cenozoic and the New Jersey Pleistocene (Hollick, 1892; Keller et al., 1961).
Glen Brown (1944) sent specimens of Hattiesburg Formation plant fossils to paleobotanist Roland Brown, who identified some as “Morus or Celtis.” Assuming our fossils are similar, we suggest that our material represents Morus rather than Celtis based on the following observations. Some Celtis leaves have venation similar to that of Morus (up to the fourth and fifth orders), but Celtis leaves are more elongate with less regularly spaced teeth. To support this quantitatively, we performed a small study of L:W ratios between the two genera based on specimens in the USAM herbarium. Using 21 specimens for Celtis (which included C. laevigata, C. occidentalis, and C. tenuifolia), one mature leaf per sheet was selected that exhibited the least difference between length and width measures (i.e., the lowest L:W ratio). Twenty specimens of Morus (including M. alba, M. nigra, and M. rubra) that showed the largest difference between length and width (i.e., the highest L:W ratio) were measured. The mean L:W ratio for Celtis using specimens with the lowest ratios was 2.2:1, whereas the mean L:W ratio for Morus using specimens with the highest ratios was 1.6:1, a difference of nearly 40% even when using specimens biased to show as much overlap as possible. Another noteworthy difference between Celtis and Morus leaves is the lobation. Morus is typically lobed, whereas Celtis leaves are rarely lobed (Godfrey, 1988). The broad teeth that are convex both proximally and distally on the fossils are also more characteristic of Morus than Celtis. The circumscription of the family Moraceae currently includes 37 genera (Datwyler and Weiblen, 2004). Of these, only Broussonetia, which is currently native to Southeast Asia, has leaves similar to those of Morus, but with very hairy abaxial surfaces. However, unless reproductive material becomes available to indicate otherwise, Morus is the most likely assignment, as the genus is present in the same region today.
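The arithmetic behind the reported difference can be sketched as follows (a minimal illustration using only the published mean ratios, not the underlying herbarium measurements):

```python
# Minimal sketch of the leaf length:width (L:W) comparison reported above.
# Only the published mean ratios are used here; the individual herbarium
# measurements are not reproduced.

celtis_mean_lw = 2.2  # mean L:W of 21 Celtis leaves chosen for their lowest ratios
morus_mean_lw = 1.6   # mean L:W of 20 Morus leaves chosen for their highest ratios

# Relative difference of the Celtis mean over the Morus mean.
relative_difference = (celtis_mean_lw - morus_mean_lw) / morus_mean_lw
print(f"Celtis leaves are ~{relative_difference:.0%} more elongate on average than Morus")
# Output: ~38%, consistent with the nearly 40% difference cited in the text.
```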
Order PROTEALES Dumort., 1829
Family PLATANACEAE Lestiboudois, 1826
Genus PLATANUS Linnaeus, 1753
Description. Six Platanus compression leaf fossils (two complete and four partial specimens), one partial fruiting head, and three foliose stipules are identified. The largest leaf specimen is 20.0 cm long x 20.5 cm wide (including a 2.6 cm petiole). Unfortunately, in this largest specimen, a small portion near the base is overlain by another leaf fossil masking the complete primary venation. A mostly complete smaller specimen 12.0 cm long x 13.0 cm wide, along with a fragmentary specimen (with a 1.7 cm petiole), provides evidence for the palinactinodromous primary venation typical of the genus (Figure 7.1). Margins are serrate/dentate with at least five lobes visible on the largest specimen; three large lobes and two shallow lobes are identifiable on the smaller complete specimen. Length/width ratios are approximately 1:1. Leaf bases are either obtuse/rounded or lobate; apices are acute. Compound agrophic veins are evident. Costal second order veins are craspedodromous; third order veins between costal secondaries are a percurrent mixture (straight, sinuous, alternate percurrent). Fourth order veins are irregular reticulate and more difficult to pinpoint than the fine meshwork of the fifth order veins, which are regular reticulate. Margins have simple teeth of one order, though variable in size (Ellis et al., 2009). Rarely a small, secondary tooth is present. Teeth occur 1-3 per cm; smaller teeth are regularly spaced, while larger teeth are more often irregularly spaced. Sinuses between the teeth are rounded. Teeth are concave/straight both distally and proximally. Two mostly complete stipules with evidence of five pointed lobes and a few teeth between lobes, and one incomplete stipule showing three lobes and no teeth are recognized (Figure 7.2). Of the more complete stipule specimens, one measures 3.5 cm at its widest while the other measures 2.7 cm at its widest point. The larger stipule base displays the broad attachment that would have encircled the petiole. The partial fruiting head measures 1.9 cm; eight achenes are visible, and one achene shows a persistent style that is about 9 mm long x 1 mm wide. Hairs of the pappus are attached to the achene bases (Figure 7.3).
Remarks. The fossil record of Platanus fruits and leaves begins in the Paleocene of eastern Russia (Maslova, 1996; Manchester, 1999). Miocene records occur in western North America in the Clarkia and Succor Creek floras (Smiley and Rember, 1985; Fields, 1996). Probable fossils of P. occidentalis have been reported from the late Miocene Brandywine flora of Maryland, and definite P. occidentalis fossils have been identified from the Pliocene Citronelle Formation (McCartan et al., 1990, Stults and Axsmith, 2015). Of the three (or four) species of Platanus currently found in North America, P. racemosa and P. wrightii (or possibly P. racemosa var. wrightii) occur in western regions, while P. occidentalis has a wide distribution along streams and rivers in eastern North America (Godfrey, 1988). The Mississippi fossils described here are very similar to P. occidentalis and occur within its current range; however, we refrain from a species level identification at this time based on their age and the low levels of morphological differentiation of most modern forms.
Order MALPIGHIALES Juss. ex Bercht. and J. Presl, 1820
Family SALICACEAE Mirbel, 1815
Genus POPULUS Linnaeus, 1753
Populus sp. indet. cf. deltoides Bartram ex Marshall, 1785
Description. The smaller leaves range from 2.6-4.1 cm long x 3.4-3.6 cm wide. One of the larger leaves is 6.5 cm long. The other larger leaf had an estimated width of 6.8 cm. The overall shape is ovate, and the margins are toothed. Four leaves have a truncate base, while one appears to be slightly cordate. Apices are acuminate, one extending into an elongated tip (Figure 7.4). Primary venation is pinnate. Compound agrophic veins are visible in two of the better-preserved smaller leaves. Second order venation is semicraspedodromous, third order veins are mixed percurrent, and fourth and fifth order venation is irregular reticulate. One order of regularly-spaced teeth is present, approximately 3-5 per cm, with distal flanks concave, straight, or convex and proximal flanks convex or straight. Several teeth are retroflexed. Salicoid teeth are visible on several specimens (Hickey and Wolfe, 1975; Wilkinson, 2007). Tooth sinuses are angular or rounded (Figure 7.5).
Remarks. The fossil record of Populus begins in North America in the late Paleocene, and the genus spreads widely during the Eocene (Eckenwalder, 1980; Manchester et al., 2006). Populus species currently native to wet lowlands in the southeastern United States are P. deltoides, with its very wide, almost continent-wide distribution, and P. heterophylla, with a spottier distribution in parts of the United States east of the Mississippi River (Godfrey, 1988). Populus deltoides has a Pliocene fossil record from the Citronelle Formation of southern Alabama (Stults and Axsmith, 2011). Populus heterophylla leaves are distinct from those of P. deltoides based upon their lack of acuminate apices (Godfrey, 1988).
Three small leaves, two mostly complete and one partial, and two larger partial leaves were recovered from the Hattiesburg Formation. The petiole is evident on two of the small leaves. The specimens have acuminate apices, suggesting that P. deltoides or a possible precursor species existed along the Gulf of Mexico Coast by at least the middle Miocene.
Genus SALIX Linnaeus, 1753
Description. One almost complete linear leaf measures 8.0 cm long x 0.8 cm wide (the original L:W ratio would have been >10:1) (Figure 7.6). Neither base nor apex is present. Although typified as linear (Ellis et al., 2009), the preserved basal portion of the leaf is wider than the apical portion. The abaxial cuticle obtained is fragmentary and not well preserved but contains paracytic stomata about 20 µm long and regular epidermal cells with straight to somewhat curved cell walls (Figure 7.7). These are common epidermal features among extant Salix species (Chen et al., 2008). One order of small teeth (6-7 per cm) is present (Figure 7.8). Tooth shapes are distally straight or concave and proximally straight or convex. Opaque tissue possibly representing a salicoid tooth is present on the apex. Between-teeth sinuses are angular or rounded. First order venation is pinnate. Second order venation is semicraspedodromous and some secondaries are arcuate. Third order venation is irregular reticulate (Figure 7.9).
Remarks. Teeth in Salix are variable, with some species maintaining a large spheroidal gland at the tooth apex while others lack a gland or the gland withers or abscises with age (Reinke, 1876; Weber, 1978; Wilkinson, 2007; Beuchler, 2014). We also note that herbarium samples of Salix may not have completely intact tooth apices, potentially due to degradation during storage. We suggest that the tooth apices of the original leaf were probably spherulate prior to fossilization or to the degrading processes that occurred before or during fossilization. Of the approximately 450 species of Salix worldwide, 113 species occur in North America. Those found in the southeastern United States, including southern Mississippi (Kartesz, 2015), Alabama, Florida, and Georgia (Godfrey, 1988), are S. caroliniana, S. eriocephala, S. floridana, S. humilis, S. interior, and S. nigra. Based on leaf dimensions, the specimen from the Hattiesburg Formation is most similar to extant S. caroliniana, S. eriocephala, S. interior, or S. nigra. Reproductive and vegetative fossils of Salix have been found in Eocene deposits of Wyoming, North Dakota, Colorado, and Utah, Oligocene and Miocene deposits of Alaska, and Miocene deposits in Oregon (Hollick, 1936; MacGinitie, 1969; Hickey, 1977; Wing, 1981; Collinson, 1992; Fields, 1996). This is the first Miocene macrofossil record from the eastern portion of the continent. A Pliocene occurrence has been recorded from the Citronelle Formation of southern Alabama (Stults and Axsmith, 2015).
Order ALISMATALES R. Br. ex Bercht. and J. Presl, 1820
Family ARACEAE de Jussieu, 1789
Genus LEMNA Linnaeus, 1753
Description. Description of the Hattiesburg Lemna fossils follows traditional terminology for the genus whereby the leaf-like organs are referred to as fronds. However, we recognize that this term does not indicate homology to the fronds of ferns. All the recovered Lemna fossils from the Hattiesburg Formation contain five fronds and measure approximately 4.5-5.0 mm at their widest point (Figure 8.1). Sunken meristematic zones are observable, but as only one meristematic zone is evident per fossil, it would appear that four of the fronds are daughters. The daughter fronds of each fossil vary in size, some apparently as large as the original mother frond. No roots are visible, but this is likely an artifact of preservation. One impression fossil is particularly informative in showing one primary vein on opposite fronds, the cellular alignment of which is comparable to modern Lemna specimens (Figure 8.2).
Remarks. The entire plant body of Lemna species is highly modified; true leaves do not exist for this group in any straightforward way, and the leaf-like structures are instead referred to as fronds (Landolt et al., 1986; Bell, 1991; Bogner, 2009). The flattened plant body of Lemna consists of leaf lamina on the outermost portion, a combination of stem and leaf lamina proximally, and a proximal sunken meristematic zone from which daughter fronds originate.
Three complete specimens and one partial specimen represent Lemna from the Hattiesburg Formation. They occur in direct association with many fossils of another floating aquatic plant, Salvinia, and many examples of a wetland Cyperus sp. The report here of relatively complete vegetative fossils of Lemna is particularly significant, as previous records are known only from relatively poorly-preserved fossils described from the early Miocene of the Czech Republic (the Bilina Mine) by Kvaček (2003). Seed records are known from the Oligocene through Quaternary of Eurasia (Dorofeev, 1963). Vegetative remains of other Araceae taxa distantly related to Lemna consist of Limnobiophyllum from the Upper Cretaceous and Lower Paleogene of North America and the Miocene Bilina Mine locality in the Czech Republic, and Cobbania from the Late Cretaceous of North America and East Asia (Stockey et al., 2007). Limnobiophyllum and Lemna coexisted during the Oligocene and Miocene, but only Lemna survives to the present day (Rothwell et al., 2004; Bogner, 2009).
Order ARECALES Bromhead, 1840
Family ARECACEAE Berchtold and Presl, 1820
Fossil palmate genus PALMACITES Brongniart, 1822 (emend. Read and Hickey, 1972)
Description. The leaf blade is fan-shaped (coryphoid) and induplicate. The leaf blade is palmate, as the costa does not extend into the leaf blade; rather, the leaf segments arise at the petiole apex and extend toward the leaf apex. All palmate leaf segments display a prominent mid-rib. Up to two orders of parallel veins on either side of the mid-rib are observable on some specimens as preservation permits. Maximum measurable petiole width is 1.6 cm. Some palmate specimens have attached petioles with well-spaced recurved spines (Figure 8.4).
Fossil costapalmate genus SABALITES Saporta, 1865 (emend. Read and Hickey, 1972)
Description. The leaf blade is fan-shaped (coryphoid) and induplicate. The leaf is costapalmate, as the costa extends for a distance into the leaf blade (Figure 8.5). The mid-rib of each leaf segment is prominent, with up to two orders of parallel veins observable on many specimens. Petioles are unarmed. Maximum measurable petiole width is 2.9 cm.
Remarks. When describing palms, it should be recognized that the term palmate is a categorization on two levels; the first level of distinction is a leaf that is not pinnate (i.e., fan-shaped or coryphoid), and the second level is a comparison regarding the extension of the petiole into the coryphoid leaf blade itself (costapalmate vs. palmate). Costapalmate and palmate fossil leaves are recognized in this collection, primarily using definitions summarized in Read and Hickey (1972). The differential diagnosis depends upon whether the petiole extends into the leaf blade as a costa, from which fused segments emerge at an acute angle, or whether the costal extension is absent and the leaf segments originate from a single, centralized area at the top of the petiole. It has long been recognized as impossible to assign extant fan palm leaves to modern genera or species without the additional characters provided by fruit and/or flowers. Correlating fossil fan palm leaves with modern taxa is even more problematical; thus, we use the fossil form genera established for fan palms. However, we realize that any fossil record of palmate leaves with spines on the petioles may require a revision of the form genus Palmacites or establishment of a new fossil form genus, as palmate leaves with spines have not been previously recorded as Palmacites.
Interestingly, a dichotomous key within a field guide to the extant palm genera of the Americas (Henderson et al., 1995) lists spines along the petiole as one of the first sub-categorizations (after discriminating between palmate and pinnate leaves) for discerning a genus from the Western Hemisphere. Within the Americas, only five genera match these criteria (i.e., palmate with armed petioles): Acoelorraphe, Serenoa, Copernicia, Brahea, and Washingtonia. From this list, Washingtonia and Brahea are associated with desert climates; additionally, Washingtonia is costapalmate and Brahea is shortly costapalmate. Serenoa is common in the southeastern United States, occupying pinelands, prairies, and sand dunes (Henderson et al., 1995), and has a palmate leaf, as there is no costal extension into the leaf blade. However, the petiolar teeth of Serenoa are smaller, finer, sharper, and more closely spaced than those of the Palmacites fossil. Acoelorraphe occupies swampy, coastal areas of the northern Caribbean, while Copernicia is found in savannahs, woodlands, and lowland areas of Cuba (Henderson et al., 1995; Dransfield et al., 2008). Both Acoelorraphe and Copernicia are palmate, and both have teeth that appear similar to those of the Hattiesburg Formation Palmacites fossil. Although proposing a direct evolutionary relationship of our fossil to a modern taxon is not yet feasible, it cannot be ruled out that the spinous Palmacites fossil is ancestral to one of these genera, especially considering that the biogeographical history of sub-family Coryphoideae has been a significant factor in its current distributional patterns (Asmussen et al., 2006; Bjorholm et al., 2006).
A recent review of coryphoid palm species, including those currently limited to the New World, indicates that most New World genera are unarmed and palmate (Dransfield et al., 2008). As mentioned above, several Hattiesburg fossil specimens of palmate leaves that do not display spines on the petioles fit the original description of Palmacites. These could either represent a Miocene precursor to some other modern palmate form or could have lost their spines during preservation and recovery. This review also indicates that a conspicuous costapalmate extension is not representative of most New World genera. Currently, an unarmed, prominently costapalmate leaf would seem to indicate a relationship only with the tribe Sabaleae, genus Sabal. Sabal has 16 species confined to the New World, including the northern Gulf of Mexico Coastal Plain (Dransfield et al., 2008).
Included within the Hattiesburg Formation collection are many palm leaf fragments (currently undeterminable to palmate or costapalmate forms) indicating large dimensions of the original leaf. These include fragments of a palmate leaf with a length of more than 29 cm. Numerous isolated spinous palm petioles also occur in the collection. Isolated palm trunk fragments are common at the site, affirmed by features of the wood anatomy. These will be described in a forthcoming paper.
Palm leaves have been previously reported from the Hattiesburg Formation. Berry (1916a) reported fossils from a location in Forrest County, and Brown (1944) reported palm fragments from a locality on Tallahala Creek. From our data, including numerous records of palm leaves and stems from the Bouie River locality along with previous reports, palms were clearly a dominant feature of this delta-associated plant community. Palm fossils have also been reported from the Catahoula Formation of Mississippi (Berry, 1916c) and from the Alum Bluff locality of Florida (Berry, 1916a; Corbett, 2004). Berry (1916a) argued that the Alum Bluff and Hattiesburg palms represented the same species (Sabalites apalachicolensis), but this is difficult to determine based on the information provided (Corbett, 2004).
Palms have a fossil record extending back to the mid-Cretaceous; however, fossils of some common extant palm morphotypes are relatively rare (Read and Hickey, 1972). For example, palm leaves with armed petioles are common among extant taxa (e.g., Licuala, Johannesteijsmannia, Pholidocarpus, Pritchardiopsis, and many more) (Dransfield et al., 2008), so it is surprising that there are few such fossil records (e.g., Pan et al., 2006), especially those showing petioles still attached to the leaf blades (Wang et al., 2015) as in the Palmacites specimens described here. Based on our observations of the Bouie River locality fossils, we suggest that palm fossils with armed petioles may be more common than currently thought, as we note that the spines can be easily broken off during the collection of the fossils. Also, the matrix does not always cleanly split along the plane on which the petiolar spines occur.
Order POALES Small, 1903
Family CYPERACEAE de Jussieu, 1789
Genus CYPERUS Linnaeus, 1753
Description. Numerous fossil spikelets, the longest approximately 2 cm, bearing florets distichously arranged on the rachilla are identified as a Cyperus sp. (Figure 8.6). Florets are approximately 3 mm in length (Figure 8.7). Achenes are biconvex or flattened with surfaces completely covered with evenly spaced, consistently small, rounded projections. Long persistent styles are approximately 1 mm in length.
Remarks. The apparently deciduous habit of the spikelets and their distichous arrangement along the rachilla indicate affinity with the genus Cyperus, with the caveat that the Cyperaceae are an exceedingly large family, notoriously difficult to identify to lower taxonomic levels even in modern material, let alone from fragmentary fossils. However, we suggest that the slightly stipitate, oblong achenes with an apiculate apex (or partly persistent style?) and pitted surface are reminiscent of Cyperus strigosus, a common species of wetland areas in the southeastern U.S., and its relatives. These fossils occur on slabs with abundant Salvinia and Lemna remains, indicating a relatively quiet, freshwater depositional setting within the Hattiesburg Formation Bouie River locality section.
It has long been realized that the eastern North American Neogene paleobotanical record is poor, especially in comparison with the better-known western record (Graham, 2010). The new Bouie River flora site described here helps to fill this significant gap in our understanding of North American paleobotany and biogeography. Prior to this study, only five taxa had been reported from the Hattiesburg Formation: Sabalites (an unarmed costapalmate palm), Taxodium, Salix, Ulmus, and Morus/Celtis (Berry, 1916a; Brown, 1944), and most of these were not figured. Here, we identify, describe, and figure 22 taxa. Some of these are identified down to section (i.e., within the genera Quercus, Juglans, and Carya), and some possibly to species (e.g., Osmunda spectabilis).
Several components from this flora are particularly significant from a taxonomic and biogeographical perspective as regional, continental, or even nearly global first records. For example, the first Gulf Coast record of the now Asian-endemic genus Sargentodoxa represents a major paleobiogeographic expansion for this genus and, considered alongside previous North American records, supports the Miocene age determination (Tiffney, 1993; Manchester et al., 2009). The Juglans fruit described here is different from those of the two extant eastern North American species (i.e., J. cinerea and J. nigra); therefore, it represents an extinct or extirpated species. We also present some of the only Neogene fern records from eastern North America, including Osmunda, the first fossil evidence of Woodwardia from within its current range, and nearly entire plants of the floating fern Salvinia. This flora also makes a key contribution to the fossil record of the floating aroid genus Lemna, for which the only previous vegetative record was from the Miocene of the Czech Republic (Kvaček, 2003). We also report the earliest macrofossil record of Cercis from eastern North America and describe specimens representing two sections of Quercus that are currently major components of the regional extant forest.
With a few exceptions mentioned above, most of the Bouie River site fossil plants are similar to taxa present in the area today (e.g., Carya, Cyperus, Morus, Platanus, Populus, Taxodium, Salix, and Sambucus), which supports the idea that a generally similar precursor to the current Gulf Coast wetland flora had already been established here by at least the middle Miocene. From a qualitative perspective, this suggests that the climate was also similar to that of today. The presence of abundant palms indicates that the climate was at least warm temperate. This issue will be addressed in a detailed quantitative analysis that will be undertaken once all of the identifications are finalized. However, like the late Pliocene Citronelle flora of Alabama, the Hattiesburg flora was relatively poor in Pinus (only pollen has been found to date from the Bouie River site) compared with this region today, which could indicate somewhat drier conditions than at present (Stults et al., 2010).
The newly recognized Bouie River site of the Hattiesburg Formation has substantially increased our knowledge of the middle Miocene floras of the Gulf of Mexico Coastal Plain. Where previously only five taxa had been recognized, 22 are now acknowledged, including “exotics” and several other taxa with no previous Miocene record from eastern North America. As additional older and younger sites are studied, the Bouie River flora will be an important window for developing a broader view of the evolution of the extant Gulf Coast flora. However, it is already apparent that the Bouie River site fossils, along with those of the Citronelle Formation (Stults and Axsmith, 2011, 2015) and other sites under study, support the concept of an early establishment of a flora generally similar to that of the modern Gulf Coast, but with several “exotic” taxa that subsequently disappeared from North America during the late Pliocene (Graham, 2010). This also suggests that climates through this interval were similar to today’s and were relatively stable.
Although many of the plant fossils are described here in considerable detail, we still consider this research preliminary overall. Additional taxa are still being found with nearly every visit to the site, indicating that much more diversity awaits discovery. In addition, some of the taxa, such as the fern Salvinia and the palms, merit additional study as dedicated papers. Palynological analysis, which is yielding evidence of additional taxa not yet represented in the megafossil record, is ongoing. Stratigraphic work is also refining the formational boundaries and ages, which will allow better temporal constraint for Gulf Coast fossil floras (Dockery and Thompson, 2016). In any case, it is becoming increasingly clear that the supposed paucity of Neogene plant megafossil sites in the southeastern USA compared to the well-known succession in western states may be at least in part due to poor exposure and historically inadequate prospecting and study.
We would especially like to thank K. Spencer for kindly granting us permission to collect specimens on her property. We are grateful to our reviewers, S. Manchester and two anonymous referees for their helpful comments. We are also grateful to the following researchers who offered helpful information at various stages of this project: J.R. Carter, J. Fisher, R. Moran, and R. Naczi. We would like to thank everyone who helped collect and curate specimens: J. Axsmith, G. Brown, A. Calidris, C.L. Hernandez, A. Holbrook, J. Lamb, the late C. “Smoot” Major, M. McWhorter, B. Morris, J. Price, B. Purdy, T. Samarakoon, and T. Sevick. Curatorial assistance was provided by G. Phillips of the Mississippi Museum of Natural Science. Aspects of this research were supported by NSF grant EAR-0642032 (to BJA) and NSF grant 1203684 (to MHA).
Angiosperm Phylogeny Group IV. 2016. An update of the Angiosperm Phylogeny Group classification for the orders and families of flowering plants: APG IV. Botanical Journal of the Linnean Society, 181:1-20. https://doi.org/10.1111/boj.12385
Aradhya, M.K., Potter, D., and Simon, C.J. 2005. Origin, evolution, and biogeography of Juglans: a phylogenetic perspective. Acta Horticulturae, 705:85-94. https://doi.org/10.17660/actahortic.2005.705.8
Arana, M.D. and Ponce, M. 2015. Osmundaceae en Argentina, Paraguay y Uruguay. Darwiniana, 3:27-37.
Asmussen, C.B., Dransfield, J., Deickmann, V., Barfod, A.S., Pintaud, J.C., and Baker, W.J. 2006. A new subfamily classification of the palm family (Arecaceae): evidence from plastid DNA phylogeny. Botanical Journal of the Linnean Society, 151:15-38. https://doi.org/10.1111/j.1095-8339.2006.00521.x
Aulenback, K.R. and LePage, B.A. 1998. Taxodium wallisii sp. nov: first occurrence of Taxodium from the Upper Cretaceous. International Journal of Plant Sciences, 159:367-390. https://doi.org/10.1086/297558
Bartling, F.G. 1830. Ordines Naturales Plantarum. Deiterich, Göttingen.
Bell, A.D. 1991. An Illustrated Guide to Flowering Plant Morphology. Oxford University Press, Oxford.
Berry, E.W. 1916a. The physical conditions and age indicated by the flora of the Alum Bluff Formation. United States Geological Survey Professional Paper, Report, 98:41-59. https://doi.org/10.5962/bhl.title.7797
Berry, E.W. 1916b. The flora of the Citronelle Formation. United States Geological Survey Professional Paper, 98:167-208.
Berry, E.W. 1916c. The flora of the Catahoula Sandstone. United States Geological Survey Professional Paper, 98-M:227-243.
Berry, E.W. 1925. A new Salvinia from the Eocene. Torreya, 25:116-118.
Berry, E.W. 1926. Pleistocene plants from North Carolina. United States Geological Survey Professional Paper, 140C:97-119.
Beuchler, W.K. 2014. Variability of venation patterns in extant genus Salix: Implication for fossil taxonomy. PaleoBios 30:89-104.
Bierhorst, D.W. 1971. Morphology of Vascular Plants. Macmillan, New York.
Bjorholm, S., Svenning, J-C., Baker, W.J., Skov, F., and Balslev, H. 2006. Historical legacies in the geographical diversity patterns of New World palm (Arecaceae) subfamilies. Botanical Journal of the Linnean Society, 151:113-125. https://doi.org/10.1111/j.1095-8339.2006.00527.x
Bogner, J. 2009. The free-floating aroids (living and fossil). Zitteliana, 48/49:113-128.
Bomfleur, B., McLoughlin, S., and Vajda, V. 2014. Fossilized nuclei and chromosomes reveal 180 million years of genomic stasis in royal ferns. Science, 343:1376-1377. https://doi.org/10.1126/science.1249884
Bouchenak-Khelladi, Y., Slingsby, J.A., Verboom, G.A., and Bond, W.J. 2014. Diversification of C4 grasses (Poaceae) does not coincide with their ecological dominance. American Journal of Botany, 101:300-307. https://doi.org/10.3732/ajb.1300439
Bromhead, E.F. 1838. An attempt to ascertain characters of the botanical alliances. The Edinburgh New Philosophical Journal, 24:408-419.
Bromhead, E.F. 1840. Remarks on the botanical system of Professor Perleb. Magazine of Natural History, and Journal of Zoology, Botany, Mineralogy, Geology, and Meteorology, 4:329-338.
Brongniart, A.T. 1822. Sur la Classification et la Distribution des Végétaux fossiles. Vol. 8. Impr. de A. Belin, Paris.
Brown, G.F. 1944. Geology and ground-water resources of the Camp Shelby area. Mississippi State Geological Survey Bulletin, 58:1-72.
Brown, R. 1821. Lardizabalaceae. Transactions of the Linnean Society of London. London, 13:212.
Camus, A. 1938. Les Chênes Monographie du Genre Quercus. Lechavalier, Paris.
Chanderbali, A.S., van der Werff, H., and Renner, S.S. 2001. Phylogeny and historical biogeography of Lauraceae: Evidence from the chloroplast and nuclear genomes. Annals of the Missouri Botanical Garden, 88:104-134. https://doi.org/10.2307/2666133
Chase, M.W. and Reveal, J.L. 2009. A phylogenetic classification of the land plants to accompany APG III. Botanical Journal of the Linnean Society, 161:122-127. https://doi.org/10.1111/j.1095-8339.2009.01002.x
Chen, D., Zhang, D., Larsen, S.S., and Vincent, M.A. 2010. Cercis, p. 5-6. In Wu, Z.Y., Raven, P.H., and Hong, D.Y. (eds.), Flora of China, vol. 10 (Fabaceae). Missouri Botanical Garden Press, St. Louis.
Chen, J.-H., Sun, H., and Yang, Y.-P. 2008. Comparative morphology of leaf epidermis of Salix (Salicaceae) with special emphasis on sections Lindleyanae and Retusae. Botanical Journal of the Linnean Society, 157:311-322. https://doi.org/10.1111/j.1095-8339.2008.00809.x
Christophel, D.C., Kerrigan, R., and Rowett, A. 1996. The use of cuticular features in the taxonomy of the Lauraceae. Annals of the Missouri Botanical Garden, 83:419-432. https://doi.org/10.2307/2399871
Cobb, B., Lowe, C., and Farnsworth, E. 2005. Peterson Field Guide to Ferns: Northeastern and Central North America (second edition). Houghton Mifflin Company, New York.
Collinson, M.E. 1991. Diversification of modern heterosporous pteridophytes, p. 119-150. In Blackmore, S. and Barnes, S.H. (eds.), Pollen and Spores: Patterns of Diversification. The Systematics Association Special, volume 44. Oxford University Press, New York.
Collinson, M.E. 1992. The early fossil history of Salicaceae: a brief review. Proceedings of the Royal Society of Edinburgh, 96B:155-167. https://doi.org/10.1017/s0269727000007521
Collinson, M.E. 1996. What use are fossil ferns? - 20 years on: With a review of the fossil of extant pteridophyte families and genera, p. 349-304. In Camus, J.M., Gibb, M., and Johns, R.J. (eds.), Pteridology in Perspective. Royal Botanical Gardens Kew, London.
Collinson, M.E. 2001. Cainozoic ferns and their distribution. Brittonia, 53:173-235. https://doi.org/10.1007/bf02812700
Collinson, M.E., Kvaček, Z., and Zastawniak, E. 2001. The aquatic plants Salvinia (Salviniales) and Limniobiophyllum (Arales) from the Late Miocene flora of Sonica (Poland). Acta Palaeobotanica, 41:253-282.
Corbett, S.L. 2004. The Middle Miocene Alum Bluff Flora, Liberty County, Florida. Unpublished MS Thesis, University of Florida, Gainesville, Florida, USA.
Cranfill, R.B. 1993. Woodwardia, p. 226-227. In Flora of North America Editorial Committee (eds.), Flora of North America North of Mexico, vol. 2 (Pteridophytes and Gymnosperms). Oxford University Press, New York and Oxford.
Crepet, W.L. and Daghlian, C.P. 1980. Castaneoid inflorescences from the Middle Eocene of Tennessee and the diagnostic value of pollen (at the subfamily level) in the Fagaceae. American Journal of Botany, 67:739-757. https://doi.org/10.1002/j.1537-2197.1980.tb07704.x
Cronquist, A., Takhtajan, A., and Zimmermann, W. 1966. On the higher taxa of Embryobionta. Taxon, 15:129-134.
Daghlian, C.P. and Crepet, W.L. 1983. Oak catkins, leaves and fruits from the Oligocene Catahoula Formation and their evolutionary significance. American Journal of Botany, 70:639-649. https://doi.org/10.2307/2443119
Datwyler, S.L. and Weiblen, G.D. 2004. On the origin of the fig: Phylogenetic relationships of Moraceae from ndhF sequences. American Journal of Botany, 91:767-777. https://doi.org/10.3732/ajb.91.5.767
de Candolle, A.C.P. 1864. Juglandaceae, p. 134-146. In de Candolle, A. (ed.), Prodromus Systematis Naturalis Regni Vegetabilis, vol. 16, part 2. Sumptibus Victoris Masson et Filii, Paris.
de Candolle, A.P. 1818. Versuch über die Arzneikräfte der Pflanzen. Heinrich Remigius Sauerländer, Paris.
de Jussieu, A.L. 1789. Genera Plantarum. Herissant and Barrois, Paris.
Dockery, III, D.T. and Thompson, D.E. 2016. The Geology of Mississippi. University Press of Mississippi, Jackson, Mississippi.
Dode, L.A. 1909a. Contribution à l'étude du genre Juglans. Bulletin de la Société Dendrologique de France, 11:140-166.
Dode, L.A. 1909b. Contribution à l'étude du genre Juglans. Bulletin de la Société Dendrologique de France, 13:165-213.
Dorofeev, P.I. 1963. The Tertiary Floras of Western Siberia. Izd-vo Akademii nauk SSSR, Moscow.
Dransfield, J., Uhl, N.W., Asmussen, C.B., Baker, W.J., Harley, M.M., and Lewis, C.E. 2008. Genera Palmarum, the Evolution and Classification of Palms. Kew Publishing, Royal Botanic Gardens, Kew.
Drinnan, A., Crane, P., Friis, E., and Pedersen, K. 1990. Lauraceous flowers from the Potomac Group (mid-Cretaceous) of Eastern North America. Botanical Gazette, 151:370-384. https://doi.org/10.1086/337838
Dumortier, B.C.J. 1829. Analyse des Familles des Plantes: avec l’indication des principaux genres qui s’y rattachent. J. Casterman, Tournay. https://doi.org/10.5962/bhl.title.48702
Eckenwalder, J.E. 1980. Foliar heteromorphism in Populus (Salicaceae), a source of confusion in the taxonomy of Tertiary leaf remains. Systematic Botany, 5:366-383. https://doi.org/10.2307/2418518
Eklund, H. and Kvaček, J. 1998. Lauraceous inflorescences and flowers from the Cenomanian of Bohemia (Czech Republic, Central Europe). International Journal of Plant Sciences, 159:668-686. https://doi.org/10.1086/297585
Ellis, B., Daly, D.C., Hickey, L.J., Mitchell, J.D., Johnson, K.R., Wilf, P., and Wing, S.L. 2009. Manual of Leaf Architecture. Comstock Publishing Associates: Ithaca, New York.
Engler, H.G.A. 1892. Syllabus der Vorlesungen über Specielle und Medicinisch-Pharmaceutische Botanik. Gebrüder Borntraeger, Berlin.
Eriksson, T. and Donoghue, M.J. 1997. Phylogenetic relationship of Sambucus and Adoxa (Adoxaceae) based on nuclear ribosomal ITS sequences and preliminary morphological data. Systematic Botany, 22:555-573. https://doi.org/10.2307/2419828
Farjon, A. 2005. A Monograph of Cupressaceae and Sciadopitys. Royal Botanical Gardens, Kew.
Fields, P.F. 1996. The Succor Creek Flora of the Middle Miocene Sucker Creek Formation, Southwestern Idaho and Eastern Oregon: Systematics and Paleoecology. Unpublished PhD Dissertation, Michigan State University, East Lansing, Michigan, USA.
Foster, V.M. 1941. Geology, p. 13-59. In Foster, V.M. and McCutcheon, T.E. (eds.), Forrest County Mineral Resources. Mississippi Geological Survey Bulletin 44. University of Mississippi, Oxford.
Frank, A.B. 1877. Polypodioideae, p. 1453. In Leunis, J. (ed.), Synopsis der Pflanzenkunde, Zweite Auflage. Hahn'sche Buchhandlung, Hanover.
Gaudichaud-Beaupré, C. 1835. Moraceae, p. 13. In Trinius, C.B. (ed.), Genera Plantarum ad Familias Suas Redacta. Impensis Academiae Imperialis Scientiarum, Petrapoli.
Godfrey, R.K. 1988. Trees, Shrubs, and Woody Vines of Northern Florida and Adjacent Georgia and Alabama. The University of Georgia Press, Athens, Georgia.
Gong, F., Karsai, I., and Liu, Y.S.C. 2010. Vitis seeds (Vitaceae) from the late Neogene Gray fossil site, northeastern Tennessee, USA. Review of Palaeobotany and Palynology, 162:71-83. https://doi.org/10.1016/j.revpalbo.2010.05.005
Gray, A. 1856. Manual of the Botany of the Northern United States, Including Virginia, Kentucky, and All East of the Mississippi: Arranged According to the Natural System. Second Edition. Ivison, Phinney & Company, Chicago.
Graham, A. 2010. A Natural History of the New World: The Ecology and Evolution of Plants in the Americas. University of Chicago Press. https://doi.org/10.7208/chicago/9780226306827.001.0001
Henderson, A., Galeano, G., and Bernal, R. 1995. Field Guide to the Palms of the Americas. Princeton University Press, Princeton, New Jersey.
Hewitson, W. 1962. Comparative morphology of the Osmundaceae. Annals of the Missouri Botanical Garden, 49:57-93. https://doi.org/10.2307/2394741
Hickey, L.J. 1977. Stratigraphy and paleobotany of the Golden Valley Formation (early Tertiary) of western North Dakota. GSA Memoirs, 150:1-29.
Hickey, L.J. and Wolfe, J.A. 1975. The bases of angiosperm phylogeny: Vegetative morphology. Annals of the Missouri Botanical Garden, 62:538-589.
Hollick, A. 1892. Palaeobotany of the Yellow Gravel at Bridgeton, N. J. Bulletin of the Torrey Botanical Club, 19:330-333. https://doi.org/10.2307/2475960
Hollick, A. 1936. The Tertiary Floras of Alaska. United States Geological Survey Professional Paper, 182:1-324
Huang, Y., Jacques, F.M.B., Liu, Y.-S., Su, T., Xing, Y., Xiao, X., and Zhou, Z. 2012. New fossil endocarps of Sambucus (Adoxaceae) from the Upper Pliocene in SW China. Review of Palaeobotany and Palynology, 171:152-163. https://doi.org/10.1016/j.revpalbo.2011.11.008
Huang, Y-J., Liu, Y-S. and Zavada, M. 2014. New fossil fruits of Carya (Juglandaceae) from the latest Miocene to earliest Pliocene in Tennessee, eastern United States. Journal of Systematics and Evolution, 52:508-520. https://doi.org/10.1111/jse.12085
Hulbert, R. and Whitmore, F.C. 2006. Late Miocene mammals from the Mauvilla Local Fauna, Alabama. Bulletin of the Florida Museum of Natural History, 46:1-28.
Imaichi, R. and Kato, M. 1992. Comparative leaf development of Osmunda lancea and O. japonica (Osmundaceae): Heterochronic origin of rheophytic stenophylly. Botanical Magazine Tokyo, 105:199-213. https://doi.org/10.1007/bf02489415
Jacono, C.C., Davern, T.R., and Center, T.D. 2001. The adventive status of Salvinia minima and S. molesta in the southern United States and the related distribution of the weevil Cyrtobagous salviniae. Castanea, 66:214-226.
Jia, H. and Manchester, S.R. 2014. Fossil leaves and fruits of Cercis L. (Leguminosae) from the Eocene of western North America. International Journal of Plant Sciences, 175:601-612. https://doi.org/10.1086/675693
Kartesz, J.T. 2015. The Biota of North America Program (BONAP). North American Plant Atlas, Chapel Hill, N.C. http://bonap.net/napa
Keller, A.S., Morris, R.H., and Detterman, R.L. 1961. Geology of the Shaviovik and Sagavanirktok Rivers Region, Alaska. United States Geological Survey Professional Paper, 303D. United States Geological Survey, Reston.
Knobloch, E. and Mai, D.H. 1986. Monographie der Fruchte und Samen in der Kreide von Mitteleurope. Rozpravy Ústredního Ústavu Geologického, 47:1-219.
Kunzmann, L., Kvaček, Z., Mai, D.H., and Walther, H. 2009. The genus Taxodium (Cupressaceae) in the Palaeogene and Neogene of Central Europe. Review of Palaeobotany and Palynology, 153:153-183. https://doi.org/10.1016/j.revpalbo.2008.08.003
Kvaček, Z. 2003. Aquatic angiosperms of the Early Miocene Most Formation of North Bohemia (Central Europe). Courier Forschungsinstitut Senckenberg, 241:255-279.
Landolt, E., Lüönd, A., and Kandeler, R. 1986. Biosystematic Investigations in the Family of Duckweeds (Lemnaceae). Geobotanischen Institut der ETH, Zürich.
Lestiboudois, T.G. 1826. Botanographie Élémentaire. Roret, Paris.
Lindley, J. 1836. Natural System of Botany: or, a Systematic View of the Organization, Natural Affinities, and Geographical Distribution of the Whole Vegetable Kingdom Together with the Uses of the Most Important Species in Medicine, the Arts, and Rural and Domestic Economy. Edition 2. Longman, Rees, Orme, Brown, Green, and Longman, London. https://doi.org/10.5962/bhl.title.130142
Linnaeus, C. 1753. Species Plantarum. Impensis Laurentii Salvius, Stockholm.
Little, E.L., Jr. 1977. Atlas of United States Trees, Volume IV: Minor Eastern Hardwoods. United States Department of Agriculture Miscellaneous Publication 1342. United States Department of Agriculture, Washington, D.C.
Loudon, J.C. 1830. Loudons Hortus Britannicus: A Catalogue of All the Plants Indigenous,Cultivated In, or Introduced to Britain. Part I. The Linnaean Arrangement: Part II. The Jussieuean Arrangement. Longman, Rees, Orme, Brown, and Green, London. https://doi.org/10.5962/bhl.title.10320
Löve, Á. and Löve, D. 1977. New combinations in ferns. Taxon, 26:324-326. https://doi.org/10.2307/1220575
Mai, Q.-W., Vikulin, S.V., Li, C.-S., and Wang, Y.-F. 2013. Details of compressions of Glyptostrobus (Cupressaceae s.l.) from the Eocene of Fushun, NE China. Journal of Systematics and Evolution, 51:601-608. https://doi.org/10.1111/jse.12035
MacGinitie, H.D. 1941. A Middle Eocene flora from the central Sierra Nevada. Carnegie Institution of Washington Publication, 534:1-178.
MacGinitie, H.D. 1953. Fossil plants of the Florissant beds, Colorado. Carnegie Institute of Washington Publication, 599:1-198.
MacGinitie, H.D. 1969. The Eocene Green River flora of northwestern Colorado and northeastern Utah. University of California Publications in Geological Sciences, 83:1-203.
Manchester, S.R. 1987. The fossil history of the Juglandaceae. Monographs in Systematic Botany, Missouri Botanical Garden, 21:1-137.
Manchester, S.R. 1994. Fruits and seeds of the Middle Eocene Nut Beds Flora, Clarno Formation, Oregon. Palaeontographica Americana, 58:1-205.
Manchester, S.R. 1999. Relationships of North American Tertiary floras. Annals of the Missouri Botanical Garden, 86:472-522. https://doi.org/10.2307/2666183
Manchester, S.R. 2001. Update on the megafossil flora of Florissant, Colorado, p. 137-161. In Evanoff, E., Gregory-Wodzicki, K.M., and Johnson, K.R. (eds.), Fossil Flora and Stratigraphy of the Florissant Formation, Colorado. Denver Museum of Nature & Science, Denver, Colorado.
Manchester, S.R., Judd, W.S., and Handley, B. 2006. Foliage and fruits of early poplars (Salicaceae): Populus from the Eocene of Utah, Colorado, and Wyoming. International Journal of Plant Sciences, 167:897-908. https://doi.org/10.1086/503918
Manchester, S.R., Chen, Z.-D., Lu, A.-M., and Uemura, K. 2009. Eastern Asian endemic seed plant genera and their paleogeographic history throughout the Northern Hemisphere. Journal of Systematics and Evolution, 47:1-42. https://doi.org/10.1111/j.1759-6831.2009.00001.x
Manning, W.E. 1978. The classification within the Juglandaceae. Annals of the Missouri Botanical Garden, 65:1058-1087. https://doi.org/10.2307/2398782
Manos, P.S. 1997. Quercus sect. Protobalanus, p. 468. In Flora of North America Editorial Committee (ed.), Flora of North America North of Mexico, vol. 3. Oxford University Press, New York and Oxford.
Marshall, H. 1785. Arbustrum Americanum: The American Grove, or, an Alphabetical Catalogue of Forest Trees and Shrubs, Natives of the American United States, Arranged According to the Linnaean System. J. Crukshank, Philadelphia. https://doi.org/10.5962/bhl.title.68506
Martinov, I.I. 1820. Tekhno-Botanicheskīĭ Slovar, na Latinskom i Rossīĭskom Iȃzykakh. (Publisher unknown), Saint Petersburg. (In Latin and Russian) https://doi.org/10.5962/bhl.title.96260
Maslova, N.P. 1996. The genus Platanus L. (Platanaceae Dumortier) in the Palaeocene of Kamchatka. Paleontological Journal, 31:208-214.
McCartan, L., Tiffney, B.H., Wolfe, J.A., Ager, T.A., Wing, S.L., Sirkin, L.A., Ward, L.W., and Brooks, J. 1990. Late Tertiary assemblage from upland gravel deposits of the southern Maryland Coastal Plain. Geology, 18:311-314. https://doi.org/10.1130/0091-7613(1990)018%3C0311:ltfafu%3E2.3.co;2
Metzgar, J.S., Skog, J.E., Zimmer, E.A., and Pryer, K.M. 2008. The paraphyly of Osmunda is confirmed by phylogenetic analysis of seven plastid loci. Systematic Botany, 33:31-36. https://doi.org/10.1600/036364408783887528
Meyer, E. 1839. Preussens Pflanzengattungen. Gräfe und Unzer, Königsberg.
Miller, C.N. 1967. Evolution of the fern genus Osmunda. Contributions from the Museum of Paleontology, University of Michigan, 21:139-203.
Miller, P. 1754. The Gardeners Dictionary. Rivington, London. https://doi.org/10.5962/bhl.title.20764
Mirbel, C.F.B. 1815. Ėlements de Physiologie Végétale et de Botanique. Chez Magimal, Paris. https://doi.org/10.5962/bhl.title.110802
Nepal, M.P. and Ferguson, C.J. 2012. Phylogenetics of Morus (Moraceae) inferred from ITS and trnL-trnF sequence data. Systematic Botany, 37:442-450. https://doi.org/10.1600/036364412X635485
Newman, E. 1844. A History of British Ferns. J. van Voorst, London. https://doi.org/10.1080/037454809495214
Nixon, K.C. 1993. Infrageneric class of Quercus (Fagaceae) and typification of sectional names. Annals of Forest Science, 50:255-345. https://doi.org/10.1051/forest:19930701
Nixon, K.C. 1997. Fagaceae, p. 436-506. In Flora of North America Editorial Committee (ed.), Flora of North America North of Mexico, Vol. 3. Oxford University Press. New York and Oxford.
Nixon, K.C. and Muller, C.H. 1997. Quercus sect. Quercus, p. 471-506. In Flora of North America Editorial Committee (ed.), Flora of North America North of Mexico, Vol. 3. Oxford University Press. New York and Oxford.
Nuttall, T. 1818. The Genera of North American Plants and a Catalogue of the Species, to the Year 1817. G. Heartt. Philadelphia. https://doi.org/10.5962/bhl.title.24647
Pan, A.D., Jacobs, B.F., Dransfield, J., and Baker, W.J. 2006. The fossil history of palms (Arecaceae) in Africa and new records from the Late Oligocene (28-27 Mya) of northwestern Ethiopia. Botanical Journal of the Linnean Society, 151:69-81. https://doi.org/10.1111/j.1095-8339.2006.00523.x
Phipps, C.J., Taylor, T.N., Taylor, E.L., Cuneo, N.R., Boucher, L.D., and Yao, X. 1998. Osmunda (Osmundaceae) from the Triassic of Antarctica: an example of evolutionary stasis. American Journal of Botany, 85:888-895. https://doi.org/10.2307/2446424
Pigg, K.B. and Rothwell, G.W. 2001. Anatomically preserved Woodwardia virginica (Blechnaceae) and a new filicalean fern from the middle Miocene Yakima Canyon Flora of Central Washington, USA. American Journal of Botany, 88:777-787. https://doi.org/10.2307/2657030
Pigg, K.B., Devore, M.L., and Wehr, W.C. 2006. Filicalean ferns from the Tertiary of western North America: Osmunda L. (Osmundaceae: Pteridophyta) and onocleoid forms (Filicales: Pteridophyta). Fern Gazette, 17:279-286.
Prothero, D. and Manning, E. 1987. Miocene rhinoceroses from the Texas Gulf Coastal Plain. Journal of Paleontology, 61:388-423. https://doi.org/10.1017/s0022336000028559
Read, R.W. and Hickey, L.J. 1972. A revised classification of fossil palm and palm-like leaves. Taxon, 21:129-137. https://doi.org/10.2307/1219237
Rehder, A. and Wilson, E.H. 1913. Plantae Wilsonianae an Enumeration of the Woody Plants Collected in China for the Arnold Arboretum of Harvard University, MA. The University Press, Cambridge, Massachusetts. https://doi.org/10.5962/bhl.title.33536
Reinke J. 1876. Beiträge zur Anatomie der an Laubblättern, besonders an den Zähnen derselbem vorkommenden Secretions-organe. Jahrbuch für Wissenschaftliche Botanik, 10:119-178.
Reveal, J.L. 1992. Validation of subclass and superordinal names in Magnoliophyta. Novon, 2:235-237.
Reveal, J.L. 2012. An outline of a classification scheme for extant flowering plants. Phytoneuron, 37:1-221.
Richard, L.C.M. 1810. Note sur les Plantes dites Conifères. Annales du Museum National d’Histoire Naturelle, 16:296-299.
Rothwell, G.W., Van Atta, M.R., Ballard, Jr., H.E., and Stockey, R.A. 2004. Molecular and phylogenetic relationships among Lemnaceae and Araceae using the chloroplast trnL, trnF intergenic spacer. Molecular Phylogenetics and Evolution, 30:378-385. https://doi.org/10.1016/S1055-7903(03)00205-7
Saporta, G. 1865. Études sur la vegetation du sud-est de la France a l’époque tertiare. Annales des Sciences Naturelles (Botanique), 5:5-152.
Séguier, J.F. 1754. Planta Veronenses, vol. 3. (Publisher unknown), Veronae.
Small, J.K. 1903. Flora of the Southeastern United States. Hafner, New York.
Smiley, C.J. and Rember, W.C. 1985. Composition of the Miocene Clarkia Flora, p. 175-184. In Smiley, C.J. (ed.), Late Cenozoic History of the Pacific Northwest. Pacific Division of the American Association for the Advancement of Science, San Francisco.
Smith, A.R., Pryer, K.M., Schuettpelz, E., Korall, P., Schneider, H., and Wolf, P.G. 2006. A classification for extant ferns. Taxon, 55:705-731. https://doi.org/10.2307/25065646
Smith, H.V. 1938. Some new and interesting Late Tertiary plants from Sucker Creek, Idaho-Oregon boundary. Bulletin of the Torrey Botanical Club, 65:557-564. https://doi.org/10.2307/2480794
Smith, J.E. 1793. Tentamen botanicum de filicum generibus dorsiferarum. Mémoires del’Académie Royal des Sciences de Turin, 5:401-422.
Stockey, R.A., Rothwell, G.W., and Johnson, K.R. 2007. Cobbania corrugata gen. et. comb. nov. (Araceae): A floating aquatic monocot from the Upper Cretaceous of Western North America. American Journal of Botany, 94:609-624. https://doi.org/10.3732/ajb.94.4.609
Stults, D. 2003. Paleoecological Analysis of the Citronelle Formation from the Paleoflora of Three Sites in Southwest Alabama. Unpublished MS Thesis, University of South Alabama, Mobile, Alabama, USA.
Stults, D.Z. and Axsmith, B.J. 2011. Filling the gaps in the Neogene plant fossil record of eastern North America: New data from the Pliocene of Alabama. Review of Palaeobotany and Palynology,167:1-9. https://doi.org/10.1016/j.revpalbo.2011.07.004
Stults, D.Z. and Axsmith, B.J. 2015. New plant fossil records and paleoclimate analysis of the late Pliocene Citronelle Formation flora, U.S. Gulf Coast. Palaeontologia Electronica, 18.3.47A:1-35. https://doi.org/10.26879/550
Stults, D.Z., Axsmith, B.J., and Liu, Y.S.C. 2010. Evidence of white pine (Pinus subgenus Strobus) dominance from the Pliocene northeastern Gulf of Mexico Coastal Plain. Palaeogeography, Palaeoclimatology, Palaeoecology, 287:95-100. https://doi.org/10.1016/j.palaeo.2010.01.021
Stults, D.Z., Wagner-Cremer, F., and Axsmith, B.J. 2011. Atmospheric paleo-CO2 estimates based on Taxodium distichum (Cupressaceae) fossils from the Miocene and Pliocene of Eastern North America. Palaeogeography, Palaeoclimatology, and Palaeoecology, 309:327-332. https://doi.org/10.1016/j.palaeo.2011.06.017
Thompson, R.S., Anderson, K.H., and Bartlein, P.J. 1999. Atlas of relation between climate parameters and distributions of important trees and shrubs in North America—hardwoods. United States Geological Survey Professional Paper, 1650-B:1-423.
Tidwell, W.D. and Ash, S.R. 1994. A review of selected Triassic to Early Cretaceous ferns. Journal of Plant Research, 107:417-442. https://doi.org/10.1007/bf02344066
Tiffney, B. 1985. The Eocene North Atlantic land bridge: Its importance in Tertiary and modern phytogeography of the Northern Hemisphere. Journal of the Arnold Arboretum, 66:3-94. https://doi.org/10.5962/bhl.part.13183
Tiffney, B. 1993. Fruits and seeds of the Tertiary Brandon Lignite. VII. Sargentodoxa (Sargentodoxaceae). American Journal of Botany, 80:517-523. https://doi.org/10.1002/j.1537-2197.1993.tb13834.x
Tiffney, B. 1994. Re-evaluation of the age of the Brandon Lignite (Vermont, USA) based on plant megafossils. Review of Palaeobotany and Palynology, 82:299-315. https://doi.org/10.1016/0034-6667(94)90081-7
Underwood, L.M. 1903. Notes on southern ferns. Torreya, 3:17.
Vavrek, M.J., Stockey, R.A., and Rothwell, G.W. 2006. Osmunda vancouverensis sp. nov. (Osmundaceae), permineralized fertile frond segments from the Lower Cretaceous of British Columbia, Canada. International Journal of Plant Sciences, 167:631-637. https://doi.org/10.1086/500994
von Berchtold, F. and Presl, J.S. 1820. O Prirozenosti Rostlin. Krala Wiljma Endersa, Prague.
Wang, L., Xu, Q.-Q., and Jin, J.-H. 2014. A reconstruction of the fossil Salvinia from the Eocene of Hainan Island, South China. Review of Palaeobotany and Palynology, 203:12-21. https://doi.org/10.1016/j.revpalbo.2013.12.005
Wang, Q.J., Ma, F.J., Dong, J.L., Yang, Y., Jin, P.H., and Sun, B.N. 2015. Coryphoid palms from the Oligocene of China and their biogeographical implications. Comptes Rendus Palevol, 14:263-279. https://doi.org/10.1016/j.crpv.2015.03.005
Watson, L. and Dallwitz, M.J. 1992+. The families of flowering plants: Descriptions, illustrations, identification, and information retrieval. www.delta-intkey.com (accessed June 1, 2016).
Weber, B. 1978. Contribution à l’étude morphologique des feuilles de Salix L. Bulletin de la Société Botanique Suisse, 88:72-119.
Wilkinson, H.P. 2007. Leaf teeth in certain Salicaceae and ‘Flacourtiaceae.’ Botanical Journal of the Linnean Society, 155:241-256. https://doi.org/10.1111/j.1095-8339.2007.00695.x
Willdenow, C.L. 1810. Species Plantarum, Editio Quarta. Impensis G.C. Nauk, Berlin.
Wing, S.L. 1981. A Study of the Paleoecology and Paleobotany in the Willwood Formation (early Eocene), Wyoming. Unpublished PhD Dissertation. Yale University, New Haven, Connecticut, USA
Wolfe, J.A. 1973. Fossil forms of the Amentiferae. Brittonia, 25:334-355. https://doi.org/10.2307/2805639
Wolfe, J.A. 1975. Some aspects of plant geography of the Northern Hemisphere during the Late Cretaceous and Tertiary. Annals of the Missouri Botanical Garden, 62:264-279. https://doi.org/10.2307/2395198
Zhang, J.-B., Li, R-Q., Xiang, X.-G., Manchester, S.R., Lin, L., Wang, W., Wen, J., and Chen, Z.-D. 2013. Integrated fossil and molecular data reveal the biogeographic diversification of the Eastern Asian-Eastern North American disjunct hickory genus (Carya Nutt.). PloS ONE 8(7):e70449. https://doi.org/10.1371/journal.pone.0070449
Zheng, W. 1983. On the significance of Pacific intercontinental discontinuity. Annals of the Missouri Botanical Garden, 70:577-590. https://doi.org/10.2307/2398977 | 1 | 49 |
A monitor PCB is a circuit board that is used in computers and monitors. It is responsible for displaying images on the screen. The monitor PCB contains many components, including transistors, resistors, capacitors and diodes.
PCBTok’s Monitor PCBs are designed with the most advanced technology available to ensure that your monitor displays clear images without any distracting artifacts or blurring effects.
Tested and High-Quality Monitor PCB from PCBTok
At PCBTok, we know all about the importance of quality. That’s why we’re proud to offer our customers a high-quality monitor PCB that is tested by our experts before it leaves our warehouse. Our customer service team will work with you to make sure you get exactly what you need!
We specialize in the production of high-quality monitor PCBs. With advanced production lines and skilled workers, we can provide you with the most effective and efficient services.
PCBTok has been focusing on improving its Monitor PCBs for many years and has established a good reputation among clients around the world. We are confident that we can supply you with top-quality products at competitive prices!
Monitor PCB By Types
Used in the production of televisions, computers, and radios. It is an electronic device that provides an image on the screen when electricity passes through it.
Designed to be used in flat panel monitors. The purpose of this board is to connect all of the components together so that they can communicate with each other.
A surface mount electronic circuit board, which is used to connect the touch screen with the LCD panel, and it also provides power supply for both of them.
One of the most important parts of your LED monitor. It contains all the components that make your LED monitor work, including the lighting system and the circuit board.
Monitor PCB that is made up of a series of organic light-emitting diodes (OLEDs), which produce light when an electric current passes through them, combining the display and its drive electronics.
A type of flat panel display that uses digital light processing (DLP) technology to create an image. The DLP chip projects light onto the screen, which then reflects it back to your eyes.
Monitor PCB Introduction
Monitor PCB is a board that is used for the monitor. It is made up of a number of electronic components that are used to control, adjust and display data from a computer or other device. Most of these components are soldered onto the PCB to form a circuit and help in the functioning of the monitor. The main purpose of this board is to convert digital signals into analog signals and vice versa, so that they can be read by our eyes or ears.
The basic components of this board include resistors, capacitors, diodes and transistors which help in regulating voltage levels on different circuits throughout the monitor. Some monitors also have integrated circuits (ICs), wires and other components attached to them. This allows them to perform multiple functions such as detecting input from buttons on your keyboard or mouse as well as receiving data from an external source like your smartphone or tablet via Bluetooth connectivity.
Step-By-Step Process of Monitor PCB Manufacturing
Monitor PCB Manufacturing is a process that involves the creation of a printed circuit board (PCB) for monitors. This process includes the following steps:
·The first step is to create a design for the monitor PCB. This can be done using CAD software.
·Once the design has been made, it is transferred onto copper-clad plates and the unwanted copper is etched away with an etching solution. The etching is done using an etching machine or by hand with a chemical bath.
·Once the design has been etched into the copper plates, it is time to lay out the components that will go on the monitor PCB. This can be done manually or with automated tools such as pick-and-place machines or computer-aided manufacturing systems.
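As a minimal illustration of the automated placement step just described, the short sketch below parses a small pick-and-place style CSV and prints where each part would go. The column names, component list, and coordinates are assumptions for illustration only, not an actual PCBTok or CAM file format.

```python
# Illustrative sketch of machine-readable placement data for a pick-and-place step.
# All values and column names below are made-up examples.

import csv
import io

placement_csv = """designator,footprint,x_mm,y_mm,rotation_deg,side
R1,0603,12.50,8.20,0,top
C3,0805,14.10,8.20,90,top
U2,SOIC-8,25.00,15.75,180,top
"""

# A placement machine (or CAM software) would read each row, move the head to
# (x_mm, y_mm), rotate the part, and place it on the named side of the board.
for row in csv.DictReader(io.StringIO(placement_csv)):
    print(f"Place {row['designator']} ({row['footprint']}) at "
          f"({row['x_mm']} mm, {row['y_mm']} mm), rotated {row['rotation_deg']} deg, "
          f"{row['side']} side")
```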
LED and LCD Monitor Features and Differences
LCD technology uses liquid crystals to create images on screen. These crystals can be manipulated by applying an electric current, which in turn causes them to change their structure and reflect light in different ways. It’s this process that allows for different colors and brightness levels.
LED technology works differently, using LED backlighting rather than the fluorescent backlighting found in conventional LCD monitors. Instead of light coming from individual pixels, the LED backlight is essentially one large block that lights up when electricity passes through it.
PCBTok | Your Reliable and Dependable Monitor PCB Manufacturer
PCBTok is your reliable and dependable monitor PCB manufacturer. We offer quality, competitively priced PC boards designed to meet the needs of all of our clients.
We understand that you require a high-quality product to ensure you are getting the best value for your money. By offering a wide range of services including PCB design, production, assembly, testing and packaging, we ensure the best possible quality is maintained throughout the entire process.
We have been manufacturing monitor PCB for over 10 years and have built up a large customer base who continue to come back because they know that we will always deliver on time without fail. Our team are experts in their field and are always happy to help with any questions you may have regarding our products or services.
Monitor PCB Fabrication
A monitor PCB is an electronic component that is used in a monitor, the device itself also being known as a display panel or video display unit (VDU). A monitor PCB contains several components, and each component has its own function.
There are a number of components found in a monitor PCB, including resistors, capacitors, inductors, diodes and transistors. Resistors are used to control current flow and stabilize voltage levels.
Capacitors are used to temporarily store electricity. Inductors help regulate the electrical flow in circuits by creating magnetic fields. Diodes allow for current flow in one direction only, while transistors amplify or switch current signals.
The connectors used to connect the monitor PCB to other components in the system are:
• Power supply connector: This connector is a 24-pin connector that is used to supply power to the monitor. It supplies +5V, +12V, and -12V, as well as ground.
• Video connector: This 15-pin D-subminiature connector is used for video signals, such as RGB signals and sync signals.
• Audio connector: This 4-pin mini-DIN connector is used for audio signals.
Monitor PCB Production Details As Follows
- Production Facility
- PCB Capabilities
- Shipping Method
- Payment Methods
- Send Us Inquiry
|1||Layer Count||1-20 layers||22-40 layer|
|2||Base Material||KB、Shengyi、ShengyiSF305、FR408、FR408HR、IS410、FR406、GETEK、370HR、IT180A、Rogers4350、Rogers400、PTFE Laminates(Rogers series、Taconic series、Arlon series、Nelco series)、Rogers/Taconic/Arlon/Nelco laminate with FR-4 material(including partial Ro4350B hybrid laminating with FR-4)|
|3||PCB Type||Rigid PCB/FPC/Flex-Rigid||Backplane、HDI、High multi-layer blind&buried PCB、Embedded Capacitance、Embedded resistance board 、Heavy copper power PCB、Backdrill.|
|4||Lamination type||Blind&buried via type||Mechanical blind&burried vias with less than 3 times laminating||Mechanical blind&burried vias with less than 2 times laminating|
|HDI PCB||1+n+1,1+1+n+1+1,2+n+2,3+n+3(n buried vias≤0.3mm),Laser blind via can be filling plating||1+n+1,1+1+n+1+1,2+n+2,3+n+3(n buried vias≤0.3mm),Laser blind via can be filling plating|
|5||Finished Board Thickness||0.2-3.2mm||3.4-7mm|
|6||Minimum Core Thickness||0.15mm(6mil)||0.1mm(4mil)|
|7||Copper Thickness||Min. 1/2 OZ, Max. 4 OZ||Min. 1/3 OZ, Max. 10 OZ|
|9||Maximum Board Size||500*600mm(19”*23”)||1100*500mm(43”*19”)|
|10||Hole||Min laser drilling size||4mil||4mil|
|Max laser drilling size||6mil||6mil|
|Max aspect ratio for Hole plate||10:1(hole diameter>8mil)||20:1|
|Max aspect ratio for laser via filling plating||0.9:1(Depth included copper thickness)||1:1(Depth included copper thickness)|
|Max aspect ratio for mechanical depth-control drilling board(Blind hole drilling depth/blind hole size)||0.8:1(drilling tool size≥10mil)||1.3:1(drilling tool size≤8mil),1.15:1(drilling tool size≥10mil)|
|Min. depth of Mechanical depth-control(back drill)||8mil||8mil|
|Min gap between hole wall and conductor (None blind and buried via PCB)|
|Min gap between hole wall conductor (Blind and buried via PCB)||8mil(1 times laminating),10mil(2 times laminating), 12mil(3 times laminating)||7mil(1 time laminating), 8mil(2 times laminating), 9mil(3 times laminating)|
|Min gab between hole wall conductor(Laser blind hole buried via PCB)||7mil(1+N+1);8mil(1+1+N+1+1 or 2+N+2)||7mil(1+N+1);8mil(1+1+N+1+1 or 2+N+2)|
|Min space between laser holes and conductor||6mil||5mil|
|Min space between hole walls in different net||10mil||10mil|
|Min space between hole walls in the same net||6mil(thru-hole& laser hole PCB),10mil(Mechanical blind&buried PCB)||6mil(thru-hole& laser hole PCB),10mil(Mechanical blind&buried PCB)|
|Min space bwteen NPTH hole walls||8mil||8mil|
|Hole location tolerance||±2mil||±2mil|
|Pressfit holes tolerance||±2mil||±2mil|
|Countersink depth tolerance||±6mil||±6mil|
|Countersink hole size tolerance||±6mil||±6mil|
|11||Pad(ring)||Min Pad size for laser drillings||10mil(for 4mil laser via),11mil(for 5mil laser via)||10mil(for 4mil laser via),11mil(for 5mil laser via)|
|Min Pad size for mechanical drillings||16mil(8mil drillings)||16mil(8mil drillings)|
|Min BGA pad size||HASL:10mil, LF HASL:12mil, other surface technics are 10mil(7mil is ok for flash gold)||HASL:10mil, LF HASL:12mil, other surface technics are 7mi|
|Pad size tolerance(BGA)||±1.5mil(pad size≤10mil);±15%(pad size>10mil)||±1.2mil(pad size≤12mil);±10%(pad size≥12mil)|
|1OZ: 3/4mil||1OZ: 3/4mil|
|2OZ: 4/5.5mil||2OZ: 4/5mil|
|3OZ: 5/8mil||3OZ: 5/8mil|
|4OZ: 6/11mil||4OZ: 6/11mil|
|5OZ: 7/14mil||5OZ: 7/13.5mil|
|6OZ: 8/16mil||6OZ: 8/15mil|
|7OZ: 9/19mil||7OZ: 9/18mil|
|8OZ: 10/22mil||8OZ: 10/21mil|
|9OZ: 11/25mil||9OZ: 11/24mil|
|10OZ: 12/28mil||10OZ: 12/27mil|
|1OZ: 4.8/5mil||1OZ: 4.5/5mil|
|1.43OZ(negative ):5/8||1.43OZ(negative ):5/7|
|2OZ: 6/8mil||2OZ: 6/7mil|
|3OZ: 6/12mil||3OZ: 6/10mil|
|4OZ: 7.5/15mil||4OZ: 7.5/13mil|
|5OZ: 9/18mil||5OZ: 9/16mil|
|6OZ: 10/21mil||6OZ: 10/19mil|
|7OZ: 11/25mil||7OZ: 11/22mil|
|8OZ: 12/29mil||8OZ: 12/26mil|
|9OZ: 13/33mil||9OZ: 13/30mil|
|10OZ: 14/38mil||10OZ: 14/35mil|
|13||Dimension Tolerance||Hole Position||0.08 ( 3 mils)|
|Conductor Width(W)||20% Deviation of Master||1mil Deviation of Master|
|Outline Dimension||0.15 mm ( 6 mils)||0.10 mm ( 4 mils)|
|Conductors & Outline ( C – O )||0.15 mm ( 6 mils)||0.13 mm ( 5 mils)|
|Warp and Twist||0.75%||0.50%|
|14||Solder Mask||Max drilling tool size for via filled with Soldermask (single side)||35.4mil||35.4mil|
|Soldermask color||Green, Black, Blue, Red, White, Yellow,Purple matte/glossy|
|Silkscreen color||White, Black,Blue,Yellow|
|Max hole size for via filled with Blue glue aluminium||197mil||197mil|
|Finish hole size for via filled with resin||4-25.4mil||4-25.4mil|
|Max aspect ratio for via filled with resin board||8:1||12:1|
|Min width of soldermask bridge||Base copper≤0.5 oz、Immersion Tin: 7.5mil(Black), 5.5mil(Other color) , 8mil( on copper area)|
|Base copper≤0.5 oz、Finish treatment not Immersion Tin : 5.5 mil(Black,extremity 5mil), 4mil(Other color,extremity 3.5mil) , 8mil( on copper area)|
|Base coppe 1 oz: 4mil(Green), 5mil(Other color) , 5.5mil(Black,extremity 5mil),8mil( on copper area)|
|Base copper 1.43 oz: 4mil(Green), 5.5mil(Other color) , 6mil(Black), 8mil( on copper area)|
|Base copper 2 oz-4 oz: 6mil, 8mil( on copper area)|
|15||Surface Treatment||Lead free||Flash gold(electroplated gold)、ENIG、Hard gold、Flash gold、HASL Lead free、OSP、ENEPIG、Soft gold、Immersion silver、Immersion Tin、ENIG+OSP,ENIG+Gold finger,Flash gold(electroplated gold)+Gold finger,Immersion silver+Gold finger,Immersion Tin+Gold finge|
|Aspect ratio||10:1(HASL Lead free、HASL Lead、ENIG、Immersion Tin、Immersion silver、ENEPIG);8:1(OSP)|
|Max finished size||HASL Lead 22″*39″;HASL Lead free 22″*24″;Flash gold 24″*24″;Hard gold 24″*28″;ENIG 21″*27″;Flash gold(electroplated gold) 21″*48″;Immersion Tin 16″*21″;Immersion silver 16″*18″;OSP 24″*40″;|
|Min finished size||HASL Lead 5″*6″;HASL Lead free 10″*10″;Flash gold 12″*16″;Hard gold 3″*3″;Flash gold(electroplated gold) 8″*10″;Immersion Tin 2″*4″;Immersion silver 2″*4″;OSP 2″*2″;|
|PCB thickness||HASL Lead 0.6-4.0mm;HASL Lead free 0.6-4.0mm;Flash gold 1.0-3.2mm;Hard gold 0.1-5.0mm;ENIG 0.2-7.0mm;Flash gold(electroplated gold) 0.15-5.0mm;Immersion Tin 0.4-5.0mm;Immersion silver 0.4-5.0mm;OSP 0.2-6.0mm|
|Max high to gold finger||1.5inch|
|Min space between gold fingers||6mil|
|Min block space to gold fingers||7.5mil|
|16||V-Cutting||Panel Size||500mm X 622 mm ( max. )||500mm X 800 mm ( max. )|
|Board Thickness||0.50 mm (20mil) min.||0.30 mm (12mil) min.|
|Remain Thickness||1/3 board thickness||0.40 +/-0.10mm( 16+/-4 mil )|
|Tolerance||±0.13 mm(5mil)||±0.1 mm(4mil)|
|Groove Width||0.50 mm (20mil) max.||0.38 mm (15mil) max.|
|Groove to Groove||20 mm (787mil) min.||10 mm (394mil) min.|
|Groove to Trace||0.45 mm(18mil) min.||0.38 mm(15mil) min.|
|17||Slot||Slot size tol.L≥2W||PTH Slot: L:+/-0.13(5mil) W:+/-0.08(3mil)||PTH Slot: L:+/-0.10(4mil) W:+/-0.05(2mil)|
|NPTH slot(mm) L+/-0.10 (4mil) W:+/-0.05(2mil)||NPTH slot(mm) L:+/-0.08 (3mil) W:+/-0.05(2mil)|
|18||Min Spacing from hole edge to hole edge||0.30-1.60 (Hole Diameter)||0.15mm(6mil)||0.10mm(4mil)|
|1.61-6.50 (Hole Diameter)||0.15mm(6mil)||0.13mm(5mil)|
|19||Min spacing between hole edge to circuitry pattern||PTH hole: 0.20mm(8mil)||PTH hole: 0.13mm(5mil)|
|NPTH hole: 0.18mm(7mil)||NPTH hole: 0.10mm(4mil)|
|20||Image transfer Registration tol||Circuit pattern vs.index hole||0.10(4mil)||0.08(3mil)|
|Circuit pattern vs.2nd drill hole||0.15(6mil)||0.10(4mil)|
|21||Registration tolerance of front/back image||0.075mm(3mil)||0.05mm(2mil)|
|22||Multilayers||Layer-layer misregistration||4layers:||0.15mm(6mil)max.||4layers:||0.10mm(4mil) max.|
|Min. Spacing from Hole Edge to Innerlayer Pattern||0.225mm(9mil)||0.15mm(6mil)|
|Min.Spacing from Outline to Innerlayer Pattern||0.38mm(15mil)||0.225mm(9mil)|
|Min. board thickness||4layers:0.30mm(12mil)||4layers:0.20mm(8mil)|
|Board thickness tolerance||4layers:+/-0.13mm(5mil)||4layers:+/-0.10mm(4mil)|
|8-12 layers:+/-0.20mm (8mil)||8-12 layers:+/-0.15mm (6mil)|
|26||Impedance control||±5ohm(<50ohm), ±10%(≥50ohm)|
PCBTok offers flexible shipping methods for our customers; you may choose from one of the methods below.
DHL offers international express services in over 220 countries.
DHL partners with PCBTok and offers very competitive rates to customers of PCBTok.
It normally takes 3-7 business days for the package to be delivered around the world.
UPS is the world’s largest package delivery company and one of the leading global providers of specialized transportation and logistics services.
It normally takes 3-7 business days to deliver a package to most of the addresses in the world.
TNT has 56,000 employees in 61 countries.
It takes 4-9 business days to deliver the packages to the hands of our customers.
FedEx offers delivery solutions for customers around the world.
It takes 4-7 business days to deliver the packages to the hands of our customers.
5. Air, Sea/Air, and Sea
If your order is of large volume with PCBTok, you can also choose to ship via air, sea/air combined, and sea when necessary.
Please contact your sales representative for shipping solutions.
Note: if you need others, please contact your sales representative for shipping solutions.
You can use the following payment methods:
Telegraphic Transfer(TT): A telegraphic transfer (TT) is an electronic method of transferring funds utilized primarily for overseas wire transactions. It’s very convenient to transfer.
Bank/Wire transfer: To pay by wire transfer using your bank account, you need to visit your nearest bank branch with the wire transfer information. Your payment will be completed 3-5 business days after you have finished the money transfer.
PayPal: Pay easily, quickly, and securely with PayPal. You can also pay with many other credit and debit cards via PayPal.
Credit Card: You can pay with a credit card: Visa, Visa Electron, MasterCard, Maestro.
“Over the past few years, I have worked with PCBTok on several PCB designs. During that time, I have found them to be a professional, reliable partner. They delivered high quality products and provided excellent support whenever any problems arose. They were also very helpful during the design process, providing valuable input based on their experience as well as helping me refine my designs.”Guy Georges, Electronics Entrepreneur from Vitry-le-François, France
“I have been ordering PCBs from PCBTok for almost 2 years now. I have never had a single issue with the quality of their PCBs, or the speed at which they deliver them to me. I have also been impressed by their customer service, who have always been quick to respond to any of my questions or concerns. I can honestly say that I am thoroughly impressed with the product quality and service given. They are one of the best suppliers that I’ve dealt with thus far, and I will be ordering more PCBs from them soon.”Alexander Pichushkin, Electronics Expert from Russia
“I’m an entrepreneur and I use PCBTok for my business. They have provided me with the greatest and fantastic service I deserve, and they have promised to deliver the best product without sacrificing its quality. They are straightforward in providing their people with the best service and products they can offer, as demonstrated by their website; they can stand and live with it. They are always striving to provide me with the best and are considerate of my needs. I strongly advise you to try PCBTok”Vasile Tcaciuc, Electronics Engineer and Entrepreneur from Delaware, U.S.A
When you’re working with a monitor PCB, you have to be aware of the mutual inductance between the transformer and the inductor.
Mutual inductance is a measure of how strongly two nearby coils, such as a transformer winding and an inductor, are magnetically coupled. In other words, it describes how effectively a changing current in one part induces a voltage in the other, even though there is no direct electrical connection between them.
When one coil carries an alternating current, a voltage is induced in the other coil. This induced voltage is proportional to the rate of change of the current, with the mutual inductance as the constant of proportionality, and the coupling falls off rapidly as the two coils are moved farther apart. A transformer uses this principle to step electrical energy up or down from one level to another: the mutual inductance between the primary and secondary windings is used to control power flow by regulating the current in one winding with respect to the voltage in the other.
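To make the relationship concrete, here is a minimal Python sketch of the underlying rule that the induced voltage equals the mutual inductance times the rate of change of current (V = M * dI/dt). The numbers are illustrative assumptions, not values from any particular monitor design.

```python
def induced_voltage(mutual_inductance_h: float, di_dt_a_per_s: float) -> float:
    """Magnitude of the voltage induced in a secondary coil:
    V = M * dI/dt, where M is the mutual inductance in henries."""
    return mutual_inductance_h * di_dt_a_per_s

# Illustrative numbers: with M = 10 microhenries of coupling, a primary current
# that changes by 2 A over 1 microsecond induces about 20 V in the secondary.
M = 10e-6          # mutual inductance, H (assumed for the example)
di_dt = 2 / 1e-6   # rate of change of primary current, A/s
print(f"Induced voltage: {induced_voltage(M, di_dt):.1f} V")
```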
The difference between analog and digital circuitry on a monitor PCB is that analog circuitry works with a continuous range of voltages, while digital circuitry represents information using only two values (on or off).
Analog circuitry takes an electrical signal and converts it into another form whose value is directly related to the original signal. For example, you could take the analog signal from a microphone, convert it into digital data, and then amplify or filter that data to make it sound better.
Digital circuitry, on the other hand, works by taking binary values (ones and zeros) and processing them. An example would be an LED light that you want to turn on with a single button press instead of having to hold down multiple buttons at once.
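One way to picture the distinction is with a short sketch that treats the same analog voltage two ways: first as a single on/off logic level, then as one of a fixed number of discrete steps, the way an analog-to-digital converter would. The threshold, full-scale voltage, and resolution below are arbitrary assumptions chosen for illustration.

```python
def to_logic_level(voltage: float, threshold: float = 1.65) -> int:
    """Interpret an analog voltage as a single digital bit (on/off)."""
    return 1 if voltage >= threshold else 0

def to_adc_code(voltage: float, full_scale: float = 3.3, bits: int = 8) -> int:
    """Quantize an analog voltage into one of 2**bits discrete ADC codes."""
    levels = 2 ** bits
    code = int(voltage / full_scale * (levels - 1))
    return max(0, min(levels - 1, code))

analog_reading = 2.47  # volts, an arbitrary example value
print(to_logic_level(analog_reading))  # 1   -> treated simply as "on"
print(to_adc_code(analog_reading))     # 190 -> one of 256 possible steps
```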
The main reason for the high heat generation of a monitor PCB is the amount of power that it consumes. The more power a device consumes, the more heat it generates. The monitor PCB contains many components that are connected to each other by traces on the board. These components include transistors, capacitors, resistors and diodes. These components use electricity in order to function and they generate heat as a by-product of this process.
There are many other reasons for the high heat generation of a monitor PCB, such as:
- The use of multiple transistors in series to form an amplifier circuit, which creates a relatively large voltage drop across each transistor and results in a large current flow through each transistor.
- The use of many logic circuits on the same substrate, so that the total power consumption is large, and the circuit board cannot be cooled by convection.
- The use of high-speed ICs and other ICs with high power consumption, resulting in a large amount of heat generated by electronic components.
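As a back-of-the-envelope illustration of how power consumption turns into heat, the sketch below sums the power drawn by a few hypothetical components using P = V * I and then estimates the resulting temperature rise. All component values and the thermal resistance figure are made-up examples, not data for any real monitor board.

```python
# Every watt the board draws that is not delivered elsewhere ends up as heat.
# P = V * I gives the power dissipated by each component; summing them gives a
# rough idea of the total heat the PCB has to shed. Values are illustrative only.
components = {
    "LED backlight driver": (12.0, 0.50),   # (volts, amps) - assumed figures
    "scaler / controller IC": (3.3, 0.80),
    "linear regulator": (5.0, 0.30),
}

total_watts = sum(v * i for v, i in components.values())
print(f"Total dissipation: {total_watts:.1f} W")  # prints 10.1 W for these numbers

# With an assumed board/heatsink thermal resistance of 8 degC per watt,
# the temperature rise above ambient would be roughly:
thermal_resistance = 8.0  # degC/W, illustrative
print(f"Approximate temperature rise: {total_watts * thermal_resistance:.0f} degC")
```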
- Housing aspects, ranging from physical quality to neighborhood conditions, affect health in multiple ways, and research has established links between housing and a range of health outcomes.
- Targeted interventions at the nexus of health and housing, such as addressing asthma triggers and providing supportive housing to those experiencing homelessness, can improve health outcomes while reducing long-term healthcare expenditures.
- The Patient Protection and Affordable Care Act has created new opportunities to combine housing and health funds and test new coordinated models of care.
Safe, stable, and affordable housing can be a platform for better health outcomes. Station Center Family Housing in Union City, California, offers 157 affordable units, green design, walkability, and onsite recreational opportunities and services for residents. Bruce Damonte/David Baker Architects
A well-established and growing body of research shows that social and economic factors substantially influence individual health. According to one estimate, these nonmedical factors can account for up to 40 percent of all health outcomes.1 Defined by the World Health Organization as "the circumstances in which people are born, grow up, live, work, and age, and the systems put in place to deal with illness," many of the social determinants of health relate directly or indirectly to housing.2 Housing that is expensive, overcrowded, in poor physical condition, or located in a hazardous neighborhood environment can lead to negative health outcomes. Conversely, safe, stable, and affordable housing in an opportunity-rich neighborhood with access to health services can serve as a platform for improved health outcomes. Housing with supportive services and home and community-based services (HCBS) can be especially effective for improving health and reducing the number of high-cost visits to emergency departments for health services as well as reducing the need for institutional care among seniors and people with disabilities, including those experiencing chronic homelessness. Recent changes in healthcare policy, many of them associated with the Patient Protection and Affordable Care Act (ACA), have opened up new opportunities to use housing as a platform to achieve desirable health and fiscal outcomes, although some challenges remain.
Various aspects of housing, ranging from the physical quality of a home to the conditions of the surrounding neighborhood, affect residents' health. Among the social determinants of health, housing is a key lever.3 A robust body of research has established links between health and housing. For example, Coley et al. find that the physical quality of housing is a strong predictor of emotional and behavioral problems for low-income children and adolescents, and poor conditions such as the presence of lead, mold, pests, and inadequate heating or cooling adversely affect physical health.4 Environmental conditions such as clutter, loose rugs, electrical cords, and the absence of railings and grab bars can increase the risk of falls, especially for older people.5 Although the physical quality of the nation's housing stock has improved substantially over the past several decades, a small but significant stock of severely inadequate housing remains, affecting approximately a half-million households.6 Physically inadequate housing disproportionately affects poor and minority children.7
Lower housing cost burdens can give families greater flexibility to spend on healthy food and health services. Armstrong Townhomes, developed by BRIDGE Housing, offers 124 housing units in the high-cost housing market of San Francisco. Brian Rose
A lack of affordable housing stock contributes to overcrowding, housing instability, and homelessness.8 A lack of privacy and control, noise, overstimulation, and other conditions related to overcrowding can cause psychological distress. In 2012, 14 percent of children lived in overcrowded conditions. As with inadequate physical conditions, poor and minority children are disproportionately affected by overcrowded housing.9 Increased stress and a lack of adequate sleep can negatively affect mental and behavioral health.10 Overcrowded housing may also increase the transmission of infectious diseases.11 An insufficient supply of affordable housing limits residents' ability to live in neighborhoods with beneficial health effects and limits the stock available for conversion to permanent supportive housing. The lack of affordable housing that meets the needs of people with disabilities in community settings, in particular, restricts fulfillment of the Olmstead mandate — a 1999 Supreme Court decision that requires public entities to provide the least restrictive care settings for persons with disabilities.12 Homelessness causes new physical and mental health problems and makes existing problems worse. In addition to stress, living on the street or in shelters can increase exposure to communicable diseases, malnutrition, and harmful weather conditions and make accessing or managing medicine difficult.13
The relationship between housing affordability and health outcomes is complex. On one hand, high housing costs may reduce the amount of money available for residents to spend on food or health services. On the other hand, higher-priced housing in an area with beneficial neighborhood effects can improve residents' health.14 Research shows that high-quality neighborhoods reduce residents' exposure to environmental toxins and stressors such as crime. High-quality neighborhoods can also offer better access to health-related resources such as services, healthy food, medicine, and recreational opportunities.15
Because the evidence supporting a connection between housing-related factors and resident health is so compelling, considerable potential exists to both improve health and reduce healthcare costs through targeted, preventive, and low-cost care interventions at the nexus of health and housing. Home modifications such as installing grab bars in a shower, for example, can prevent falls, and interventions such as mold remediation reduce asthma. Evidence shows that these and similar investments that target housing-related social determinants of health not only improve health outcomes but also reduce health expenditures. The potential for gains is especially high for certain subpopulations — children, seniors, low-income households, individuals with disabilities, and individuals experiencing homelessness — particularly those with complex health and social issues who frequently use emergency departments and hospitals. This small, high-cost population has a disproportionate impact on public spending. For example, 5 percent of Medicaid-only enrollees accounted for nearly half of all spending for Medicaid-only enrollees each year from 2009 to 2011.16 The opportunity to leverage strategic investments into public savings is apparent. As Khadduri and Locke write, "[t]he combination — and coordination — of housing, health care, and supportive services, if effectively delivered and well targeted, can help to achieve savings in health care expenditures, which are major drivers in federal deficit projections."17
Within the literature connecting housing and health, several studies point to areas ripe for targeted interventions and investments to improve health for various subpopulations. Research shows, for example, that multicomponent home interventions aimed at addressing triggers such as mold, rodents, cockroaches, and dust mites are effective at reducing asthma symptoms among children and adolescents. In addition to improved health, successful asthma interventions promise to decrease the estimated 500,000 hospitalizations, 1.8 million emergency department visits, 12.3 million physician office visits, and 10.5 million school days missed each year, all of which amount to an estimated annual cost of $56 billion in medical expenses and lost productivity.18 Lead abatement has also proven to be an effective investment with considerable impact; nationwide, the number of children with lead poisoning dropped by approximately 75 percent from 1992 to 2012.19 Despite this remarkable progress, there is still room for additional gains, particularly among low-income children. The American Healthy Homes Survey finds that an estimated 22 percent of homes have one or more lead-based paint hazards and that low-income households have a higher prevalence of such hazards.20
Research suggests that supportive housing is an effective intervention for individuals experiencing chronic homelessness. Several studies find evidence that Housing First and permanent supportive housing interventions for people experiencing homelessness reduce the use of expensive healthcare services and promote better health. In a groundbreaking 2002 study, Culhane et al. report that a supportive housing intervention in New York City between 1989 and 1997 reduced the utilization of public services such as shelters, hospitals, and correctional facilities, producing savings of $16,281 per housing unit annually; weighed against the placement cost of $17,277 per unit annually, this left a net cost of $995.21 A more recent study by Larimer et al. of a Housing First intervention in Seattle for individuals experiencing chronic homelessness and severe alcohol problems finds that the intervention, which offered participants housing and access to voluntary case management and onsite services, reduced alcohol consumption as well as total costs compared with control groups after 6 months, with monthly costs averaging $2,449 per person.22 Research has also found that case management, along with coordinated care, is effective in reducing hospitalizations and emergency department visits by chronically ill adults experiencing homelessness.23 Many individuals experiencing chronic homelessness are also high-cost, frequent users of health and emergency services for whom supportive housing could be an important health intervention. Individuals experiencing homelessness are three times more likely than those in the general population to use an emergency department at least once a year.24
A GHHI hazard reduction worker paints a new lead-free window frame. Although much progress has been made toward reducing instances of lead poisoning, an estimated 22 percent of homes have one or more lead-based paint hazards. Photo by Andre Chung
For seniors and persons with disabilities in institutional long-term care, transitioning to home and community-based settings not only satisfies the Olmstead mandate but also is cost effective and the preference of many seniors. Research shows evidence of cost savings from using HCBS rather than institutional long-term care both on a per-person basis and, over the long term, at the state level.25 Environmental modifications can reduce health risks for seniors aging in their homes. Studies show that environmental assessments and modifications coupled with education and followup reduces falls among older persons; interventions that also add exercise and vision management are particularly effective.26 The Centers for Disease Control and Prevention estimates that in 2013, falls caused $34 billion in direct medical costs, indicating that reducing falls could reap substantial savings.27
Finally, research has shown health benefits for low-income people who move from high-poverty to low-poverty neighborhoods. An evaluation of the Moving to Opportunity for Fair Housing Demonstration Program, for example, finds that women who moved into lower-poverty neighborhoods were less likely to be obese and have diabetes, and women and girls who moved into lower-poverty neighborhoods were less likely to have psychological distress and depression compared with the control group.28
Recent changes in health policy have created new avenues for capitalizing on the housing-health opportunity. Signed into law in 2010, the Patient Protection and Affordable Care Act (ACA) reshaped the context for investments and interventions that leverage housing as a platform for improved health and fiscal outcomes. ACA places renewed emphasis on preventive, integrated, and holistic care and on the social determinants of health. More specifically, ACA opens new opportunities at the intersection of health and housing by extending Medicaid coverage to previously ineligible individuals, expanding the types of providers eligible for Medicaid reimbursement, and authorizing new coordinated models of care under Medicaid and enhancing existing models. ACA's emphasis on facilitating community integration and attention to holistic approaches that address the social determinants of health builds on lessons learned from the Money Follows the Person (MFP) Demonstration program, Real Choice Systems Change Grants for Community Living program, and Section 1915(c) HCBS waivers. Researchers find that under MFP, states that employed housing specialists were more successful at transitioning people from institutional to community-based care than those that did not offer such services.29 The Real Choice Systems Change program helped states forge and strengthen partnerships between Medicaid agencies and housing organizations to leverage non-Medicaid funding sources for supportive housing; after all, having the ability to offer additional supportive services means little if the supply of housing to be coupled with these services is insufficient.30 Finally, Section 1915(c) HCBS waivers allowed states to experiment with offering medical and supportive services in home and community settings to people needing institutional-level care. Such waivers remain important for giving states the flexibility to experiment, but ACA has now designated several care models that no longer require waivers.31
Kenneth, the first of 50 new residents, looks on as movers prepare his unit at the Heights of Collingswood apartments in Collingswood, New Jersey. Led by the Camden Coalition of Healthcare Providers, Heights of Collingswood offers permanent housing and wraparound support services to individuals experiencing chronic homelessness with high-cost medical conditions. April Saul, courtesy of the Camden Coalition of Healthcare Providers
Expanded Coverage. ACA has dramatically increased the number of individuals eligible to receive Medicaid, expanding the number of opportunities to fund supportive housing. Before ACA, Medicaid eligibility included pregnant women and children under 6 years of age with household incomes below 138 percent of the federal poverty level, children 6 to 18 years of age at or below 100 percent of the federal poverty level through the Children's Health Insurance Program, and disabled adults 65 years of age and older. As interpreted by the U.S. Supreme Court, ACA allows states to voluntarily expand eligibility to all individuals under age 65 with household incomes at or below 133 percent of the federal poverty level.32 As of September 1, 2015, 30 states and the District of Columbia have opted to expand Medicaid eligibility and 20 have not.33 Notably, this expansion opens Medicaid eligibility to many of the approximately 83,000 individuals and 13,000 members of families with children experiencing chronic homelessness on a given night nationwide.34
Newly Eligible Providers. The Centers for Medicare & Medicaid Services (CMS) has also issued rule changes that authorize Medicaid reimbursement to nonmedical providers of services recommended by doctors or other licensed practitioners (previously, only the doctors or licensed practitioners themselves could be reimbursed for providing services). The rule change is aimed at encouraging patients to use preventive services. CMS characterizes the change as "another tool for states to leverage in ensuring robust provision of services designed to assist beneficiaries in maintaining a healthy lifestyle and avoiding unnecessary healthcare costs."35 Janet Viveiros of the National Housing Conference writes that although the rule change does not authorize activities such as environmental abatement, it does "[open] up more opportunity for activities that address hazards in homes through assessments of asthma and lead poisoning risk in individual homes and the provision of educational materials to families about risks, treatments, and remediation options."36
New Incentives and Requirements. ACA introduces incentives and penalties that encourage healthcare providers to attend to the social determinants of health. The Hospital Readmissions Reduction Program, for example, reduces payments to hospitals with excess readmissions of patients within 30 days of discharge for designated conditions.37 Unstable housing is one of many factors that increase the risk of readmission, which motivates hospitals to work with supportive housing providers.38 Under ACA, tax-exempt nonprofit hospitals are required to conduct a community health needs assessment every three years and adopt an annually updated implementation strategy that addresses barriers to care and community health. The regulations governing community health needs assessments encourage collaboration between a community's hospitals and public health agencies, both in preparing the assessment and in planning its implementation.39 The regulations direct hospitals to "address social, behavioral, and environmental factors that influence health in the community."40 Nonprofit hospitals are also required to conduct community benefit activities. The Catholic Health Association of the United States, the Association of American Medical Colleges, and the American Hospital Association are pressing the Internal Revenue Service to recognize housing as a community benefit activity, arguing that "[i]t has been demonstrated that providing access to safe, quality and affordable housing can have a greater impact on the health of a community than more traditional clinical modalities."41
Newly Authorized Housing-Related Activities. ACA has allowed Medicaid greater flexibility to cover supportive services that could be coupled with housing. CMS issued guidance to clarify which housing-related services can be reimbursed for individuals with disabilities, older adults who need long-term services and supports, and those experiencing chronic homelessness. The authorized activities fall into three general categories: individual housing transition services (tenant screening, support to address tenancy barriers, assistance with housing searches and applications, move-in assistance), individual housing and tenancy-sustaining services (coaching, training, support, and interventions to maintain tenancy), and state-level collaborative activities related to housing (state agencies partnering with and providing data to housing agencies to plan for housing opportunities for Medicaid populations).42
Garden Village in Sacramento, California, is the first property in the nation built under HUD's Section 811 Project Rental Assistance Program. Section 811 provides funding to develop and subsidize rental housing with access to supportive services for very low- and extremely low-income adults with disabilities. Domus Development
New Care Models and Initiatives. ACA endorses and encourages models of care that emphasize holistic, preventive measures that address the social determinants of health. Some of these models have the potential to incorporate housing-related activities or provide the services of a supportive housing unit or building. Among these new or enhanced models and initiatives are Accountable Care Organizations (ACOs), health homes, community benefit requirements and community health needs assessments for hospitals, the Community First Choice (CFC) Option, and the HCBS State Plan Option.43 HUD's Section 811 Project Rental Assistance Demonstration program, authorized by the Frank Melville Supportive Housing Investment Act of 2010, also creates opportunities for collaboration to expand the supply of supportive housing.
The ACA recognized ACOs for Medicare patients and authorized a pediatric ACO demonstration for patients participating in Medicaid and the Children's Health Insurance Program. Several states have begun experimenting with ACOs for Medicaid populations. ACOs are voluntary networks of providers that coordinate care from various providers and share the risk and savings associated with the total cost of care for their patient population. Coordination breaks down silos of provider types and reduces the duplication of services and expenditures. ACOs use metrics to evaluate the quality of patient care and receive bonuses for meeting quality standards or meeting savings benchmarks.44 The 12 states with Medicaid ACOs have used a variety of payment systems. Although all Medicaid ACO-model programs have the same basic structure, they are known by other names in some states; for example, they are called Coordinated Care Organizations in Oregon and Regional Care Collaborative Organizations in Colorado.45 Viveiros suggests that ACOs have a strong incentive to partner with organizations that can address the social determinants of health such as housing providers, which can offer nonmedical services such as hospital discharge planning and can help residents enroll in Medicaid or an ACO.46
GHHI works to improve health by facilitating collaborative efforts to identify and remediate home health hazards. Photo by Andre Chung
Like ACOs, health homes involve one or more healthcare providers or a managed care organization that will coordinate care for an individual, including referrals to social services. Health homes, however, are designed specifically for people with chronic illnesses, and states can choose to target specific subpopulations. Target populations must meet at least one of three eligibility requirements: having a serious mental illness, having two or more chronic conditions, or having one chronic condition and being at risk of a second. Wisconsin, for example, chose to use health homes in four counties to serve individuals with HIV/AIDS who either have one other chronic condition or are at risk of another.47 CMS increased the Federal Medical Assistance Percentages (the rates used to calculate matching funds) for the first two years of the program to encourage states to adopt the model. Health homes do not have to be offered statewide. The 15 states that had health homes in place as of August 2014 vary in the populations they target and in their payment systems, but their programs have generally included people with mental illness and have used a per-member, per-month rate.48 As with other Medicaid programs, funding for health homes cannot be used directly for housing, but the target populations are likely to overlap with those served by affordable housing programs. The opportunity exists for health home providers to partner with other organizations for activities such as enrollment outreach and referrals to housing providers.49 States can decide what types of providers can serve as a health home (such as community mental health centers and physicians' offices). The National Alliance to End Homelessness points out that behavioral health agencies that already fund supportive housing could integrate health homes into their operations, leveraging their experience with and connection to supportive housing to benefit individuals with serious mental illness or chronic conditions who are experiencing homelessness.50
Five states to date have received approval to offer a Community First Choice (CFC) Option in their state plans. The CFC Option, authorized by ACA and added to the Social Security Act as Section 1915(k), reimburses person-centered HCBS such as assistance with activities of daily living and health-related tasks. The option is part of an effort to rebalance Medicaid spending on long-term services and supports. States can also reimburse costs associated with transitioning out of institutional care, including security deposits and first month's rent. CFC Option plans must be offered statewide.51 Oregon's K Plan provides assistance with daily living activities through an agency-provider model in which the state contracts with providers. Individuals eligible for nursing facility services and needing an institutional level of care as well as those who have an income at or below 150 percent of the federal poverty level who need an institutional level of care are eligible. In addition to personal assistance, the plan allows expenditures of up to $5,000 per modification for environmental modifications that substitute for human assistance and that are related to the person-centered plan; the plan will also allow expenditures for transition costs, including first month's rent and utilities.52
Under the 1915(i) HCBS State Plan Option, states can choose to target a specific population — a group with either certain risk factors or a particular disease.53 The state of Montana, for example, opted to target HCBS benefits to youth with a serious emotional disturbance who are also eligible for Medicaid. The program provides mental health services in a community setting for youth who might otherwise be placed in a Psychiatric Residential Treatment Facility, inpatient hospital, or therapeutic group home.54
HUD's Section 811 Project Rental Assistance Demonstration program likewise seeks to expand opportunities for individuals to receive needed care outside of costly institutional settings. The program leverages affordable housing resources such as low-income housing tax credits to increase the supply of supportive housing units. HUD awards funds to state housing agencies that then collaborate with state health agencies to create supportive housing.55 The first property in the nation to implement Section 811 Project Rental Assistance was Garden Village in Sacramento, California. Through a collaborative effort among the state's Housing Finance Agency, Department of Housing and Community Development, Department of Health Care Services, Department of Developmental Services, and Tax Credit Allocation Committee, along with local partner Domus Development, Garden Village offers supportive housing units for 11 extremely low-income people with disabilities.56 The residents were referred by California Community Transition coordinators or the Department of Developmental Services Regional Center to transition out of an institutional care setting.57
State health and housing agencies have a growing number of options and opportunities to meet the needs of residents, and they have considerable flexibility in choosing which programs to implement. Building on Minnesota's recent history of healthcare innovation, four Hennepin County organizations — the county's Human Services and Public Health Department, the Hennepin County Medical Center, Metropolitan Health Plan, and NorthPoint Health and Wellness Center — participate in an ACO called Hennepin Health. Hennepin Health integrates physical and mental health, social, and claims processing services for approximately 10,000 members. The ACO is the default assignment for Medicaid enrollees in the county who do not select an alternative health plan. Community health workers coordinate care and services that address the social determinants of health. Services include job placement supports, case management, and housing navigation.58 Hennepin Health receives a per-member, per-month payment regardless of the services utilized by members as well as a share of any overall savings. As a result, the ACO has an incentive to avoid unnecessary and expensive care. Because the housing situation of many Hennepin Health members is precarious — 30 to 50 percent are homeless, living in a shelter, or experiencing other housing instability — Hennepin Health uses existing contracts that the county's Human Services and Public Health Department has with housing providers to give Hennepin Health members priority admission to supportive housing. The ACO also employs staff members to provide housing counseling and navigation services along with other social services that might affect members' ability to remain housed. Viveiros notes that not enough affordable housing is available to meet the needs of all Hennepin Health members.59 The early results for Hennepin Health have been promising; emergency department and inpatient admissions decreased from the ACO's first to second years, and an overwhelming majority of enrollees indicated that they were satisfied with the quality of their care experience.60
A new supportive housing resident accesses her unit for the first time at an apartment building in New Jersey designed to serve youth coming out of homelessness. Corporation for Supportive Housing
In New York, the state's Medicaid Redesign Team has identified investment in supportive housing as a critical lever for improving housing and health outcomes as well as realizing Medicaid cost savings. The team recommended allocating funds for capital investment to create supportive housing units, operating expenses, rent subsidies, and supportive services with the aim of targeting patients with high and modifiable costs.61 The state requested authorization to reinvest a share of projected Medicaid cost savings into supportive housing capital and operating costs, but CMS rejected the proposal on the grounds that Medicaid is prohibited by law from paying for housing. New York has instead invested state funds to construct supportive housing units and subsidize rent. Jennifer Ho, HUD senior advisor for housing and services, says that the state misdirected its energies by asking CMS to do something it statutorily cannot do. Instead, Ho argues, state plans should focus on having Medicaid pay for all allowed services — a once-murky issue considerably clarified through the CMS informational bulletin delineating which housing-related activities can be covered — and maximizing federal Medicaid matching funds while also investing in housing through other funding streams.62 Peggy Bailey, director of health systems integration for the Corporation for Supportive Housing, says that the health sector does not understand the extent to which housing providers fund services that could be paid for by Medicaid. Both state and federal governments, she argues, could stretch their non-Medicaid investments in supportive housing if freed from paying for service coordination and other activities that Medicaid covers.63 HUD, for example, pays more than $400 million per year for services for individuals experiencing homelessness, a large portion of which could be paid for by Medicaid.64 Managed care organizations' interest in addressing social determinants of health to improve members' health outcomes, says Bailey, will motivate them to ensure that Medicaid covers more of those services so that housing providers can be free to invest more in housing that ultimately will benefit the care organization's members.65
A growing research base and expanding policy options have created new opportunities to leverage health and fiscal benefits from the nexus of housing and health, but significant challenges remain. Foremost among them, as Ho puts it, is that "[t]he budget environment is such that we're not doing what we know works, and not doing anything at [a] scale that matches the need."66 Congress, state legislatures, and other stakeholders will need to commit more resources to fully capitalize on these new opportunities. Even if Medicaid paid for all of the supportive services for which it is permitted to pay, the limited supply of affordable housing and the inadequacy of rental assistance will prevent stakeholders from providing enough supportive housing to meet the need. Currently, only about one in four income-eligible households receives federal rental assistance because of funding limitations, and similar shortfalls exist for the other population groups most likely to need housing with supports.67 For example, in 2011 only 36 percent of income-eligible households aged 62 and over without children received rental assistance.68 Despite the evidence that permanent supportive housing is a "proven, cost-effective solution to chronic homelessness," the U.S. Interagency Council on Homelessness says that "[s]hortfalls in the most recent budget passed by Congress have forced us to move the national goal to end chronic homelessness from 2015 to 2017."69
The traditional separation of housing and health policy presents a barrier to coordination. Institutions and interests are entrenched, and the systems are structured differently: Medicaid is administered at the state level, and housing is produced and administered by developers and public housing agencies, usually without coordination at the state level. Both systems are complex, making it difficult for housing providers to navigate Medicaid and vice versa.70 Efforts to bridge these gaps, however, are emerging, as demonstrated by the Section 811 Project Rental Assistance Demonstration program's partnerships between state health and housing agencies and collaboration and communication among federal agencies.71 National housing organizations and advocates face the added challenge of adhering to different sets of policies and rules for each state.72 For example, as the Corporation for Supportive Housing helps housing providers determine whether or not they can be reimbursed for supportive services and, if so, become certified to bill Medicaid, it must make sure it is complying with state-specific regulations.
Although ACA offers many new opportunities, understanding and implementing it will be difficult, and the potential housing implications are just one aspect. Despite the solid evidence base showing that housing is a key determinant of health, getting and maintaining supportive housing as an administrative priority may prove difficult. ACA is still in the early stages of implementation, and states are just beginning to experiment with new models of care delivery and authorized housing-related activities. Already, however, major hurdles are apparent. First, as discussed above, the limited supplies of affordable housing and rental assistance will restrict efforts to use ACA programs to expand permanent supportive housing. Second, states may face challenges in their attempts to target high-cost, high-need individuals and enroll them in Medicaid. Adults experiencing chronic homelessness, for example, face barriers to enrollment and may require targeted outreach.73 Housing agencies and other housing providers can assist through providing outreach, helping clients navigate the enrollment process, or by becoming certified so that they can enroll clients directly into Medicaid.74 It may also be difficult to identify high-cost individuals before they incur substantial expenses — people who have high costs one year do not necessarily have similar needs the next year, and most people who experience homelessness do so only temporarily.75 Bailey notes that, although evidence exists that housing stability reduces the use of health services, less is known about housing stability and specific health outcomes, with the exception of HIV/AIDS. In some cases, managed care providers have incentives based on particular health outcomes, and more research could investigate the impact of supportive housing on specific conditions such as diabetes or heart disease. Such research could shed light on which individuals would be most likely to benefit from supportive housing.76 And although many high-cost services are avoidable, the Medicaid and the rental assistance populations include those groups and individuals with the most persistent health disparities.77 Finally, although most of the new models discussed above are available to states that do not expand Medicaid eligibility, the reach of such programs will be limited compared with states that have expanded eligibility.
Investment in stable, affordable, healthy housing in safe neighborhoods with access to healthcare services and a variety of amenities promises improved health for residents of all types. Housing that adds supportive services for those who need them, particularly seniors and individuals with disabilities who are experiencing homelessness or who need institutional levels of care, also promises to substantially improve health outcomes. Addressing the social determinants of health that are related to housing — investing "upstream" to prevent and treat health issues before they become more serious — may substantially reduce public and private healthcare costs. Through its expansion of Medicaid eligibility and new models of healthcare service delivery and payment, ACA, along with concurrent changes in healthcare policy, creates numerous opportunities and incentives to pursue targeted investments that leverage housing as a platform for improved health and fiscal outcomes. Capitalizing on this opportunity will require collaboration among healthcare and housing providers, research to identify best practices, and a commitment of the resources needed to take proven models to scale.
- Deborah Bachrach, Helen Pfister, Kier Wallis, and Mindy Lipson. 2014. "Addressing Patients' Social Needs: An Emerging Business Case for Provider Investment," The Commonwealth Fund. See also: Alvin R. Tarlov. 1999. "Public Policy Frameworks for Improving Population Health," Annals of the New York Academy of Sciences 896, 281–93.
- World Health Organization. "Social Determinants of Health: Key Concepts" (www.who.int/social_determinants/thecommission/finalreport/key_concepts/en). Accessed 2 September 2015.
- James Krieger and Donna L. Higgins. 2002. "Housing and Health: Time Again for Public Health Action," American Journal of Public Health 92:5, 758.
- Rebekah Levine Coley, Tama Leventhal, Alicia Doyle Lynch, and Melissa Kull. 2013. "Relations between Housing Characteristics and the Well-Being of Low-Income Children and Adolescents," Developmental Psychology 49:9, 1775–85.
- Mary E. Tinetti. 2003. "Preventing Falls in Elderly Persons," New England Journal of Medicine 348:1, 42.
- Barry L. Steffen, George R. Carter, Marge Martin, Danilo Pelletiere, David A. Vandenbroucke, and Yunn-Gann David Yao. 2015. Worst Case Housing Needs: 2015 Report to Congress. Office of Policy Development and Research, U.S. Department of Housing and Urban Development, 3.
- U.S. Department of Housing and Urban Development. 2014. "Housing’s and Neighborhoods’ Role in Shaping Children's Future," Evidence Matters (Fall), 1.
- Nabihah Maqbool, Janet Viveiros, and Mindy Ault. 2015. "The Impacts of Affordable Housing on Health: A Research Summary," Center for Housing Policy, 1.
- Claudia D. Solari and Robert D. Mare. 2012. "Housing Crowding Effects on Children's Wellbeing," Social Science Research 41:2, 464–76; Kids Count Data Center, Annie E. Casey Foundation. 2014. "Children Living in Crowded Housing" (www.datacenter.kidscount.org/data/tables/67-children-living-in-crowded-housing#detailed/1/any/false/868,867,133,38,35/any/368,369). Accessed 13 January 2016; Econometrica, Inc., Kevin S. Blake, Rebecca L. Kellerson, Aleksandra Simic, and ICF International. 2007. "Measuring Overcrowding in Housing," U.S. Department of Housing and Urban Development.
- Solari and Mare.
- Wilder Research. 2007. "Homeless and near-homeless people on northern Minnesota Indian reservations," 3.
- "About Olmstead," Americans with Disabilities Act website (www.ada.gov/olmstead/olmstead_about.htm). Accessed 22 October 2015.
- Margot B. Kushel, Sharon Perry, David Bangsberg, Richard Clark, and Andrew R. Moss. 2002. "Emergency Department Use Among the Homeless and Marginally Housed: Results From a Community-Based Study," American Journal of Public Health 92:5, 778.
- Maqbool et al., 2; Tama Leventhal and Sandra Newman. 2010. "Housing and Child Development," Children and Youth Services Review 32:9, 1165–74.
- Commission to Build a Healthier America. 2008. "Where We Live Matters for Our Health: Neighborhoods and Health," Robert Wood Johnson Foundation.
- U.S. Government Accountability Office. 2015. "Medicaid: A Small Share of Enrollees Consistently Accounted for a Large Share of Expenditures."
- Jill Khadduri and Gretchen Locke. 2012. "Making Subsidized Rental Housing a Platform for Improved Health for Vulnerable Populations," Abt Associates.
- Deidre D. Crocker, Stella Kinyota, Gema G. Dumitru, Colin B. Ligon, Elizabeth J. Herman, Jill M. Ferdinands, David P. Hopkins, Briana M. Lawrence, and Theresa A. Sipe. 2011. "Effectiveness of Home-Based, Multi-Trigger, Multicomponent Interventions with an Environmental Focus for Reducing Asthma Morbidity: A Community Guide Systematic Review," American Journal of Preventative Medicine 41:2S1, S5–S32; Jack Meyer, Gaylee Morgan, and Mike Nardone. 2015. "Sustainable Funding and Business Case for GHHI Home Interventions for Asthma Patients," Health Management Associates; Ruth Ann Norton and Brendan Wade Brown. 2014. "Green & Healthy Homes Initiative: Improving Health, Economic, and Social Outcomes Through Integrated Housing Interventions," Environmental Justice 7:6, 151; American College of Allergy, Asthma & Immunology. 2012. "Asthma Results in Missed Sleep, School Days for Children" (acaai.org/news/asthma-results-missed-sleep-school-days-children). Accessed 18 December 2015.
- Peter Ashley. 2012. "HUD Working to Reduce Racial and Ethnic Asthma Disparities Among our Children," The HUDdle, 4 June. Accessed 15 October 2015.
- U.S. Department of Housing and Urban Development, Office of Healthy Homes and Lead Hazard Control. 2011. "American Healthy Homes Survey: Lead and Arsenic Findings," 4.
- Dennis P. Culhane, Stephen Metraux, and Trevor Hadley. 2002. "Public Service Reductions Associated with Placement of Homeless Persons with Severe Mental Illness in Supportive Housing," Housing Policy Debate 13:1, 107.
- Mary E. Larimer, Daniel K. Malone, Michelle D. Garner, David C. Atkins, Bonnie Burlingham, Heather S. Lonczak, Kenneth Tanzer, Joshua Ginzler, Seema L. Clifasefi, William G. Hobson, and G. Alan Marlatt. 2009. "Health Care and Public Service Use and Costs Before and After Provision of Housing for Chronically Homeless Persons With Severe Alcohol Problems," Journal of the American Medical Association 301:13, 1355–6.
- Laura S. Sadowski, Romina A. Kee, Tyler J. VanderWeele, and David Buchanan. 2009. "Effect of a Housing and Case Management Program on Emergency Department Visits and Hospitalizations Among Chronically Ill Homeless Adults: A Randomized Trial," Journal of the American Medical Association 301:17, 1771–8.
- Kushel et al., 778.
- H. Stephen Kaye, Mitchell P. LaPlante, and Charlene Harrington. 2009. "Do Noninstitutional Long-Term Care Services Reduce Medicaid Spending?" Health Affairs 28:1; 263, 270.
- Carla A. Chase, Kathryn Mann, Sarah Wasek, and Marian Arbesman. 2012. "Systematic Review of the Effect of Home Modification and Fall Prevention Programs on Falls and the Performance of Community-Dwelling Older Adults," American Journal of Occupational Therapy 66:3, 284.
- Centers for Disease Control and Prevention. "Costs of Falls Among Older Adults" (www.cdc.gov/homeandrecreationalsafety/falls/fallcost.html). Accessed 18 December 2015.
- Lisa Sanbonmatsu, Jens Ludwig, Lawrence F. Katz, Lisa A. Gennetian, Greg J. Duncan, Ronald C. Kessler, Emma Adam, Thomas W. McDade, and Stacy Tessler Lindau. 2011. Moving to Opportunity for Fair Housing Demonstration Program: Final Impacts Evaluation. U.S. Department of Housing and Urban Development, Office of Policy Development and Research.
- Debra J. Lipson, Christal Stone Valenzano, and Susan R. Williams. 2011. "What Determines Progress in State MFP Transition Programs?" Mathematica Policy Research, 7.
- Matthew Kehn and Debra Lipson. 2014. "The Real Choice Systems Change Grant: Building Sustainable Partnerships for Housing — Final Report," Mathematica Policy Research.
- Janet Viveiros. 2015. "Affordable Housing’s Place in Health Care: Opportunities Created by the Affordable Care Act and Medicaid Reform," National Housing Conference, 3.
- Viveiros 2015, 4–5; U.S. Interagency Council on Homelessness. "The Affordable Care Act's Role in Preventing and Ending Homelessness" (www.usich.gov/tools-for-action/aca-fact-sheet). Accessed 19 October 2015.
- Henry J. Kaiser Family Foundation. "Status of State Action on the Medicaid Expansion Decision" (kff.org/health-reform/state-indicator/state-activity-around-expanding-medicaid-under-the-affordable-care-act/). Accessed 19 October 2015.
- Jack Tsai, Robert A. Rosenheck, Dennis P. Culhane, and Samantha Artiga. 2013. "Medicaid Expansion: Chronically Homeless Adults Will Need Targeted Enrollment and Access to a Broad Range of Services," Health Affairs 32:9, 1553.
- Cindy Mann. 2013. "Update on Preventative Services Initiatives," CMCS Informational Bulletin (27 November), 1–2.
- Viveiros 2015, 9.
- Centers for Medicare & Medicaid Services. "Readmissions Reduction Program" (www.cms.gov/medicare/medicare-fee-for-service-payment/acuteinpatientpps/readmissions-reduction-program.html). Accessed 23 October 2015.
- Collaborative Healthcare Strategies, Amy Boutwell, John Snow, James Maxwell, Angel Bourgoin, and Sarah Genetti. 2014. "Hospital Guide to Reducing Medicaid Readmissions," Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, 3.
- Sara Rosenbaum. 2015. "Additional Requirements for Charitable Hospitals: Final Rules on Community Health Needs Assessments and Financial Assistance," Health Affairs Blog (27 January). Accessed 20 October 2015.
- Department of the Treasury, Internal Revenue Service. "Additional Requirements for Charitable Hospitals; Community Health Needs Assessments for Charitable Hospitals; Requirement of a Section 4959 Excise Tax Return and Time for Filing Return," Federal Register 79:250, 79002.
- Lisa Gilden, Janis M. Orlowski, and Melinda Reid Hatton. Email to Sunita Lough and Tamara Ripperda, Internal Revenue Service, 14 July 2015.
- Vikki Wachino. 2015. "Coverage of Housing-Related Activities and Services for Individuals with Disabilities," CMCS Informational Bulletin (26 June).
- Viveiros 2015, 5–10.
- Kaiser Commission on Medicaid and the Uninsured. 2012. "Emerging Medicaid Accountable Care Organizations: The Role of Managed Care," Henry J. Kaiser Family Foundation, 1–3.
- "Patient Centered Medical Homes (PCMH) and Accountable Care Organizations (ACO)," The Henry J. Kaiser Family Foundation website (kff.org/medicaid/state-indicator/patient-centered-medical-homes-pcmh-and-accountable-care-organizations-aco/). Accessed 8 October 2015.
- Viveiros 2015, 5.
- Julia Paradise and Mike Nardone. 2014. "Medicaid Health Homes: A Profile of Newer Programs," Kaiser Commission on Medicaid and the Uninsured; 4, 15.
- Paradise and Nardone, 4, 15; Viveiros 2015, 6.
- Viveiros 2015, 6.
- National Alliance to End Homelessness. 2012. "Medicaid Health Homes: Emerging Models and Implications for Solutions to Chronic Homelessness."
- Kathleen Sebelius. 2014. "Report to Congress: Community First Choice," U.S. Department of Health and Human Services.
- NORC. 2014. "Research Summary: ACA Section 2401, Community First Choice Option (Section 1915(k) of the Social Security Act; Oregon State Plan Amendment Summary," University of Chicago.
- Centers for Medicare & Medicaid Services. "Home & Community-Based Services 1915(i)" (www.medicaid.gov/medicaid-chip-program-information/by-topics/long-term-services-and-supports/home-and-community-based-services/home-and-community-based-services-1915-i.html). Accessed 30 September 2015.
- Montana Department of Public Health and Human Services, Children’s Mental Health Bureau. 2014. "1915(i) Home and Community Based Services State Plan Program for Youth with Serious Emotional Disturbance (SED) Policy Manual," 5.
- U.S. Department of Housing and Urban Development. 2015. "Section 811 Project Rental Assistance (PRA) Program" (www.hudexchange.info/programs/811-pra/). Accessed 1 October 2015.
- U.S. Department of Housing and Urban Development. 2015. "Deputy Assistant Secretary for the Office of Multifamily Housing Programs Visits First 811 PRA Property" (www.hudexchange.info/news/deputy-assistant-secretary-for-the-office-of-multifamily-housing-programs-visits-first-811-pra-property/). Accessed 1 October 2015.
- California Housing Finance Agency. 2015. "California First in Nation to House Disabled Residents through New HUD Funding: U.S. Department of Housing and Urban Development Official visited April 29," press release, 30 April.
- Lynn A. Blewett and Ross A. Owen. "Accountable Care for the Poor and Underserved: Minnesota's Hennepin Health Model," American Journal of Public Health 105:4, 622–3; Janet Viveiros. 2015. "Addressing Housing as a Health Care Treatment," Housing and Health: Innovations in the Field (June); Jennifer N. Edwards. 2013. "Health Care Payment and Delivery Reform in Minnesota Medicaid," Commonwealth Fund.
- Viveiros 2015. "Addressing Housing as a Health Care Treatment."
- Blewett and Owen, 624.
- Kelly M. Doran, Elizabeth J. Misa, and Nirav R. Shah. 2013. "Housing as Health Care — New York's Boundary-Crossing Experiment," New England Journal of Medicine 369:25, 2374–6.
- Interview with Jennifer Ho, 23 October 2015.
- Interview with Peggy Bailey, 26 October 2015.
- Jennifer Ho. 2015. Speech delivered at "The Intersection of Health and Housing: Opportunities and Challenges," Centene and Alliance for Health Reform, Washington, DC, 7 August.
- Interview with Peggy Bailey.
- Ho 2015.
- Center on Budget and Policy Priorities. "Policy Basics: Federal Rental Assistance" (www.cbpp.org/research/housing/policy-basics-federal-rental-assistance). Accessed 23 October 2015.
- Joint Center for Housing Studies of Harvard University. 2014. "Housing America's Older Adults: Meeting the Needs of an Aging Population," 17.
- U.S. Interagency Council on Homelessness. 2015. "The President’s 2016 Budget: Fact Sheet on Homelessness Assistance," 2.
- Interview with Jennifer Ho.
- Interview with Jennifer Ho; Ho 2015.
- Viveiros 2015. "Affordable Housing's Place in Health Care," 13.
- Tsai et al., 1557.
- Council of Large Public Housing Authorities and Corporation for Supportive Housing. 2014. "Opportunities for Health Partnerships through the Affordable Care Act."
- Doran et al., 2375–6.
- Interview with Peggy Bailey.
- Interview with Jennifer Ho.
Just like mac & cheese or PB & J, B vitamins are better together. B vitamins work synergistically to help support the metabolism of macronutrients (proteins, carbohydrates, and fats). They're often referred to as the "B complex" and are usually found together in nature.
Let’s take a look at the most common B vitamins, their role in your body and where you can find them.
B1 (Thiamine or Thiamin)1
This little spark plug acts as a catalyst for many reactions in the body, including energy metabolism, which supports the growth and function of cells. Let the sparks fly by enjoying foods including fortified cereals, acorn squash, and black beans.
B2 (Riboflavin)
Working synergistically with other B vitamins, riboflavin plays a role as part of two co-enzymes involved in energy production and the metabolism of fats. Add B2 to your day with Portobello mushrooms and almonds.
B3 (Niacin)
Niacin aids in the function of nerves. No need to get flushed when eating food-based niacin from foods like peanuts. It's true that some people may experience a warm sensation, redness, and even itching known as a "niacin flush" when taking a niacin supplement, but it is not common to have this reaction when eating foods.
B5 (Pantothenic Acid)4
Pantothenic acid plays a role in the digestion of macronutrients: carbs, protein, and fats. You can find pantothenic acid in a wide variety of foods including avocados and sweet potatoes.5
B6 (Pyridoxine)
Pyridoxine plays an important role in many functions in the body. It's involved in over 100 enzyme reactions and plays a role in both cognitive development and immune function. Chickpeas and potatoes are both excellent sources of pyridoxine.
B7 (Biotin)
Biotin plays a role in providing energy through the efficient breakdown of macronutrients, carbs, protein and fat.7 Biotin can be found in those avocados with which you love to top your toast.8
B9 (Folate or Folic Acid)9
Folate (folic acid when consumed as a supplement) is needed to make DNA and other genetic material and for cell division. Folate is found in vegetables such as spinach, romaine lettuce, and Brussels sprouts, and in this tropical folate-rich smoothie recipe!
B12 (Cobalamin)10
Vitamin B12 is needed for the formation of red blood cells and plays a role in neurological function. B12 can be found in many fortified breakfast cereals, often providing 6mcg or 100% of your daily value.
For a convenient way to add B vitamins to your day, try Vega One® Organic All-in-One Shake to your morning smoothie.
If you’re concerned about your vitamin B intake, we recommend speaking to your health care practitioner, working together to find the vitamin intake that’s suited for you.
- National Institutes of Health. (2013). Dietary supplement fact sheet: Thiamin. Retrieved August 28, 2016, from https://ods.od.nih.gov/factsheets/thiamin-healthprofessional/
- National Institutes of Health. (2013). Dietary supplement fact sheet: Riboflavin. Retrieved August 28, 2016, from https://ods.od.nih.gov/factsheets/riboflavin-healthprofessional/
- Niacin: MedlinePlus Medical Encyclopedia. Retrieved August 28, 2016, from https://medlineplus.gov/ency/article/002409.htm
- Pantothenic acid: MedlinePlus Medical Encyclopedia. (n.d.). Retrieved August 28, 2016, from https://medlineplus.gov/druginfo/natural/853.html
- Micronutrient Information Center. (n.d.). Retrieved August 28, 2016, from http://lpi.oregonstate.edu/mic/vitamins/pantothenic-acid#food-sources
- National Institutes of Health. (2013). Dietary supplement fact sheet: Vitamin B6. Retrieved August 28, 2016, from https://ods.od.nih.gov/factsheets/vitaminb6-healthprofessional/
- Biotin: MedlinePlus Medical Encyclopedia. (n.d.). Retrieved August 28, 2016, from https://medlineplus.gov/druginfo/natural/313.html
- Micronutrient Information Center. (n.d.). Retrieved August 28, 2016, from http://lpi.oregonstate.edu/mic/vitamins/biotin
- National Institutes of Health. (2013). QuickFacts: Folate. Retrieved August 28, 2016, from http://ods.od.nih.gov/factsheets/folate-quickfacts/
- National Institutes of Health. (2013). QuickFacts: Vitamin B12. Retrieved August 28, 2016, from https://ods.od.nih.gov/factsheets/vitaminb12-healthprofessional/
10975 Introduction to Programming - Informator Utbildning
Web developers are focused on designing and creating websites; they work on the website itself. Database developers, as you can imagine, work behind the scenes, collecting and organizing data. Computer programmers write and rewrite programs until they are free of errors. They use a workflow chart and coding formulas until the desired information is produced. Attention to detail and patience will set you apart in this coding career.
Systems developers develop the operating software on which all our programs and processes run. They deal with how to schedule the different processes, switch between two processes, manage files in the operating system, and handle other such tasks. Data types are used within type systems, which offer various ways of defining, implementing, and using them.
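That last point about data types and type systems is easiest to see with a concrete example. Here is a minimal Python sketch (illustrative only — how strictly types are enforced varies between languages):

```python
# A handful of built-in data types; the type system assigns each value a type
# that determines which operations are valid on it.
values = [42, 3.14, True, "text", [1, 2, 3], ("a", "b"), {"key": "value"}, {1, 2, 3}]

for value in values:
    print(f"{value!r:<22} -> {type(value).__name__}")

# Mixing incompatible types is caught at run time in Python:
try:
    "2" + 2
except TypeError as err:
    print("TypeError:", err)
```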
Another type codes with the best programming conventions and keeps the code nice and clean; this programmer always uses a good code editor and wouldn't write a line of code in Notepad++. Among the types of programmers you should know are the software programmer, the web programmer, and the systems programmer. The programming languages used depend on the platform, and the usual suspects are Windows, Linux, Mac OS X, Android, and iOS.
Computer programmers may specialize in updating existing software. There are many different types of computer science jobs to choose from. Deciding on a computer-related career can be challenging because of the many choices.
Different types of computer software: computer software is the tool that helps computer users interact with the machine or the hardware in a computer. Without computer software, you will not be able to make the computer run, and working on computers would not be as easy as it is today. Some of these types of programmers seem like opposites; they look at the other group and often think exactly that. They simply have different ways of handling the same problems.
The opioid epidemic is a public health crisis that has been developing and evolving since the 1990s. Opioids include prescription painkillers like OxyContin, Vicodin, or Percocet, as well as street drugs such as heroin or fentanyl. Although prescription opioids can be used responsibly and safely for pain relief, they also have a high potential for misuse, which can be very dangerous. In this article, I talk to an expert on the issue, Dr. Khary Rigg, a professor from USF, who has been conducting substance use research for 15 years. Dr. Rigg has a PhD in medical sociology and completed a post-doctoral fellowship in health services research, where he trained with healthcare providers treating addiction. Dr. Rigg has authored numerous scientific publications on the topic of substance use.
So how did the epidemic start?
I asked Dr. Rigg if overprescription of opioids contributed to the problem. In short, his answer was "yes", but the issue is a little more complicated than that and has changed over the decades. According to Dr. Rigg, "the epidemic was really started through overprescribing." In the 1990s, when OxyContin was first being introduced, doctors were quick to prescribe it, mostly because they were misled by the companies manufacturing the drug. In fact, Dr. Rigg states that "pharmaceutical companies knew how addictive opioids were, but purposefully hid it." Another reason these drugs became so problematic is that they were "aggressively marketed", especially to people living in low-income areas, rural communities, people living with chronic pain, as well as to older adults. Prescription painkillers, in particular OxyContin, were largely responsible for most of the opioid overdose deaths in the late 1990s. However, once deaths started rapidly rising, society took notice. The prescribing of these drugs became much more closely monitored, the media launched campaigns to raise awareness, "pill mills" (clinics which distributed these prescriptions, no questions asked) were being shut down by law enforcement, and doctors received more education on the safe prescribing of opioids. Because of this corrective action, prescription-related opioid deaths tapered, but heroin (a street-level opioid) deaths began to sharply increase. Starting in 2015, we saw fentanyl, a synthetic opioid, become responsible for the most overdose deaths across the nation.
What poses a greater threat- prescription medications or street drugs?
Prescription drugs are supposed to be given to a patient under the supervision of a doctor, and there is a great amount of research on the effects of these drugs in order for them to be FDA-approved, so surely they are safer than street drugs, right? Not so fast! Many people think there is no harm in taking an extra Xanax or Percocet, or borrowing some pills from a friend's prescription. However, according to Dr. Rigg, "in some cases, prescription drugs can be just as dangerous as street drugs." Why? Well for starters, if you take a drug that is not meant for you, you are not under that doctor's supervision. That drug was prescribed for the other person, with his or her unique medical history in mind. So while a dose may be safe for your friend, it may not be safe for you. Also, you should never take more than the prescribed dose. Doing so could lead to overdose. Also, it is important for people to remember that using prescription drugs that are not yours- or giving a drug prescribed for you to someone else- could result in legal trouble- both criminal and civil. While you may be arrested on a felony charge for the illegal use of a prescription, you could also be held liable if the person you gave the drug to overdosed. So say you give the rest of your prescription for a painkiller to a friend and that friend ends up overdosing and dying. If the prescription can be traced back to you, you can be held liable for his or her death. Dr. Rigg stresses that "no drug use comes without risk." The moral of the story is never share prescriptions, and never take more than the recommended dose.
How does misuse of these drugs lead to addiction?
It is important to remember that there is no one set pathway to addiction- it depends on the individual and his or her unique situation. However, there are risk factors commonly found in people's backgrounds when they begin using. Dr. Rigg, for his research, has interviewed roughly a thousand individuals who suffer from drug addiction. What is the most common pathway he found? "The most common pathway was through a friend or family member." Others start using to cope with trauma, stress, or emotional turmoil. Others want to experiment or explore a new sensation. We would like to think that it would never happen to us. Dr. Rigg's response to this false belief is that "addiction doesn't discriminate and that no one is immune to the disease of addiction". He continues by adding that "almost every demographic group has been touched by this." We even see it in celebrities- Demi Lovato and Robert Downey Jr. are examples of some celebrities who have gone public with their histories of addiction. Sadly, Mac Miller, the famous music artist, died from an overdose. Cocaine and fentanyl were found in his system at the time of his death. However, it is critical that we understand that this issue is not just affecting white, middle or upper-class people. The media tends to heavily cover these cases, but there are other communities not receiving media coverage. There is a dramatic increase in the opioid deaths of African-American and Latinx citizens. This is not being discussed enough. Despite the large number of deaths happening in these minority communities, there are very few resources, such as drug treatment centers or providers of mental health care. We also see the elderly being affected by this issue. Older people often have to take a lot of prescriptions, and may undergo painful medical procedures or live with chronic pain as a result of their age. Whenever there is a large number of medications, there is the danger of overdose. People almost never overdose on a single drug- rather, it is an interaction of drugs that creates a dangerous situation. One of the most dangerous combinations is of two or more depressants. Depressants are drugs that slow down our central nervous system- our breathing, our heart rate, and our blood pressure. Opioids are depressants, but so are alcohol, Xanax, and drugs taken to treat anxiety. If taken in excess, or along with another drug, these drugs can stop or dangerously slow people's breathing. While some groups may have a higher risk of addiction than others, Dr. Rigg emphasizes that "addiction can happen to anyone."
What sort of questions should we ask if a doctor prescribes an opioid?
Here is some advice from Dr. Rigg on the type of questions to ask or information to bear in mind if you are given an opioid prescription:
Ask: "What medications should not be taken with prescription opioids?"
Advice: "Benzodiazepines, such as Xanax, are the most dangerous to take with opioids, but substances, like alcohol, sleeping medications, or anti-anxiety medications may also interact poorly with opioids and may result in respiratory depression. Have your doctor to give you a list of substances to avoid when taking opioids for pain."
Ask: "How do I safely store my opioid medications and dispose of it?"
Advice: "It's probably best to store opioids in their original packaging. Also, don't keep them in the same location where over the counter medications like Advil are stored. Rather, be sure to keep them in a locked cabinet or other location where people (especially children) can't easily access them. Also, it is advisable to dispose of opioids after your pain is gone. The most ideal option for disposal is to find a collection site at a local pharmacy or police station. If this is not possible, then it is acceptable to flush some opioids down the toilet like Percocet or Vicodin. However, if your opioid medication is not approved to be flushed, place the pills in a sealed Ziploc bag with either coffee grounds or dirt and throw it in your trash. To find out if your pills are safe to flush, visit fda.gov"
Ask: "If I have a substance use disorder or mental illness, can I take prescription opioids?"
Advice: Yes, you can, but discuss it thoroughly with your doctor before deciding to use prescription opioids. Research shows that individuals with these conditions are at higher risk for developing opioid-related problems. Patients should also be aware that there are a wide array of pain management techniques that do not involve the use of opioids. Some of these include cognitive behavioral therapy, acupuncture, and non-opioid medications.
"If your doctor is not aware that you have a history of mental health illness or addictive disorders, please tell them."
Do not be afraid or ashamed, it is an important factor for your care provider to consider when selecting the safest and most beneficial treatment option for you.
How can we help a friend or loved one we are concerned about?
If substance use is negatively affecting someone you love (impairing their ability to function at work, school, or fulfill other responsibilities), or their use is high-risk, then that is a problem. If you are concerned, approach that person from a place of love for them. Express your concern without judgement and offer your support and resources if needed. If your friend or loved one resists treatment, then propose using harm-reduction strategies. Included at the end of this article is a list of resources.
What are some harmful myths about the opioid crisis?
There is a pervasive myth circulating that opioid use is only affecting white people. Dr. Rigg states that "this myth is largely perpetuated by the uneven coverage of the epidemic in the media." In fact, the demographic groups that are experiencing the fastest increase in the opioid epidemic are African Americans and Hispanics. Native Americans also have a high percentage of involvement in opioids for their population, but none of these demographics are receiving attention from the media or politicians. The people in these demographics deserve awareness, resources, and media attention. We cannot forget that Americans of every race and ethnicity are being affected by this tragedy. Furthermore, the opioid epidemic is not just confined to pharmaceuticals anymore. Now most deaths are associated with street drugs, such as heroin and fentanyl. People who use intravenously are at an extremely high risk for contracting diseases, such as HIV or Hepatitis C, and developing other health issues. We have to stop using terms like "junkie" to refer to people who use these drugs. They are people who have addictions, and they are every bit as deserving of help as the people who are addicted to prescription drugs. A successful, evidence-based practice in lowering the risk of transmitting these diseases is clean needle exchange programs, also called syringe service programs. Unfortunately, this is controversial, as some people view this harm reduction strategy as enabling drug use. The CDC, along with other federal agencies, has reached the conclusion (based on extensive studies) that clean needle exchanges create safer communities by reducing the prevalence of HIV. Clean needle exchanges do not increase crime or drug use- they just make it safer for people who, for whatever reason, are not in treatment. It is actually more cost-effective, as well. The evidence backing these claims is strong.
The single most important piece of information about this issue:
"We've got to stop this war on drugs." Dr. Rigg says. The war on drugs, launched in the 1980s, has proven to be ineffective. This war has lasted far too long and is creating more problems than solutions, such as the waste of billions of dollars and the ballooning of the prison population. Most people who finally get help for their addictions, end up doing so as a result of involvement in the criminal justice system. People are serving very long sentences in prison for possession charges. Yet, drug use is still prevalent. Our perspective needs to shift if we want to resolve this issue. People who are addicted to drugs are not bad people- they are in the words of Dr. Rigg, are "people who deserve to be treated with dignity and we should be doing everything within our power to keep them alive and healthy". Dr. Rigg wants to remind readers that addiction and substance use is a health issue and it is absolutely imperative that it be treated as such. "We need to be fighting the opioid epidemic through public health interventions" says Dr. Rigg. Dr. Rigg concludes by saying that this is "not a criminal problem, and we should be fighting this epidemic with better prevention, treatment, and harm reduction solutions, rather than with more police and outdated drug policy." As for myself, I want everyone to know that we have to end the stigma and judgement surrounding substance use and addiction.
For more resources:
If you are concerned that you or someone else is experiencing substance use, do not be ashamed and look into the following resources:
To learn more about the opioid epidemic:
You can follow Dr. Rigg on Twitter (@krigg01) or read his blog:
A big thank you to Khary Rigg for all the outstanding information! This article is in remembrance of those who lost their lives to substance use and in honor of those living with addiction.
- Opioid crackdown forces pain patients to taper off drugs they say ... ›
- The single biggest reason America is failing in its response to the ... ›
- Jun 10 | The Opioid Crisis: We Need to Talk | East Providence, RI ... ›
- The Opioid Epidemic: America's Biggest Drug Crisis - Rehab Spot ›
- The Opioid Crisis - We Need to Talk › | 1 | 4 |
A residential development of 39 apartments in Urdorf, Switzerland, is demonstrating how pioneering technology can enable tenants to live comfortably with little-to-no electricity or heating costs. A range of ABB’s innovative solutions have been used to provide the homes with a year-round energy supply, which dramatically cuts the cost of running the apartments. Managed carefully, the energy budget included in the rent is sufficient to cover the cost of all inputs for an average household.
No fear of rising energy bills
Conserving energy is particularly important in this period of unprecedented utility costs. Switzerland, like other developed countries, is currently seeing natural gas, oil and electricity bills rising faster and more sharply than ever before. For this reason, financial experts are urging people to plan for considerably higher expenditure in the future.
While many people are concerned about the high costs of electricity and heat for their homes, the tenants of the Umwelt Arena Foundation development in Urdorf are remarkably relaxed. That is because, across the three buildings, energy is generated, stored, and saved through a variety of measures, including a building automation solution and energy measurement system, both provided by ABB. The resulting conservation of energy means that tenants of the Umwelt Arena Foundation who do not surpass the energy budget of 2000 kilowatt hours per year have their energy requirements met without additional electricity or heating costs.
Summer energy surplus is fed into the supply grid
During the summer months, the solar installation produces surplus electricity that is not required on site. The photovoltaic system on the roofs and facades, along with the hybrid box, is essential to ensure the energy supply of the buildings. The surplus power is fed into the grid and used to produce methane gas from water and sewage gas through a ‘power-to-gas’ plant. The plant, which is run by Limeco in neighbouring Dietikon, is probably the largest plant of its kind in Europe.
Using this renewable gas, a hybrid box in the basement produces electricity and heat during the winter months for all three buildings. In addition, the hybrid box uses ground probes, much like those found in heat pumps. The probes are inserted into five boreholes: one shorter summer probe and one longer winter probe. The summer probe is inserted at a depth of 130 metres and deposits excess heat into the upper area of the ground during the summer, which has the additional benefit of cooling the apartments. The winter probe, at a much greater depth of 250 metres, produces a higher temperature yield from the ground thanks to the pre-heated upper area.
The hybrid box and the ABB AC500 programmable logic controller
The hybrid box technology being used in Urdorf is a completely new concept. It is a smart, predictive energy centre with the ABB AC500 programmable logic controller as the ‘brain’ which manages the system.
Hybrid wind-solar on the roof
A small hybrid wind-solar power plant on the roof of each building produces electric power from a combination of wind and sun. It operates even when the sun is low, through snow and rain, in winter and at night. It produces enough electricity to power the energy-efficient elevator within the building.
ABB-free@home® supports energy efficiency
Renting an apartment in the Urdorf development includes an energy budget that amounts to about 2,000 kilowatt hours per year. This corresponds to about half the normal consumption of a Swiss four-person household, so tenants also need to be mindful of their consumption and take advantage of additional energy-saving measures to keep costs as low as possible. This is where the ABB-free@home® system helps.
Integrated into the superstructure is the ABB-free@home®, a smart home system which enables residents to control lights and blinds while providing continual updates about their electricity, heating, and hot water consumption.
The use of blinds is particularly important because the targeted shading of sunlight, both in summer and winter, can save energy. Closed blinds add a layer of insulation during the heating period at night and in rooms that are not in use. Blinds also improve the indoor climate at low cost for the purposes of heating and air conditioning. Manually raising and lowering blinds is laborious and difficult to implement consistently, so the ABB-free@home® system facilitates the process: blinds can be operated by the tenant manually from the comfort of their sofa or automatically by a configured scenario.
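The kind of rule a configured blind scenario might apply can be sketched as well. This is a hypothetical Python example — it is not the ABB-free@home® configuration format, and the thresholds are invented:

```python
def blind_position(season, sun_w_per_m2, room_occupied, is_night):
    """Return a blind position from 0 (fully open) to 100 (fully closed).
    Rules of thumb: close blinds at night and in unused rooms for extra insulation,
    shade strong summer sun to avoid cooling costs, admit winter sun for free heat."""
    if is_night or not room_occupied:
        return 100
    if season == "summer" and sun_w_per_m2 > 700:
        return 80
    if season == "winter" and sun_w_per_m2 > 300:
        return 0
    return 30

print(blind_position("summer", sun_w_per_m2=850, room_occupied=True, is_night=False))  # 80
```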
According to the architect of the development, René Schmid, “The entire residential development is designed for efficiency. The smart home solution, ABB-free@home® plays an important role in the overall concept, in which the occupant is actively supported and informed in the efficient use of energy.”
The ABB-Welcome camera-based door entry system has also been integrated, providing additional security and convenience for tenants. In addition, the entire main and low-voltage distribution in the superstructure comes from ABB.
Behaviour accounts for 30% of energy savings
According to Walter Schmid, President of Stiftung Umwelt Arena Schweiz, “Experience shows that tenants can save about 30% of the energy consumption without any loss of comfort through their behaviour. With a smart home system this can be implemented far more easily and consistently than manually.”
The installed ABB energy metering systems also collect consumption data for an energy management system. Residents are kept fully informed and aware by accessing this information via an app.
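With the consumption data the meters collect, checking progress against the roughly 2,000 kilowatt-hour annual budget is simple arithmetic. A small Python sketch of the idea (hypothetical — the calculations inside the actual app are not published):

```python
def budget_status(kwh_used_so_far, day_of_year, annual_budget_kwh=2000):
    """Compare actual consumption with a straight-line share of the annual budget
    and project the year-end total from the usage so far."""
    expected_to_date = annual_budget_kwh * day_of_year / 365
    projected_year_end = kwh_used_so_far / day_of_year * 365
    return {
        "expected_to_date_kwh": round(expected_to_date, 1),
        "projected_year_kwh": round(projected_year_end, 1),
        "on_track": projected_year_end <= annual_budget_kwh,
    }

print(budget_status(kwh_used_so_far=560, day_of_year=120))
```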
There are additional measures in place: a smart ventilation system, showers with heat recovery and an LCD display of hot water consumption, LED lighting with motion sensors and energy efficient household appliances. These combined with the ABB solutions ensure that, with a little care, the energy budget included in the rent is sufficient for most households.
Innovative building technology will help achieve climate targets
In Switzerland it is estimated that buildings account for 40% of primary energy consumption with the building sector responsible for around 25% of all greenhouse gas emissions. The sharp rise in energy prices highlights the benefits of energy-saving technology for individual tenants and that technology is readily at hand to address such issues. CO2-neutral superstructures like the one at Urdorf also offer real life examples of how the building sector can help to achieve climate targets.
Exhibition on the project in the Umwelt Arena in Spreitenbach
The Umwelt Arena innovations, which have been realized with exhibition partners such as ABB, are presented in an exhibition. The exhibition "Building 2050" presents in detail the project with no energy costs for tenants. It is open for individual visitors Wednesday to Sunday; themed tours can be booked in advance for groups, associations, and companies.
A+ Exam Objective 5.4 Given a scenario, troubleshoot video, projector, and display issues.
To go back to the table of content for Main domain 5.0, click here.
This installment of our A+ study guide covers A+ 220-1101 exam sub-objective 5.4 – “Given a scenario, troubleshoot video, projector, and display issues.”
Some of the most interesting and challenging problems are related to accurately displaying computer output and images. We will discuss a few of these problems below.
Incorrect data source
The first sign of an incorrect data source is completely illegible output or “No Signal” displayed on the screen. Make sure the correct video format is being sent to the display or projector. Windows Display settings can be used to correct illegible video. Check the resolution and make sure it is within the capabilities of the display. An incorrect setting may be overdriving the output device.
Physical cabling issues
The correct cabling must be used for quality output. For example, an HDMI cable cannot be connected to a device that doesn’t support it. When connecting a computer to a projector or display, the most basic connection will most often be VGA. A three-connector RCA cable, where one connector carries the video and the other two are for right and left audio, may also be seen.
Burned-out bulb
A burned-out bulb is one of the easier issues to diagnose. Turn the unit on and examine all the settings. Note that a projector may have a separate switch for the bulb. If the settings and connections are correct and the fans can be heard spinning, the bulb needs to be replaced.
Fuzzy image
First, double-check your connections. Fuzziness in a standard display or flat panel may be corrected in Windows Settings. Open Display and check the Scale and layout settings for an unusual display resolution, adjusting if necessary. The recommended resolution will give the best result. If you find the display is still fuzzy, choose Advanced scaling settings and turn on "Let Windows try to fix apps so they're not blurry". That sounds straightforward enough! If you have a projector-related issue, you may just need to properly focus the lens. Historians will recognize the RCA "Indian Head" test pattern. The pattern shown below was used to calibrate early televisions for horizontal and vertical alignment and sharpness. It was also transmitted by television stations when they signed off for the day. Today a newer color bar image is used to calibrate screens. In the newer screen calibration pattern, the SMPTE color bars are shown side by side.
RCA Indian Head Test Pattern (left); SMPTE color bars (right)
Burn-in
Burn-in occurs when a static image is displayed for hours and the image eventually burns into the phosphor layer on the screen. Burn-in has plagued phosphor CRT monitors and output devices since the RCA "Indian Head" test pattern shown above was introduced in the 1940s. The video game Pong replaced the test pattern as the screen killer of the early 1970s. Burn-in damage on CRTs is permanent.
Burn-in is still an issue for LCD panels, OLEDs, and plasma screens. In addition, since devices such as smartphones and smartwatches use OLED and plasma technology, burn-in can affect a wide range of users. Fortunately, burn-in can be repaired or reduced in the majority of cases through the use of pixel-shifting techniques and screensavers that display a constantly changing image or color pattern. In addition, apps can wash away any residual images (ghosts) by displaying a bright solid white screen for a specified length of time.
Burn-in can also manifest as pixel degradation, where each pixel begins to slowly lose its luminance. However, pixel degradation occurs after thousands of hours of screen time, which is often past the lifecycle of the device.
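Pixel shifting is straightforward to picture: the screen content is nudged by a pixel or two at regular intervals so that no static element sits on exactly the same pixels for hours. A rough Python sketch of the idea (illustrative only — not any vendor's actual implementation; the offsets and cycle are invented):

```python
import itertools

# A repeating cycle of tiny offsets; display firmware applies one of these to the
# whole frame every few minutes so static content never sits on the same pixels.
OFFSETS = itertools.cycle([(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1),
                           (-1, 0), (-1, -1), (0, -1), (1, -1)])

def shifted_position(x, y, offsets=OFFSETS):
    """Return where a nominally fixed element at (x, y) is actually drawn this cycle."""
    dx, dy = next(offsets)
    return x + dx, y + dy

# Example: a status icon nominally drawn at (100, 50) drifts slightly around that point.
for _ in range(4):
    print(shifted_position(100, 50))
```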
Dead pixels
A TV screen is composed of thousands of individual pixels that each display a color. Each pixel in the display can produce solid red, solid green, and solid blue colors in any combination and level necessary. However, if a pixel is black and doesn't display any color, the pixel is considered to be dead and unfortunately cannot be repaired by the user. In some rare cases, dead pixels may resolve themselves, but usually a dead pixel is completely dead. As a technician, a single dead pixel doesn't require immediate attention unless there are a dozen dead pixels or so within a few square inches. In that case, dead pixels should be taken care of through the device's warranty.
Flashing screen
First, double-check the connections when a screen is flashing or flickering.
Next, determine if the flashing or flickering is due to the hardware or the software. Launch Task Manager and if Task Manager flickers along with everything else, the most likely culprit is the device driver. In this case, update the device driver for the video adapter. If the problem persists, use Device Manager to uninstall the display adapter. Then reboot the device and Windows will reinstall the display driver, hopefully with the latest driver.
If Task Manager does not flicker while the rest of the screen does, this indicates an app compatibility issue. In this case, uninstall or update the offending app. If the app has been recently installed within the past day or so, the app may have a compatibility issue with the video adapter. Check the display driver and if that does not resolve the problem, uninstall the app and reboot the device.
Incorrect color display
There are several possible causes to investigate when incorrect colors are displayed on the monitor. The color display is a product of many components including the screen resolution, color bit depth, brightness, and contrast. First, the resolution setting controls the number of pixels on the screen. For example, consider a monitor with a recommended resolution of 1920×1080. If the resolution of this monitor is changed to 800×600, the pixels will be consolidated into groups creating an image with less detail.
Otherwise, the remaining display settings can be adjusted through the monitor and the video adapter. The monitor has a push-button menu-driven interface with reasonably good adjustment capability. Next, the video adapter can be calibrated using Windows 11 tools. In the image below, the AMD video adapter software (included with the card) is shown along with some of the available adjustments.
Audio issues
There are a few things to check when there's an audio issue. Are the speakers properly connected? Is the volume set to a reasonable level and not muted? Are the balance, bass, and treble set to a level that will not cause distortion?
Dim image
A dim image can occur from poor connections, failing illumination such as a failing LCD backlight, or incorrect settings. Some apps will actually dim the screen to achieve the desired output.
Intermittent projector shutdown
Intermittent projector shutdown is usually heat related. In this case, the projector or the bulb has turned off to prevent overheating which is often indicated by a red on/off indicator LED. A clogged air filter can also force a projector to turn off prematurely in order to prevent damage. Otherwise, the projector may have a standby setting that will kick in during periods of inactivity.
That’s all for 5.4. See you in 5.5!
It's amazing that you're sitting reading these words at your computer; back in the 15th century, it would have been just as amazing to be reading
them in a book. That was when printing technology hit the big time
and the invention of the modern printing press made it possible for
books to be reproduced in their hundreds and thousands instead of
being copied out laboriously, one at a time, by hand. Although
newspapers, books, and all kinds of other printed materials are now
shifting online, printing is just as important today as it's ever
been. Look around your room right now and you'll see all kinds of
printed things, from the stickers on your computer to the T-shirt on
your back and the posters on your wall. So how exactly does printing
work? Let's take a closer look!
Photo: Potato printing: This is printing the way most of us learn it. It's an example of relief printing in which the ink is applied to a raised surface (the parts of the potato surface that haven't been cut away) before the paper is pressed onto it.
Printing means reproducing words or images on paper, card,
plastic, fabric, or
another material. It can involve anything from making a single
reproduction of a priceless painting to running off millions of
copies of the latest Harry Potter. Why is it called printing? The
word "printing" ultimately comes a Latin word, premĕre, which
means to press; just about every type of printing involves pressing
one thing against another.
Although there are many different variations, typically printing involves
converting your original words or artwork into a printable form,
called a printing plate, which is covered in ink and then pressed
against pieces of paper, card, fabric, or whatever so they become
faithful reproductions of the original. Some popular forms of
printing, such as photocopying and
inkjet and laser printing, work
by transferring ink to paper using heat or static electricity and we
won't discuss them here; the rest of this article is devoted to
traditional printing with presses and ink.
Photo: A typical old-fashioned, wooden printing press, as used by none other than Benjamin
Franklin around 1730. Photo from Carol M. Highsmith's America Project in the Carol M. Highsmith Archive,
courtesy US Library of Congress.
Printing is hard, physical work so it's usually done with the help of a
machine called a printing press. The simplest (and oldest) kind of
press is a large table fitted with an overhead screw and lever
mechanism that forces the printing plate firmly against the paper.
Hand-operated presses like this are still occasionally used to produce small volumes of printed materials.
At the other end of the scale, modern presses used to print books,
newspapers, and magazines use cylinder mechanisms rotating at
high-speed to produce thousands of copies an hour.
Animation: How a traditional printing press works. 1) You put the original item you want to print from (typically metal type, black) face up on a table (light gray) and cover it evenly with ink (blue). You put the paper you want to print onto in a wooden frame and slide it along the table under the press. 2) The press consists of two blocks (dark and light gray) held together by a screw mechanism supported by a sturdy wooden frame (brown). 3) When you turn the lever, the lower block, called the platen (4), screws downward and presses the frame, containing the paper, tightly and evenly onto the inked type (5). Finally, you loosen the screw and remove the printed paper from the frame.
Types of printing
The three most common methods of printing are called relief (or letterpress),
gravure (or intaglio), and offset. All three involve transferring
ink from a printing plate to whatever is being printed, but each one
works in a slightly different way. First, we'll compare the three
methods with a quick overview and then we'll look at each one in
much more detail.
Relief is the most familiar kind of printing. If you've ever made a
potato print or used an old-fashioned typewriter, you've used
relief printing. The basic idea is that you make a reversed,
sticking-up (relief) version of whatever you want to print on the
surface of the printing plate and simply cover it with ink. Because
the printing surface is above the rest of the plate, only this part
(and not the background) picks up any ink. Push the inked plate
against the paper (or whatever you're printing) and a right-way-round printed
copy instantly appears.
Gravure is the exact opposite of relief printing. Instead of making a raised
printing area on the plate, you dig or scrape an image into it (a
bit like digging a grave, hence the name gravure). When you want to
print from the plate, you coat it with ink so the ink fills up the
places you've dug out. Then you wipe the plate clean so the ink is
removed from the surface but left in the depressions you've carved
out. Finally, you press the plate hard against the paper (or other
material you're printing) so the paper is pushed into the inky
depressions, picking up a pattern only from those places.
Offset printing also transfers ink from a printing plate onto paper
(or another material), but instead of the plate pressing directly against the paper, there is an
extra step involved. The inked plate presses onto a soft roller,
transferring the printed image onto it, and then the roller presses
against the printing surface—so instead of the press directly
printing the surface, the printed image is first offset to the
roller and only then transferred across. Offset printing stops the
printing plate from wearing out through repeated impressions on the paper,
and produces consistently higher quality prints.
Photo: The three most common types of printing: Left: Relief—Raised parts of the printing block (gray) transfer the ink (red) to the paper (white rectangle with black outline at the top) when the two are pressed together. Middle: Gravure—Grooves dug into the printing block transfer the ink to the paper when the paper is pressed tightly into them. Right: Offset—A rotating cylinder (blue) transfers ink from the printing plate to the paper without the two ever coming into contact.
For over 500 years now, most high-volume, low-quality printed material has been
produced with letterpress machines, which are more or less
sophisticated versions of the printing press Johannes Gutenberg
invented back in the 15th century. In the simplest kind of
letterpress, known as a flatbed press or
platen press, the paper is
supported on a flat metal plate called the platen, which sits
underneath a second flat plate holding a relief version of the item
to be printed (the printing plate, in other words). The printing
plate is covered with ink (either by hand, with a brush or by an
automated roller) before the paper is pressed tightly against it and
then released. The process can be repeated any number of times.
Photo: The keys in an old-fashioned typewriter produce images on paper by relief printing. When you press a key, these metal type letters flip up and press a piece of inked fabric against the paper. The letters are cast in reverse so, when they hit the paper, the printed impression comes out the right way round. Typewriters like this are now largely obsolete, but great fun to use—if a little noisy—when you can find them!
Flatbed presses are generally the slowest of all printing methods, because
it takes time to keep lifting and inking the printing plate and
loading and removing sheets of paper. That's why most
letterpresses use rotating cylinders in place of one or both of the
flat beds. In one type of machine, known as a flatbed cylinder
press, the printing plate is mounted on a flat bed that shifts
back and forth as a cylinder moves past it, inking it, pressing the
paper against it, and then lifting the printed paper clear again.
That speeds up printing considerably, but loading and removing the
paper is still a slow process. The fastest letterpresses, known as
rotary webfed presses, have curved printing plates wrapped
around spinning metal cylinders, which they press against paper that
feeds automatically from huge rolls called webs. Newspapers are
printed on machines like this, which typically print both sides of
the paper at once and can produce thousands of copies per hour.
The simplest kind of gravure printing is engraving, in which an artist draws a
picture by scratching lightly on the surface of a copper plate that
has been thoroughly coated with an acid-resistant chemical.
Lines of shiny copper are revealed as the artist scrapes away. The plate is then dipped in acid
so the exposed copper lines are etched
(eaten much deeper into the metal) by the acid, while the rest of
the plate remains unchanged. The acid-resistant chemical is then
washed off leaving a copper printing plate, from which a number of
copies, called etchings, can be printed. Traditional engraving and
etching is quite a laborious process, so it's used mainly by
artists to produce relatively small volumes of (originally) hand-drawn pictures.
A similar but much quicker and more efficient process called
photogravure is used
commercially to produce large volumes of high-quality prints.
Instead of being slowly and painstakingly drawn, the image to be
printed is transferred photographically onto the copper printing
plate ("photo") and then etched into it ("gravure"). Once
the plate has been produced, it's used to make prints on either a
flatbed press (fed with single printed sheets) or a rotary web press
known as a rotogravure machine. Glossy magazines and cardboard
packaging containers are often printed this way.
The most common type of printing today uses a method called offset
lithography (typically shortened to "offset litho"), which is a
whole lot simpler than it sounds. As we've already seen, offset
simply means that the printing plate doesn't directly touch the
final printed surface (the paper or whatever it might be); instead,
an intermediate roller is used to transfer the printed image from
one to the other. But what about lithography?
Photo: A modern offset printing press used to produce small runs of a weekly newspaper. Note the final printed copy on the top roller and the offset cylinder in the middle just underneath it. Photo by Senior Airman Dilia DeGrego courtesy of US Air Force
Lithography literally means "stone-writing," a method of printing from the surface
of stones that was invented in 1798 by German actor and playwright Alois Senefelder. He took a large
stone and drew a design on it with a wax crayon. Then he dipped the
stone in water so the parts of the stone not covered in crayon
became wet. Next, he dipped the design in ink, so the ink stuck only
to the waxed parts of the stone and not the wet parts. So now he
had an inked "printed plate" (or printing stone, if you prefer)
that he could press against paper to make a copy. Lithography avoids
the need to make a traditional printing plate, as you need for both
relief and gravure printing.
Photo: A small offset printing press. Note the paper sheets feeding in from the left and the rollers that transfer the paper and copy the image. Photo by J. Pond courtesy of
US Department of Defense and
Modern offset lithography printing presses use an updated version of the same
basic idea in which the stone is replaced with a thin metal printing
plate. First, the image to be printed is transferred
photographically to the plate. The parts of the plate from which the
image is printed are coated with lacquer (clear varnish), so they attract ink, while
the rest of the plate is coated with gum, so it attracts water. The
metal plates are curved around a printing cylinder and press against
a series of rollers, which dampen them with water and then brush
them with ink. Only the lacquered parts of the plate (those that will
print) pick up ink. The inked plate presses against a soft rubber
(offset) cylinder, known as the blanket cylinder, and transfers its
image across. The blanket cylinder then presses against the paper
and makes the final print. High-speed offset lithography presses are
web-fed (from paper cylinders) and can produce something like 20km
(~12 miles) of printed material in an hour!
Other types of printing
Relief, gravure, and offset are used to print the overwhelming majority of
books, magazines, posters, headed stationery, and other printed
materials that surround us, but several other methods are used for
printing other things. T-shirt designs, for example, are usually
produced with a process called silk-screen printing (sometimes
called serigraphy). This involves covering the article to be printed
(something like a blank cotton shirt) with a mesh-screen and a
stencil, then wiping ink over the mesh with a brush. Ink transfers
through the mesh to the fabric below except where it's blocked
from doing so by the pattern on the stencil. Collotype (also called
photographic gelatin) is a less commonplace technique in which a
gelatin-coated printing plate is made from a high-quality original
using a kind of photographic method. It produces finely detailed
reproductions and is still used for making high-quality prints of artworks.
Black and white, grayscale, and color printing
Photo: Halftones: Here's the same photo up above as a newspaper might print it using different sized areas of black ink. If you squint, or look from a distance, you can see that it looks like it's been printed with many different shades of gray, even though it's really using only one color of ink (black). In practice, newspapers use much smaller dots than this—we've exaggerated greatly so you can see how it works. Photo by Senior Airman Dilia DeGrego courtesy of US Air Force, with simulated halftone treatment by explainthatstuff.
Traditionally, printing presses used a single color of ink (black) to
produce basic black-and-white text, but printing photographs and artworks
was much more difficult because they really needed to be printed either with
many colors or many shades of gray. That problem was solved when people discovered how to simulate shades of gray using
what's called the halftone method. It's a simple way of converting photographs and drawings into images made from tiny
black dots of differing sizes to give the impression they're made from many different shades of gray.
In other words, it's a way of making a convincing gray-scale image using only black ink, and it relies on fooling your eyes
through an optical illusion.
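The dot-size trick behind halftones is easy to sketch in code: darker grays get bigger black dots, lighter grays smaller ones. A toy Python example (illustrative only — real halftoning also controls dot spacing and screen angle, which this ignores):

```python
def halftone_radius(gray, cell_size=10):
    """Map a gray level (0 = black, 255 = white) to the radius of a black dot
    printed inside a cell_size x cell_size cell: the darker the gray, the bigger the dot."""
    darkness = 1 - gray / 255              # 1.0 for solid black, 0.0 for white
    max_radius = cell_size / 2
    # the square root keeps the inked area (not the radius) proportional to darkness
    return darkness ** 0.5 * max_radius

for gray in (0, 64, 128, 192, 255):
    print(f"gray {gray:3d} -> dot radius {halftone_radius(gray):4.2f}")
```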
To print in full color, you need to use at least four different inks: three
primary ink colors and black. Most people know that you can produce
light of any color by adding together different amounts of
red, green, and blue light; that's how a
television or LCD computer screen works.
Colored inks work in a different way by subtracting
color: they absorb some of the light that falls on them and reflect
the rest into our eyes—so the color they appear is effectively
subtracted from the original, incoming light. If you have an inkjet
printer with replaceable cartridges, you'll know that you can print
any color on white paper using the three colors cyan (a kind of
turquoise blue), magenta (a reddish purple), and yellow.
Theoretically, you can produce black with equal amounts of cyan,
magenta, and yellow, but in practice you need a fourth ink as well to
produce a deep convincing black. That's why full-color printing is often
referred to as the four-color process,
sometimes as CMYK printing (Cyan, Magenta, Yellow, and K meaning "key," a printer's word that usually means black),
and sometimes (since each color has to be printed separately) as color-separation printing.
Just like with black-and-white printing, the halftone process can also be
used to create varying shades of color.
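The subtractive relationship between on-screen colors (red, green, blue) and the four printing inks can be shown with the common naive RGB-to-CMYK conversion. A minimal Python sketch (illustrative only — commercial presses rely on calibrated color profiles rather than this simple arithmetic):

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) conversion: the black (K) component is pulled
    out first, so the black ink does work the three colored inks would otherwise share."""
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    if k == 1:                              # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return round(c, 3), round(m, 3), round(y, 3), round(k, 3)

print(rgb_to_cmyk(255, 0, 0))    # pure red comes out as (0, 1, 1, 0)
print(rgb_to_cmyk(30, 60, 90))   # a dark blue-gray needs plenty of black ink
```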
Photo: Color printing: With black, magenta, cyan, and yellow ink, you can print any color you like.
Why is color printing more expensive?
Printing in color costs much more than printing in black-and-white, for
various reasons. First, and most obviously, there are four inks
involved instead of just one and each is printed by its own
printing plate, so the cost of making the printing plates alone is
several times greater. Second, color printing presses need to be able
to print the four inks on the page one after another, in perfect
alignment, so they need to be considerably more sophisticated and
precise. Third, it takes extra time and effort for the person
operating the printer to check that the colors have been aligned and
reproduced successfully, so there's more human effort involved.
Finally, because color printing is often used for reproducing
photographs, heavier, glossier, and more expensive paper is usually
needed to do it justice.
Sometimes designers get around the cost of color printing by using different
colored papers and inks. So, instead of printing black ink on white
paper, they might print black ink on red paper or red ink on yellow
paper. That achieves a colorful effect but keeps the cost down by
still using only a single color of ink. Another option is to
use spot-color printing, where a single,
specially mixed color is applied to a black-and-white document—though
that is labor intensive and can work out even more expensive than four-color
printing. Another alternative is to use two- or three-color printing, in which pages are printed
with black and one or two other colors. If you were using just cyan
and magenta inks, for example, you could create a whole range of reds
and blues and print quite colorful pages without the expense of
four-color printing. Another option is to print some pages with the
four-color process and other pages with only black-and-white. Books
that contain photographs are often made this way, with the art pages
printed through the four-color process on glossy paper that's bound
inside text pages printed with the black-and-white process on ordinary paper.
Who invented printing?
If your immediate answer was "Johannes Gutenberg," you're only
half-right. As this whistle-stop tour through printing history will
show you, the celebrated German appeared only halfway through the story—one of many people who made
printing what it is today.
Artwork: A drawing of Ottmar Mergenthaler's revolutionary Linotype typesetting machine, taken from his original
US patent #543,497: Linotype machine,
courtesy of US Patent and Trademark Office.
~3000-1000BCE: Ancient Babylonia: People use signet stones (stones with designs cut into
their surface) dipped in pigment (paint) to print their signatures
in an early example of gravure printing.
~30BCE–500CE: Ancient Rome: Slaves laboriously copy out manuscripts by hand.
105CE: The Chinese invent the first paper, based on tree bark.
~500CE: The Chinese perfect printing from a single wooden block into which
designs are slowly and laboriously engraved.
~751CE: A book called Mugujeonggwang Daedaranigyeong (The Great Dharani Sutra) is printed with wooden blocks. Today, it's believed to be the oldest surviving printed text in the world.
~900CE: Wooden block printing is further developed in Goryeo (a kingdom of Korea that lasted from the 10th-14th centuries).
~1040CE (11th century): A Chinese printer called Bi Sheng invents the idea of printing with movable type. He makes lots of small clay blocks,
each containing a separate letter or character in relief, and rearranges them in a printing frame so he can print many different things. Unfortunately, because the Chinese language can use thousands of different characters, the idea doesn't immediately catch on; printers prefer to carry on using carved wooden blocks.
11th-13th centuries: In Goryeo, scholars produce the Tripitaka Koreana, a collection of Buddhist scriptures carved onto some 81,000 wooden printing blocks.
12th-14th centuries: The technology of papermaking is transferred from eastern to western countries.
Late 1300s: Block printing is first used in Europe.
1377: In Goryeo, a two-volume book called Jikji (an anthology of Zen Buddhist teachings) is printed with metal type almost 80 years before Gutenberg. Only one copy survives, currently preserved in the National Museum of Korea in Seoul.
1450: Johannes Gutenberg develops the first modern printing press using movable metal type (with each small printing letter or character cast in relief out of metal). Note that he didn't invent either the printing press or movable type: his innovation was to bring these things together in a powerful new way that caught on and spread rapidly through the world.
1803: Brothers Henry and Sealy Fourdrinier invent the modern papermaking machine, based on a series of huge rollers arranged in a row.
1814: Long before electric power becomes widely available,
Friedrich Koenig invents the steam-driven printing press to speed up the laborious printing process. It's a type of flatbed cylinder press, in which
the cylinder is powered by a steam engine.
1846: Richard March Hoe develops the rotary press for newspaper printing and later perfects it so it can print on both sides of the paper at up to 20,000 pages per hour.
1863: William Bullock invents the web-fed rotary press for printing newspapers from giant rolls of paper.
1868: Christopher Latham Sholes develops a machine for printing personal letters and other writing—the typewriter with its QWERTY keyboard.
1880s: American brothers Max and Louis Levy develop halftone printing.
1886: Ottmar Mergenthaler invents the
Linotype machine, an automated way of creating "hot metal" type in a printing plate by casting a whole line of a book or magazine at a time. Typesetting, as this innovation is
known, allows newspapers to be printed more quickly and efficiently than ever before.
Photo: A vintage Linotype machine viewed from the side.
The person operating the machine enters text on the keyboard (at the bottom).
The machine then creates a mold of each line of text that's filled with molten metal to form a
line of type that can be used for printing.
Photo by Carol M. Highsmith, from Gates Frontiers Fund Colorado Collection within the Carol M. Highsmith Archive,
Library of Congress, Prints and Photographs Division.
1887: American inventor Tolbert Lanston develops a rival typesetting system called Monotype, which sets each letter or character as a
separate piece of type instead of a whole line.
1905: Ira Rubel develops offset printing.
1912: Walter Hess of Switzerland is one of the first people to experiment with
lenticular printing (putting lenses over printed paper to make a 3D effect).
1938: Chester Carlson invents the basic principle of the photocopier (a way of reproducing documents almost instantly using static electricity), though another decade passes before the first commercial copier goes on sale, and the invention isn't widely taken up until Xerox markets it in the 1960s and 1970s.
1949: Phototypesetting machines are invented, which produce type by photographic methods instead of using "hot metal." First is the Lumitype, invented at ITT by Frenchmen René Higonnet and Louis Moyroud, and later developed by the American Lithomat corporation, which markets the machine as the Lumitype Photon.
1967: Gary Starkweather of Xerox gets the idea to develop a laser printer, and is finally granted a patent 10 years later. The machine he produces is very large and cumbersome by today's standards.
1980s: Scott Crump of Stratasys pioneers the modern approach to 3D printing called fused deposition modeling (building up a 3D object from hot plastic, layer by layer).
1984: Steve Jobs launches the Apple Macintosh computer (loosely based on the earlier Xerox Alto), which, partnered with a compact and (relatively) affordable laser printer, begins a revolution in desktop publishing.
1989: Tim Berners-Lee writes a proposal for an online publishing system called the World Wide Web, which makes it possible to publish documents instantly and view them anywhere else in the world, seconds later, using the Internet.
Photographs of old printing equipment: A fascinating collection of photos showing historic printing equipment and prints from the Edinburgh: City of Print archive, compiled by the City of Edinburgh Museums in Scotland.
Gutenberg Museum: A museum dedicated to the founding father of modern printing in Mainz, Germany. Click on the little English flag at the top for the English version of the text.
Once on the train, we were discussing an interesting phenomenon we had observed during our investigation of Fischer-Tropsch synthesis in the presence of poisons such as H2S, which we studied in the framework of a project on biosyngas conversion. The Ni/Al2O3 catalyst was deactivated by the addition of a sufficient amount of H2S in inert gas; however, the catalyst was still active after the addition of the same amount of H2S during Fischer-Tropsch synthesis. We explained this by the protection of the Ni nanoparticles by adsorbed CO. We came to the idea that it would be interesting to create images of the reacting molecules on the catalyst surface using poisoning agents. This should enable selective adsorption and catalytic transformation of only the targeted reacting molecules, while larger molecules would not be able to adsorb and react on the catalyst. The procedure would resemble an imprinting process.
Several years afterwards, we tried to demonstrate the feasibility of this concept by poisoning metallic catalysts (Pd, Co, Ni…) carrying pre-adsorbed aromatic molecules (toluene, mesitylene, and triisopropylbenzene) with a broad range of poisoning molecules such as H2S, O2, etc. However, we could not detect any effect of the pre-adsorption on selectivity. We realized that the high mobility of the poisons on the metal surface erased the effect of imprinting.
Later, we worked on an industrial project to develop a stable catalyst for the hydrogenation of 3-(dimethylamino)propionitrile.
The product of the reaction, 3-(dimethylamino)-1-propylamine (DMAPA), turned out to be a strong poison for metallic catalysts, with almost full deactivation after the first reaction cycle. The reason for this behavior is the high basicity of the amino group, which leads to strong interaction with metal sites and irreversible deactivation. In our article published in Nature Catalysis [2], we proposed using DMAPA as a poison for imprinting aromatic molecules over a Pd catalyst and found that our imprinting strategy works very successfully for metal catalysts. Using toluene as a template, we could hydrogenate only this molecule, with significantly lower activities for hydrogenation of mesitylene or triisopropylbenzene due to the steric hindrance induced by the amine surrounding the active islands (Figure 1). Finally, we used this concept for the removal of carcinogenic benzene from an aromatic mixture by selective hydrogenation of this molecule.
Catalyst imprinting is different from selective poisoning, in which the catalyst is treated with poisoning agents to increase selectivity or activity [3, 4]. In selective poisoning, the effect of deactivation is mainly related to modification of the electronic state of the metal or of the accessibility of the reagents to the active sites. The catalyst in this case can become more selective for the transformation of similar molecules but cannot be specific to a single molecule, which is what we observe in the case of imprinting.
Figure 1. Scheme of imprinting process for metallic catalyst
Selectivity control requires a new approach: the development of "smart" catalysts, similar to enzymes, that are able to recognize raw molecules and transform them selectively into the target products. Indeed, the high selectivity of enzymes is explained by the confinement of active sites inside a protein matrix, which provides the correct orientation of the reagents and the right chemical environment before interaction with the active site.
As an attempt to mimic enzymatic systems, molecular imprinting technology has emerged as a new and rapidly developing area that aims to reproduce the environment of enzymatic cavities by creating molecularly imprinted polymers complementary to the target molecules in shape, size, and functional groups. In this case, molecularly imprinted polymers (MIPs) with recognition sites in polymeric matrices have been developed by using template molecules and functional monomers containing functional groups, which polymerize around the templates. However, similar to porous materials, MIP catalysts with recognition cavities inside an organic or inorganic matrix suffer from diffusion problems and low thermal stability, and can be used only for complex catalysts.
Highly desirable would be the development of a process for modifying existing heterogeneous catalysts to adapt them to specific catalytic reactions, significantly increasing the selectivity of the processes and decreasing the amount of side products. The SMI (surface molecular imprinting) strategy provides numerous opportunities to perform selective reactions over different types of metallic and acid-base heterogeneous catalysts through proper selection of the template and poisoning molecules and through design of the imprinting process in special reactors with controlled addition of imprinting agents. The main advantage of this concept is that it can create extremely selective chemical processes on the basis of existing industrial catalysts, avoiding the development of new catalytic materials.
- Legras, B.; Ordomsky, V. V.; Dujardin, C.; Virginie, M.; Khodakov, A. Y., Impact and detailed action of sulfur in syngas on methane synthesis on Ni/γ-Al2O3 ACS Catalysis 2014, 4 (8), 2785-2791, https://doi.org/10.1021/cs500436f.
- Wu, D.; Walid; Gu, Bang; Marinova, Maya; Hernández, Willinton Y; Zhou, Wenjuan; Vovk, Evgeny; Ersen, Ovidiu; Safonova, Olga; Addad, Ahmed; Nuns, Nicola; Khodakov, Andrei; Ordomsky, Vitaly, Surface molecular imprinting over the supported metal catalysts for size-dependent selective hydrogenation reactions. Nature Catalysis 2021, https://doi.org/10.1038/s41929-021-00649-3.
- Wu, D.; Wang, Q.; Safonova, O. V.; Peron, D. V.; Zhou, W.; Yan, Z.; Marinova, M.; Khodakov, A. Y.; Ordomsky, V., Lignin Compounds to Monoaromatics: Selective Cleavage of C-O Bonds over Brominated Ruthenium Catalyst. Angewandte Chemie International Edition 2021, 60, 12513-12523 https://doi.org/10.1002/anie.202101325.
- Niu, F.; Xie, S.; Bahri, M.; Ersen, O.; Yan, Z.; Kusema, B.; Pera-Titus, M.; Khodakov, A.; Ordomsky, V., Catalyst Deactivation for Enhancement of Selectivity in Alcohols Amination to Primary Amines. ACS Catalysis 2019, 9 (7), 5986-5997, https://doi.org/10.1021/acscatal.9b00864.
- Katz, A.; Davis, M. E., Molecular imprinting of bulk, microporous silica. Nature 2000, 403 (6767), 286-289, https://doi.org/10.1038/35002032.
- BelBruno, J. J., Molecularly Imprinted Polymers. Chemical Reviews 2019, 119 (1), 94-119, https://doi.org/10.1021/acs.chemrev.8b00171.
In physics, an electronvolt (symbol eV, also written electron-volt and electron volt) is the measure of an amount of kinetic energy gained by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equivalent to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 redefinition of the SI base units, this sets 1 eV equal to the exact value 1.602176634×10−19 J.
Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge q gains an energy E = qV after passing through a voltage of V. Since q must be an integer multiple of the elementary charge e for any isolated particle, the gained energy in units of electronvolts conveniently equals that integer times the voltage.
It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics, and high-energy astrophysics. It is commonly used with SI prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion (109) electronvolts; it is equivalent to the GeV.
An electronvolt is the amount of kinetic energy gained or lost by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. Hence, it has a value of one volt, 1 J/C, multiplied by the elementary charge e = 1.602176634×10−19 C. Therefore, one electronvolt is equal to 1.602176634×10−19 J.
The electronvolt (eV) is a unit of energy, but is not an SI unit. The SI unit of energy is the joule (J).
Relation to other physical properties and units
|Measurement||Unit||SI value of unit|
|Energy||electronvolt (eV)||1.602176634×10−19 J|
|Mass||eV/c2||1.783×10−36 kg|
|Momentum||eV/c||5.344×10−28 kg·m/s|
|Temperature||eV/kB||11604.518 K|
|Time||ħ/eV||6.582×10−16 s|
|Distance||ħc/eV||1.973×10−7 m|
By mass–energy equivalence, the electronvolt corresponds to a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c2, where c is the speed of light in vacuum (from E = mc2). It is common to informally express mass in terms of eV as a unit of mass, effectively using a system of natural units with c set to 1. The kilogram equivalent of 1 eV/c2 is: 1 eV/c2 = (1.602176634×10−19 J) / (2.99792458×108 m/s)2 ≈ 1.783×10−36 kg.
For example, an electron and a positron, each with a mass of 0.511 MeV/c2, can annihilate to yield 1.022 MeV of energy. A proton has a mass of 0.938 GeV/c2. In general, the masses of all hadrons are of the order of 1 GeV/c2, which makes the GeV/c2 a convenient unit of mass for particle physics: 1 GeV/c2 ≈ 1.783×10−27 kg.
The atomic mass constant (mu), one twelfth of the mass of a carbon-12 atom, is close to the mass of a proton. To convert to the electronvolt mass-equivalent, use the formula: 1 mu ≈ 931.494 MeV/c2.
By dividing a particle's kinetic energy in electronvolts by the fundamental constant c (the speed of light), one can describe the particle's momentum in units of eV/c. In natural units in which the fundamental velocity constant c is numerically 1, the c may informally be omitted to express momentum as electronvolts.
The energy–momentum relation E2 = (pc)2 + (m0c2)2, which in natural units (with c = 1) becomes E2 = p2 + m02,
is a Pythagorean equation. When a relatively high energy is applied to a particle with relatively low rest mass, it can be approximated as E ≈ pc in high-energy physics, such that an applied energy in units of eV conveniently results in an approximately equivalent change of momentum in units of eV/c.
The dimensions of momentum units are T−1LM. The dimensions of energy units are T−2L2M. Dividing the units of energy (such as eV) by a fundamental constant (such as the speed of light) that has units of velocity (T−1L) facilitates the required conversion for using energy units to describe momentum.
For example, if the momentum p of an electron is said to be 1 GeV, then the conversion to the MKS system of units can be achieved by: p = 1 GeV/c = (1×109 × 1.602176634×10−19 J) / (2.99792458×108 m/s) ≈ 5.34×10−19 kg·m/s.
In particle physics, a system of natural units in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: c = ħ = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented in units of inverse particle masses.
Outside this system of units, the conversion factors between electronvolt, second, and nanometer are the following: ħ = 6.582119569×10−16 eV·s and ħc = 197.3269804 eV·nm.
The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = ħ/τ. For example, the B0 meson has a lifetime of 1.530(9) picoseconds, a mean decay length of cτ = 459.7 μm, and a decay width of (4.302±0.025)×10−4 eV.
Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds.
Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy: E (eV) ≈ 1239.84 eV·nm / λ (nm), so that 1 eV corresponds to an infrared photon with a wavelength of about 1240 nm.
In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale: 1 eV / kB = (1.602176634×10−19 J) / (1.380649×10−23 J/K) ≈ 11604.5 K,
where kB is the Boltzmann constant.
The kB factor is assumed when using the electronvolt to express temperature; for example, a typical magnetic confinement fusion plasma is at 15 keV (kiloelectronvolts), which is equal to 174 MK (megakelvins).
As an approximation: kBT is about 0.025 eV (≈ 290 K/11604 K/eV) at a temperature of 20 °C.
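As a quick worked check of that approximation (taking room temperature to be about 293 K, i.e., 20 °C): kBT = (1.380649×10−23 J/K × 293 K) / (1.602176634×10−19 J/eV) ≈ 0.0253 eV, which rounds to the 0.025 eV quoted above.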
The energy E, frequency ν, and wavelength λ of a photon are related by E = hν = hc/λ,
where h is the Planck constant and c is the speed of light. This reduces to E (eV) = 4.135667696×10−15 eV·s × ν (Hz) = 1239.84 eV·nm / λ (nm).
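For example, a photon with a wavelength of 532 nm (a typical green laser wavelength, used here only as an illustration) has an energy of roughly 1239.84 eV·nm / 532 nm ≈ 2.33 eV.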
In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material.
|5.25×1032 eV||total energy released from a 20 kt nuclear fission device|
|12.2 ReV (1.22×1028 eV)||the Planck energy|
|10 YeV (1×1025 eV)||approximate grand unification energy|
|~624 EeV (6.24×1020 eV)||energy consumed by a single 100-watt light bulb in one second (100 W = 100 J/s ≈ 6.24×1020 eV/s)|
|300 EeV (3×1020 eV = ~50 J)||The first ultra-high-energy cosmic ray particle observed, the so-called Oh-My-God particle.|
|2 PeV||two petaelectronvolts, the highest-energy neutrino detected by the IceCube neutrino telescope in Antarctica|
|14 TeV||designed proton center-of-mass collision energy at the Large Hadron Collider (operated at 3.5 TeV since its start on 30 March 2010, reached 13 TeV in May 2015)|
|1 TeV||a trillion electronvolts, or 1.602×10−7 J, about the kinetic energy of a flying mosquito|
|172 GeV||rest energy of top quark, the heaviest measured elementary particle|
|125.1±0.2 GeV||energy corresponding to the mass of the Higgs boson, as measured by two separate detectors at the LHC to a certainty better than 5 sigma|
|210 MeV||average energy released in fission of one Pu-239 atom|
|200 MeV||approximate average energy released in the fission fragments when one U-235 atom undergoes nuclear fission.|
|105.7 MeV||rest energy of a muon|
|17.6 MeV||average energy released in the nuclear fusion of deuterium and tritium to form He-4; this is 0.41 PJ per kilogram of product produced|
|2 MeV||approximate average energy of a neutron released by the nuclear fission of one U-235 atom.|
|1.9 MeV||rest energy of up quark, the lowest mass quark.|
|1 MeV (1.602×10−13 J)||about twice the rest energy of an electron|
|1 to 10 keV||approximate thermal temperature, kBT, in nuclear fusion systems, like the core of the sun, magnetically confined plasma, inertial confinement and nuclear weapons|
|13.6 eV||the energy required to ionize atomic hydrogen; molecular bond energies are on the order of 1 eV to 10 eV per bond|
|1.6 eV to 3.4 eV||the photon energy of visible light|
|1.1 eV||energy required to break a covalent bond in silicon|
|720 meV||energy required to break a covalent bond in germanium|
|< 120 meV||approximate rest energy of neutrinos (sum of 3 flavors)|
|25 meV||thermal energy, kBT, at room temperature; one air molecule has an average kinetic energy of 38 meV|
|230 μeV||thermal energy, kBT, of the cosmic microwave background|
One mole of particles given 1 eV of energy each has approximately 96.5 kJ of energy – this corresponds to the Faraday constant (F ≈ 96485 C⋅mol−1), where the energy in joules of n moles of particles each with energy E eV is equal to E·F·n.
- ^ a b "2018 CODATA Value: electron volt". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 2019-05-20.
- ^ "2018 CODATA Value: elementary charge". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 2019-05-20.
- ^ Barrow, J. D. (1983). "Natural Units Before Planck". Quarterly Journal of the Royal Astronomical Society. 24: 24. Bibcode:1983QJRAS..24...24B.
- ^ Gron Tudor Jones. "Energy and momentum units in particle physics" (PDF). Indico.cern.ch. Retrieved 5 June 2022.
- ^ "Units in particle physics". Associate Teacher Institute Toolkit. Fermilab. 22 March 2002. Archived from the original on 14 May 2011. Retrieved 13 February 2011.
- ^ "CODATA Value: Planck constant in eV s". Archived from the original on 22 January 2015. Retrieved 30 March 2015.
- ^ What is Light? Archived December 5, 2013, at the Wayback Machine – UC Davis lecture slides
- ^ Elert, Glenn. "Electromagnetic Spectrum, The Physics Hypertextbook". hypertextbook.com. Archived from the original on 2016-07-29. Retrieved 2016-07-30.
- ^ "Definition of frequency bands on". Vlf.it. Archived from the original on 2010-04-30. Retrieved 2010-10-16.
- ^ Open Questions in Physics. Archived 2014-08-08 at the Wayback Machine German Electron-Synchrotron. A Research Centre of the Helmholtz Association. Updated March 2006 by JCB. Original by John Baez.
- ^ "A growing astrophysical neutrino signal in IceCube now features a 2-PeV neutrino". Archived from the original on 2015-03-19.
- ^ Glossary Archived 2014-09-15 at the Wayback Machine - CMS Collaboration, CERN
- ^ ATLAS; CMS (26 March 2015). "Combined Measurement of the Higgs Boson Mass in pp Collisions at √s=7 and 8 TeV with the ATLAS and CMS Experiments". Physical Review Letters. 114 (19): 191803. arXiv:1503.07589. Bibcode:2015PhRvL.114s1803A. doi:10.1103/PhysRevLett.114.191803. PMID 26024162.
- ^ Mertens, Susanne (2016). "Direct neutrino mass experiments". Journal of Physics: Conference Series. 718 (2): 022013. arXiv:1605.01579. Bibcode:2016JPhCS.718b2013M. doi:10.1088/1742-6596/718/2/022013. S2CID 56355240.
COVID-19, which is caused by SARS-CoV-2, poses a great threat to public health and the global economy [1]. Patients with COVID-19 generally raise antibodies against SARS-CoV-2 following infection, and the antibody level is positively correlated with the severity of disease [2]. Although it is believed that antibodies, particularly neutralizing antibodies, play a pivotal role in inhibiting SARS-CoV-2 replication in patients, it has also been argued that they may exacerbate COVID-19 through antibody-dependent enhancement (ADE) [3].
ADE has been documented for other viruses, including dengue virus (DENV), respiratory syncytial virus (RSV), measles virus, and feline infectious peritonitis virus (FIPV) [4]. In these cases, ADE increased the severity of disease either by enhanced antibody-mediated virus uptake into Fc gamma receptor (FcγR)-expressing phagocytic cells, leading to increased viral infection and replication (type I ADE), or by excessive antibody Fc-mediated effector functions or immune complex formation causing enhanced inflammation and immunopathology (type II ADE) [3]. Type I ADE normally requires productive viral infection of target immune cells, for example macrophages or monocytes in the case of FIPV in cats [6]. Type II ADE can occur without productive viral infection, albeit causing dysregulated immune activation of target cells [3]. In humans, FcγR is expressed broadly among the various leukocyte subsets, including macrophages, monocytes, B cells, and others, and modulates downstream immune responses upon binding to the Fc domain of an IgG antibody [8]. Consequently, these leukocytes could be potential targets of virus-induced ADE.
ADE has been well characterized in cats infected with FIPV, a feline betacoronavirus. Experimental infection of FIPV antibody-positive cats resulted in more severe disease, regardless of whether the antibodies were naturally acquired or vaccine acquired [6]. This ADE is closely related to increased viral replication and stronger inflammatory responses in viral target cells, including monocytes and macrophages, in an aminopeptidase N (APN)-independent, FcγR-dependent manner [9]. Likewise, it is not unexpected that SARS-CoV-1 and MERS-CoV can induce ADE in FcγR-expressing Raji B cells or HEK293T cells in vitro [11]. It has therefore been postulated that SARS-CoV-2 may also induce ADE in some leukocytes.
Severe patients with COVID-19 normally generate high levels of SARS-CoV-2 antibodies, the titers of which are positively related to the severity of disease while showing less neutralization potency [13]. This phenomenon suggests that ADE induced by non-neutralizing antibodies could play an important role in the pathogenesis of SARS-CoV-2 in patients. Moreover, it has been argued that immune cells, which normally express low or no ACE2 receptor, could also be infected. Viral RNA-positive or antigen-positive immune cells have been reported in a few single-cell analyses of patient BALF and in postmortem analyses of COVID-19 patients. Whether this positivity is caused by ADE or by a direct, ACE2-independent infection is still unknown.
In this study, we tested SARS-CoV-2-induced ADE in vitro using convalescent COVID-19 patient serum samples and a panel of FcγR-expressing leukocytes. Our results contribute to the understanding of the pathogenesis of SARS-CoV-2 in the context of viral treatment and control.
2. Materials and Methods
2.1. Primary Immune Cells Preparation and SARS-CoV-2 Infection
This study obtained informed consent from all subjects. The blood samples from healthy donors were treated with Ficoll-Paque Plus (17144002; Cytiva, Danaher Corporation, WDC, USA). Briefly, 3 mL of Ficoll was added to a centrifuge tube, then 4 mL of whole blood was gently added before being centrifuged at 400× g for 30 min at 20 °C. The PBMC layer was carefully taken with a pipette. Magnetic beads conjugated with different cell markers were used to sort immune cells: CD19 microbeads (130-050-301; Miltenyi Biotec, Bergisch Gladbach, Germany) for B cells, CD14 microbeads (130-050-201; Miltenyi Biotec, Bergisch Gladbach, Germany) for monocytes, and CD11b microbeads (130-049-601; Miltenyi Biotec, Bergisch Gladbach, Germany) for macrophages. Primary immune cells after sorting were cultured in Roswell Park Memorial Institute 1640 culture medium (RPMI 1640, C22400500BT; Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% fetal bovine serum (FBS, 10099141; Life Technologies, Thermo Fisher Scientific, Waltham, MA, USA).
For infection, primary B cells, monocytes, and macrophages were seeded into 24-well plates or 48-well plates at a density of 1 × 106 cells/mL. Cells were infected with SARS-CoV-2 at a moi of 0.1. The 0 h samples were harvested immediately after the cells were mixed with virus. Cells were washed three times with RPMI 1640 and harvested for qRT-PCR or flow cytometry detection. The remaining cells were cultured at 37 °C with 5% CO2 for 24 h or 48 h before being collected for further analysis.
For detection of antibody-dependent enhancement (ADE), virus (moi = 0.1) was incubated with an equal volume of convalescent sera from COVID-19 patients (for the no-sera group, virus was incubated with an equal volume of RPMI 1640) at 37 °C for 30 min. The mixture was added to primary cells, and samples were harvested at 0 h, 24 h, or 48 h post infection.
2.2. Cell Lines and Virus Culture
Primary B cells, monocytes, macrophages, and Raji cells were cultured in RPMI-1640 (C22400500BT; Thermo Fisher Scientific, Waltham, MA, USA) + 10% FBS (10099141; Life Technologies, Thermo Fisher Scientific, Waltham, MA, USA), and Vero E6 and Caco-2 cells were cultured in DMEM + 10% FBS (Gibco, C11995500BT), all at 37 °C in a humidified atmosphere of 5% CO2. All cell lines were tested free of mycoplasma contamination, subjected to species identification, and authenticated by microscopic morphologic evaluation. None of the cell lines was on the list of commonly misidentified cell lines (by ICLAC). The SARS-CoV-2 isolate WIV04 (GISAID accession number EPI_ISL_402124) was used in this study. WIV04 was isolated in Huh7 cells from the original sample and was passaged in Caco-2 cells. Viral titer (TCID50/mL) was determined in Vero E6 cells.
2.3. Proteins and Antibodies for SARS-CoV-2
NP and the predicted RBD of SARS-CoV-2 strain WIV04 were inserted into the pCAGGS vector with an N-terminal S-tag. The constructed plasmids were transiently transfected into HEK293T-17 cells. The collected supernatant was purified using S-tag resin, and the purity and yield were tested using an anti-S-tag mAb (generated in-house). Rabbits were immunized with purified NP protein or RBD protein three times at a dose of 700 ng each, at two-week intervals. Rabbit serum was collected 10 days after the final injection. Antibody titer was determined by ELISA using purified NP protein or RBD protein as the detection antigen.
2.4. B Cell Line Infection
Raji B cells were infected with SARS-CoV-2 at a moi of 0.01, 0.1, or 0.2, depending on the purpose of the experiment. Infected cells were harvested at 0, 24, or 48 h after three washes with RPMI 1640. Cellular viral RNA or sgRNA expression was determined by qPCR or RNA-Seq. GAPDH was used as the internal control in qPCR. For detection of antibody-dependent enhancement (ADE), virus (moi = 0.01, 0.1, or 0.2) was incubated with an equal volume of convalescent sera from COVID-19 patients (for the no-sera group, virus was incubated with an equal volume of RPMI 1640) at 37 °C for 30 min. The mixture was added to the cells, and samples were harvested at 0 h, 24 h, or 48 h post infection.
2.5. Flow Cytometry Analysis of Human Peripheral Blood Samples
For FcγR detection, primary B cells, monocytes, and macrophages were incubated with fluorochrome-labeled antibodies specific for humans before fixation: FITC mouse anti-human CD32 (552883; BD Pharmingen, San Diego, CA, USA), APC mouse anti-human CD64 (561189; BD Pharmingen, San Diego, CA, USA), and CD16 rabbit pAb (16559-1-AP; Proteintech, Chicago, IL, USA). FITC-anti-Rabbit IgG (H+L) (SA00003-2; Proteintech, Chicago, IL, USA) was used as the secondary antibody for CD16.
For SARS-CoV-2 infected primary immune cells, surface staining was conducted before fixation with AF700-anti-CD45 (368514; BioLegend, San Diego, CA, USA), BV650-anti-CD11b (101239; BioLegend, San Diego, CA, USA), PE-anti-CD68 (333808; BioLegend, San Diego, CA, USA), and PerCP-Cy5.5-anti-CD14 (367110; BioLegend, San Diego, CA, USA). Antibody-stained cells were fixed overnight with 4% PFA at 4 °C and taken out of the BSL-3 lab for downstream analysis. Cells were further stained with in-house-made SARS-CoV NP pAb (1:500) at 4 °C for 30 min after permeabilization. Then, cells were stained with FITC-anti-Rabbit IgG (H + L) (SA00003-2; Proteintech, Chicago, IL, USA) at room temperature for 30 min.
2.6. RNA Extraction and qRT-PCR
Whenever commercial kits were used, the manufacturer's instructions were followed without modification. Viral RNA was extracted from 140 μL of sample with the QIAamp® Viral RNA Mini Kit (52906; QIAGEN, Hilden, Germany). RNA was eluted in 50 μL of elution buffer and used as the template for qRT-PCR. The qPCR detection method based on the 2019-nCoV S gene can be found in the previous study (Zhou et al., 2020). Two microliters of RNA were used as template for the amplification of selected genes by real-time quantitative PCR using the HiScript® II One Step qRT-PCR SYBR® Green Kit (Q221-01; Vazyme Biotech Co., Ltd, Nanjing, Jiangsu, China). The 10 μL qPCR reaction mix contained 1.9 μL of nuclease-free water, 5 μL of 2× One Step SYBR Green Mix, 0.5 μL of One Step SYBR Green Enzyme Mix, 0.2 μL of 50× ROX Reference Dye 1, 0.2 μL of each primer (10 μM), and 2 μL of template RNA. Amplification was performed as follows: 50 °C for 3 min, 95 °C for 30 s, followed by 40 cycles of 95 °C for 10 s and 60 °C for 30 s, and a default melting curve step, in a StepOnePlus Real-Time PCR machine (ABI) [14].
2.7. Transcriptome Analysis
Using the software HISAT2 v2.1.0, raw reads were mapped to a combined genome of human GRCh38.913 and SARS-CoV-2 (MN996528.1). After format conversion and sorting with SAMtools v1.10-24, the BAM file was passed to StringTie v2.1.0 for transcriptome assembly and quantitation. The transcriptome read-count table generated by prepDE.py, a tool included with StringTie, was used for gene differential expression analysis in R v4.1.0 with the package DESeq2 v1.32.0. Genes with log2 fold change > 2 and p-value < 0.05 relative to the mock group were retained. Furthermore, the genes whose expression increased with the degree of ADE were passed to the online tool Metascape for enrichment analysis.
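For orientation, this pipeline corresponds to commands of roughly the following form; the sample names, thread count, and index/annotation file names shown here are placeholders rather than the exact settings used in this study:
hisat2 -p 8 -x combined_index -1 sample_R1.fastq -2 sample_R2.fastq -S sample.sam
samtools view -bS sample.sam | samtools sort -o sample.sorted.bam -
stringtie sample.sorted.bam -e -G combined_annotation.gtf -o sample.gtf
prepDE.py -i sample_list.txt
The resulting read-count matrix is then imported into R for the DESeq2 analysis.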
2.8. Micro-Neutralization Assay
For detection of the neutralization antibody titer of convalescent sera, SARS-CoV-2 was diluted to 4000 TCID50/mL and incubated with an equal volume of diluted patient sera (diluted from 1:10 to 1:1280 in two-fold serial dilutions) at 37 °C for 30 min. The mixture was then added to Vero E6 cells seeded in 96-well plates and incubated at 37 °C for 1 h. The supernatant was removed and the cells were washed with PBS. DMEM medium containing 2% FBS was added to the cells. Cell plates were fixed at 24 h post infection. Cells were stained with in-house-made SARS-CoV NP pAb (1:500) and Cy3-anti-Rabbit IgG (H+L) (SA00009-2; Proteintech) to detect viral NP.
2.9. Serological Test
An in-house anti-SARS-CoV-2 IgG ELISA kit was developed using the recombinant RBD of the SARS-CoV-2 isolate WIV04 (MN996528.1). The RBD proteins were expressed in HEK293-17 cell lines. For IgG analysis, MaxiSorp Nunc-Immuno 96-well ELISA plates were coated (100 ng per well) overnight at room temperature with RBD protein. Plasma from different donors was used at a dilution of 1:20 for 1 h at 37 °C. An HRP-conjugated anti-human IgG monoclonal antibody (Kyab Biotech Co. Ltd., Wuhan, China) was used at a dilution of 1:40,000. The OD value (450–630 nm) was calculated.
2.10. Statistical Analysis
Data analyses were performed using GraphPad Prism 7.0 software. Data are shown as mean ± SD. Data were analyzed with the Shapiro–Wilk normality test and confirmed to follow a Gaussian distribution. Statistical analysis was performed using the two-tailed Student's t-test with 95% confidence. p values less than 0.05 were considered statistically significant.
Here, we tested the ADE effect of convalescent serum samples using immune cells in vitro, aiming for a better understanding of possible antibody-induced SARS-CoV-2 pathology in vivo. Our data indicate that ADE can occur in FcγR-expressing cells such as B cells, monocytes, and macrophages. Although the ADE effect was not correlated with the antibody dose of a particular patient, it was found to be highest within a narrow range of pre-existing serum titers known to induce ADE. Finally, the ADE effect includes not only an enhancement of viral replication, but also an excessive immune response in these immune cells.
The role of SARS-CoV-2 antibodies in the severity of disease is controversial. Based on the observation that severe patients tend to have higher antibody titers, there have been two hypotheses: one possibility is that the extensive viral replication and hyper-inflammation in severe patients induce overproduction of antibodies, while the other is that high levels of antibodies worsen disease severity via ADE, the hyper-inflammation then being a result of ADE [3]. In this study, we provided in vitro evidence of ADE in multiple peripheral blood immune cells, supporting the second hypothesis: non-neutralizing antibodies could worsen disease severity by enhancing viral infection or excessive immune activation in certain immune cells, suggesting that they contribute to SARS-CoV-2-induced immunopathology in severe patients.
The ADE effect has been well characterized in infection with the cat coronavirus FIPV. FIPV infects macrophages and monocytes via the APN receptor, and this infection, as well as the infection-induced host responses, is greatly enhanced by pre-existing antibodies in cats [6]. Likewise, a series of in vitro experiments has shown that SARS-CoV-1 and MERS-CoV induce ADE, although in vivo evidence is lacking [11]. It has therefore been suggested that strong antibody titers are more closely linked to severe COVID-19, which could be particularly important in the lower respiratory tract, where it contributes to lung pathology [3]. Our data support this by showing enhanced viral infection and excessive immune responses in B cells, macrophages, and monocytes, the three cell types that are heavily recruited to the lung under severe conditions, which would thus increase disease severity in the lung. Moreover, a medium level of antibody can more easily induce ADE, a phenomenon also observed in dengue disease [4], suggesting that patients with antibody titers at or near the peak enhancement titer may be at greater risk of severe disease than if they had developed only a small amount of antibody. Finally, the ADE effect may further dampen the immune defense by causing dysfunction of B cells or macrophages, which eventually leads to impaired adaptive immunity.
Our study also has some limitations. First, this in vitro study provides insight into antibody-enhanced pathogenesis of COVID-19; however, we were unable to provide in vivo data on the pathogenesis of more severe clinical outcomes in patients or in experimental animals, because the effector functions of antibodies are altered by species-specific interactions between antibodies and immune cells [16]. Second, our study may not be applicable to predicting potential ADE effects upon vaccination. It has been revealed that natural SARS-CoV-2 infection induces a set of "bad antibodies" that do not appear after vaccination, for example, autoantibodies or SARS-CoV-2-specific antibodies with lower levels of fucosylation [18]. The IgG types may also vary between infection and vaccination.
Collectively, we have shown in vitro evidence of convalescent-serum-dependent enhancement of SARS-CoV-2 infection and of virus-induced excessive immune responses in immune cells. Our study provides insight into the association between high viral antibody titers and severe lung pathology in severe patients with COVID-19.
Kilnave Chapel & Cross
The historic Kilnave Chapel and Cross are a ‘secret’ feature of Loch Gruinart; they’re almost invisible from the road and therefore often overlooked. Most visitors to Loch Gruinart end up at the RSPB visitor centre, the bird hide or on the east shore where there are beautiful walks up to Killinallan Point and beyond to Rhuvaal.
The west side of Loch Gruinart isn't very different. Here, between the RSPB visitor centre and Ardnave Farm, you'll find Kilnave Chapel and Cross, a few hundred metres from the road, close to the shore. 'Kilnave' comes from the Gaelic 'Naomh', which means saint or holy. The chapel at Kilnave was built around the late 1300s or early 1400s and belonged to the parish of Kilchoman.
The ruined Kilnave Chapel measures nearly nine metres by just over four metres, with walls well over half a metre thick. The door at the west end is round-headed and very low. Its arch is constructed of thin slabs of whinstone and is furnished with a long bolt-hole common to Highland churches: a strong wood beam was pulled completely across the door on the inside, while a sufficient length of the beam remained in the hole to keep it horizontal.
The church is lit by a small round-headed window at the east end and by a smaller one in the south wall near the altar. You’ll discover traces of the foundations of the altar and one sculptured gravestone in the churchyard.
Another important feature is Kilnave Cross, a beautiful standing cross at the west end of the church.
This cross is carved on one side only and very little remains; indeed, you'll need very good light to decipher what is left. However, the cross resembles one at the 11th century Kiells Chapel near the village of Tayvallich, in Knapdale, west Highlands of Scotland.
Battle of Traigh Ghruineard
It’s hard to imagine when you stroll though the grounds of the silent, tranquil chapel that a horrible tragedy took place here.
The battle of Traigh Ghruineard (Gruinart) was fought in 1598. It was the last big Clan battle on the Isle of Islay, between Sir Lachlan Mor MacLean, the 14th Chief of Duart and his nephew Sir James MacDonald of Islay.
They fought over possession of the Rinns of Islay which Lachlan Mor claimed was the dowry given to his wife in 1566 by her brother Angus MacDonald, chief of Clan Donald South, the most powerful branch of Clann Dhomhnuill.
When the battle was nearly over, 30 MacLeans sought sanctuary in Kilnave Chapel. Rushing inside they bolted the door and waited fearfully, hoping that the MacDonalds would respect holy ground. Sadly, the men were half mad with grief and anger at the thought that their chief had been killed. Lusting for vengeance they set fire to the roof. All inside died except one man, a Mac Mhuirich (Currie) who climbed through a hole in the roof when the burning thatch collapsed.
In 2012 the Islay Gaelic Choir set three poems about the battle to music and gave the first performance at Loch Gruinart.
People read online for the same reasons that they read print documents: to obtain information or knowledge, to complete forms and applications, or to be entertained. The key difference, however, between habits of print readers and online readers is that online readers are more likely than print readers to be researching, not reading. Here are some recommendations for producing successful websites.
Consider these study results:
- Four out of five people scan online content rather than read word by word.
- On a typical Web page, readers read only about one-fifth of the content.
- The more words on a Web page, the lower the percentage of words readers are likely to read.
- Readers tend to read closer to one-half of online content when a Web page’s text is limited to about one hundred words.
Most of these figures date back to the late 1990s, when fewer people went online, Web design and architecture was less sophisticated, and much of the content was functional (now, many websites, like this one, are equivalent to periodicals or books), but the findings are still essentially valid.
For that reason, clarity and conciseness — advisable in any form of communication — is even more important in online content. In many circumstances, readers will be drawn to easily accessed information. Rather than presenting paragraph after paragraph of content in blocks of text, as is routine in print publication, give readers multiple reference points:
- Use headlines that are informative first, and clever second, if at all.
- Break content up into small blocks of text separated by subheadings.
- Organize brief items into numbered or bullet lists.
- Provide information in captions for photographs and graphics.
- Place the most important information at the top of a page or at the beginning of a piece of content.
The primary goal for the owner of a website, whether it’s a commercial site or one whose primary purpose is to provide information or impart knowledge, should be to increase the number of readers and retain those readers. To that end, websites should be designed and organized to help visitors
- locate what they need or want
- understand what they locate
- apply what they locate to satisfy their needs or wants
How do you know what readers want from your website? Try these strategies:
- Analyze reader communication — comments, emails, and other contact.
- Engage with readers by asking them directly by email or through the site itself.
- Note, in your site analytics, the most popular pages and the top word searches.
16 thoughts on “Writing for the Web”
I think that all of the above about writing merely condones people who are simple-minded and who have short attention spans. Don’t do that. Leave them behind in the dust – and let them work at McDonald’s and the Wal-Mart.
I write on the Web just like I write anywhere else, and the reader should be challenged to learn something. No matter where you are writing, emulate Carl Sagan, Isaac Asimov, Watson & Crick, Arthur C. Clarke, Sir Winston Churchill, Woodward & Bernstein, Martin Gardner, Michael Shermer, and Bertrand Russell.
Write like you are trying to win the Pulitzer Prize and not the Big Mac Prize.
Dale A. Wood
Excellent tips on writing for the web. I’m bookmarking this so I can refer to it when I begin my next project.
I agree with commenter Dale A. Wood that the writing of Carl Sagan, Isaac Asimov, et al., is far more interesting and entertaining than most web writing. I have been snared many times by a lengthy, superbly written web page that drew me in and kept me there.
Still, most of my web browsing involves, just as you say, research of one kind or another. I want to find what I need, find it fast, and get on with the rest of my life.
The one thing I would add to this: Verifiable sources. So often people make claims on the web without providing corroborating sources. Say you’re writing about reusable coffee mugs. Where did you get that statistic on the number of disposable coffee cups heading to our landfills every day? That number is only as good as the source.
Thanks for a sweet piece, informative and exemplary.
People using the web to gather information or complete tasks in short order are not simple-minded, they’re busy. And they don’t need to be “challenged” by a self-important snob who can’t be bothered to consider his (supposed) audience’s needs and preferences.
It depends on what the primary goal of the owner of a website is, of course. Moreover, I don’t understand why simple-minded people and all of those that work at McDonald’s and the Wal-Mart must be left behind in the dust.
Mark ~ I’m with Mr. Wood on this point. Unless you are writing instructions on how to assemble a lawn sprinkler, write with gusto. Write with soul. “Write like you are trying to win the Pulitzer Prize . . .”
And as a side note, quoting statistics from the 1990s doesn’t add steel to your argument.
“Most of these figures date back to the late 1990s, when fewer people went online, Web design and architecture was less sophisticated, and much of the content was functional (now, many websites, like this one, are equivalent to periodicals or books), but the findings are still essentially valid.”
At that point I switched off. You basically list all the reasons why that information may not be reliable now, before declaring it all “essentially valid” with no justification.
I think a lot of this does still stand. People are short for time, it’s not that they are simple-minded. There’s too much out there to read, so you’d better know your audience and the type of posts they like and don’t make them wade through an ocean of words to find the point of your article or blogpost.
@D.A.W and John – that’s fine if you’re going to print it off, or if you’re reading it on a tablet. But looking at a standard upright computer monitor is very different from looking at paper, and you need to make allowances for this – form follows the medium, so to speak.
I think it’s as much about this physical aspect as people having short attention spans: let’s face it, computer screens are not the greatest of reading mediums.
One thing I’ve noticed, even in informal comments on websites such as this, is that they are much easier to read when the paragraphs are kept really short and spaces are left between them.
I probably wouldn’t break for a new paragraph here in conventional writing, but I do here (and similarly on my own blog), for ease of reading.
Warsaw Will ~ thoughtful point that I hadn’t considered.
I have recently been wrestling with a corollary of this in the sense of whether to take the plunge and do my reading on a tablet. Maybe it’s too many decades of holding a physical book but I’m having a hard time envisioning reading a novel from a computer screen.
This article is a good example how to break your rules. The article has 411 words. Has no subheadings. Has no photographs and graphics. Has no summary (most important information at the top)
Writing for the web this way has similarities to screenwriting. The idea being to engage non-readers with your content and make them readers.
While some complain about writers using lists and bullet points to show the 7 Best Way or 11 Perfect Paintings, the idea is still the same: Help readers get over their inherent resistance to read online.
Think of the phone text. When that started, did you think a text message was as important as a call? That if it was important, they would call, not text.
However you get your content up, use it to reach outside the base audience. If it takes a different format, use it. Then get back to winning that Pulitzer. Mix it up. Every fastball pitcher in baseball also has a nice change up and slider.
People really do have a short attention span, as said. Not many actually set out to find and read quality things on the internet. The just want to be updated about their favorite celebrities and all that.
As for the medium, it is quite difficult to keep on reading for more than five minutes on a computer monitor or tablet, made all the more challenging and exasperating by long paragraphs.
And we can all agree on one thing: the quality of english used by many is quite poor because of their ignorance; condensing phrases and clause to acronyms. Now, I use them myself but after a certain point of time they begin to take a toll on your language.
And one more thing- most blogs are just pure scum; completely oblivious of grammar though the articles of the blog may be actually interesting.
Put your heart and soul into writing- however minute and insignificant it may be.
The way people read online articles has changed, Dramatically! We as writers, journalists, whatever position you may hold have to keep up to date with technology. The main reason why people read online is because it is more “convenient” and “practical”. We know the “man” of the 21st century is short on time, you have to tell your story in a way that you can manage to capture your reader’s attention and still be as informative as you can. It is because of this that articles have become just a tad shorter, this does not mean the value will be diminished in the process. The point I’m trying to make is that time is now a key factor in attention spans. We have to be very aware of that. Cheers!
I’ve been reading similar advice to that above for at least fifteen years, and I have never been comfortable with it. If you look at what is actually written for the web — Huffington Post, the Daily Beast, Salon and this site itself! — they follow none of these rules. Nor should they. These sites consist of articles and essays that follow the traditional norms, and they seem to work just fine.
Several visitors have commented about the fact that many people read rich content on the Internet in the same way that they read similar print materials. I agree — I know I do. Others mentioned that this post contradicts its message.
I’m sorry that I didn’t distinguish between commercial content (websites offering products and services, for example) and practical, how-to-type sites, on the one hand, which this advice largely applies to, and more journalistic and literary content, on the other hand, which can be more substantial without discouraging thorough engagement by readers.
But webmasters and developers of any online content need to consider that for every site visitor comfortable with dense content, there may be another out there who wishes to be treated more gently and will go elsewhere if not accommodated.
And as for the ergonomic arguments against reading dense content on a screen, I have two words: ebook reader (that was one), and tablets. I prefer to read novels on a Kindle than pulp and paper. Reading content such as this site would need a tablet, which I don't have but should. My bottom line is that writing style should not be dictated by the ergonomics of the desktop, which is a temporary and solvable problem.
Installing GDAL for Windows
GDAL is a useful command-line tool for processing spatial data. If you haven't heard of the tool before, some examples of what it can do are:
- Create contours from a DEM
- Create a TMS tile structure
- Rasterize vector into a raster file
- Build a quick mosaic from a set of images
Each of the above functions is a Python script which can be run from the command line once GDAL is successfully installed.
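To give a sense of what running these looks like, here are illustrative invocations of the corresponding utilities from the Windows command prompt. The file names are placeholders and the option values (contour interval, zoom levels, pixel size) are arbitrary examples, so adjust them to your own data:
gdal_contour -a elev -i 10.0 dem.tif contours.shp
python gdal2tiles.py -z 0-12 input.tif tiles
gdal_rasterize -a value -tr 10 10 parcels.shp parcels.tif
python gdal_merge.py -o mosaic.tif img1.tif img2.tif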
This tutorial covers how to install GDAL on a Windows PC. If you are interested in getting GDAL running on a Mac, please go here (https://sandbox.idre.ucla.edu/sandbox/general/how-to-install-and-run-gdal)
Step 1: Install Python
Python is necessary for GDAL, and if you already have an installation of Python then skip to step 4 below.
1. Feel free to download the latest 2.7x version of python (rather than the 3.x python version).
The python version used for this tutorial can be downloaded here:
2. Install python with the default options and directories.
3. After installation, go to Python –> IDLE (Python GUI) to find out what version of Python you are using:
4. Make a note of the number that shows the version of your Python in the top right, as highlighted below:
Note: MSC v.1500 may differ if you are using a different Python installation; if it does, please make a note of that number. Also note that if you installed the 64-bit version of Python, you should remove the (x86) from the paths for the rest of the tutorial.
Step 2: Install GDAL
1. Head over to Tamas Szekeres’ Windows binaries and download the appropriate GDAL Binary.
For this tutorial, we are using the MSC v.1500 on a 32-bit system, the picture below illustrates how to match the version with your own python version. The blue highlight is where you should look for either 64-bit or 32-bit systems, and the green shows the release-1500 number which should match the number from IDLE in step 4 above.
2. Clicking the link will take you to the list of binaries (installers) to download.
3. Locate the “core” installer, which has most of the components for GDAL.
4. After downloading your version, install GDAL with standard settings.
5. Next, return to the list of GDAL binaries and install the python bindings for your version of Python, this can either be 2.7, 3.1, or 3.2.
Recall that we had installed Python 2.7 earlier, so we have to locate this version, as seen below:
6. Download the Python bindings and install them.
Step 3: Adding Path Variables:
We need to tell the Windows system where the GDAL installation is located, so we need to add some system variables.
1. Right click on “Computer” on the desktop and go to “Properties”:
2. Click on Advanced System Properties
3. Select Environment Variables.
4. Under the System variables pane, find the ‘Path’ variable, then click on Edit.
5. Go to the end of the box and copy and paste the following:
;C:\Program Files (x86)\GDAL
Note: For 64-bit GDAL installations you would simply remove the (x86) after Program Files.
6. In the same System variables pane, click on “New” and then add the following in the dialogue box:
Variable name: GDAL_DATA
Variable value: C:\Program Files (x86)\GDAL\gdal-data
7. Click “OK”
8. Add one more new variable by clicking “New…”
10. Add the following in the dialogue box:
Variable name: GDAL_DRIVER_PATH
Variable value: C:\Program Files (x86)\GDAL\gdalplugins
11. Click “OK”
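If you prefer the command line to the dialog boxes, the same three variables can also be set with the setx utility from a command prompt (the paths below assume the default 32-bit install location used above; note that setx writes user-level variables unless you run an elevated prompt and add the /M switch, and the new values only apply to command windows opened afterwards):
setx GDAL_DATA "C:\Program Files (x86)\GDAL\gdal-data"
setx GDAL_DRIVER_PATH "C:\Program Files (x86)\GDAL\gdalplugins"
setx PATH "%PATH%;C:\Program Files (x86)\GDAL"
After running these, open a new command window and check the result with echo %PATH%.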
Step 4: Testing the GDAL install
1. Open the Windows command line by going to the Start Menu -> Run, typing cmd, and pressing Enter.
2. Type in: gdalinfo --version
3. Press Enter.
4. If you get the following result, then congratulations your GDAL installation worked smoothly!
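As an optional extra check that the Python bindings from Step 2 are also working, you can run a one-line import test from the same command prompt (this assumes python.exe is on your PATH from Step 1):
python -c "from osgeo import gdal; print(gdal.__version__)"
If this prints a version number instead of an ImportError, the bindings are wired up correctly.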
Thanks for this great guide! Only one that actually helped with clear instructions 🙂
Thank you very much for the positive feedback Nick! Feel free to share the guide with anyone who may find use for it!
Just a quick question while running the version command I get the error
ogr_MSSQLSpatial.dll , the specified module couldnt be found.
But it displays version and release date.
I’m working with Spyder rn , and when I try to import gdal it says no module found.
Any help would be appreciated.
I am also having the similar kind of problem.
Can you help me to use gdal with Matlab anyway.
same error. even gdal have ogr_MSSQLSpatial.dll
Have you installed GDAL as ‘Typical’ or as ‘Complete’ in the core msi installer?
When I installed it as ‘Complete’ I had the same problem with ogr_MSSQLSpatial.dll, but when I uninstalled GDAL and reinstalled it as ‘Typical’, the problem was solved.
If you are not looking for SQL Server stuff, that will hopefully solve your problem guys.
Delete ogr_MSSQLSpatial.dll from gdalplugins
[crayon-5a4f16390978c204694594 lang=”default” decode=”true” inline=”1″ ]gdalinfo –version
[crayon-5a4f16390978c204694594 it is not recognized as a batch file this is the error i am getting
Oh sorry, you should be typing in “gdalinfo –version”, the other stuff is a mistake from the wordpress crayon plugin!
[crayon-5afbe5446446b421143482 lang=”default” decode=”true” inline=”1″ ]gdalinfo –version…
did u type all this in step-4??????
That’s just a formatting error. All you need to type is: gdalinfo –version
only type gdalinfo–version
giving an error….”not recognised as an internal or external command”
It’s good tutorial. Gdal is working but i can’t run python stuffs like gdal_merge.py or “from osgeo import gdal”, it gives an error like: DLL load failed. Could you guys please guide me a bit more on this. Thanking in advance.
Thank you for the feedback Alam! To help with your situation further, what environment are you trying to run GDAL under? Is it the Python IDLE or the Windows Command line?
It looks like the link is no longer active to download the .msi files. Is there another website?
Having the same problem as Alam. Running from Idle.
Anyways, great post!
I have a problem when I am executing the command python manage.py runserver.
The error message I get is in __init__: self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
Hi, I follow all the instructions but I get ‘gdalinfo’ is not recognized as an internal or external command, operable program or batch file. Could you please help me to solve this? thank you. Nice guide!
please share the guide
What guide are you referring to?
Thanks a lot for this guide. I mean it.
Works like a charm. Thanks for posting!
One caveat: on my Windows 7 32-bit virtual machine, the path is still C:\Program Files and not C:\Program Files (x86). Might be so for more users.
Oh, this must be a change with Windows 10? I’ll edit the post to reflect that if this is the case!
You rock! Thank you my brother in code!
If only all installation processes were documented so precisely… Thank you so much!
Haha, thank you for the compliment!
GDAL is working but I cannot import gdal in Python; currently using the Python IDLE.
If you are a user of ArcGIS, then the cause is probably that you are using the Python IDLE bundled with ArcGIS/arcpy, not the other Python installation. Otherwise, you should check whether your Python path includes GDAL or not; see the sketch below.
If it doesn't, then you should add GDAL to your Python path.
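A minimal sketch of that check (the site-packages path below is only an example and should be replaced with wherever the osgeo package actually ended up on your machine):

import sys
try:
    from osgeo import gdal
    print("GDAL bindings found: " + gdal.VersionInfo("RELEASE_NAME"))
except ImportError:
    # Point the interpreter at the folder that contains the "osgeo" package, then retry.
    sys.path.append(r"C:\Python27\Lib\site-packages")  # example path only, adjust to your install
    from osgeo import gdal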
hope that helps!
Please how can I add gdal to my python path?
I would like to know this too
I tried it in Windows 10. The basic binding command "from osgeo import gdal" did not work from IDLE (Python GUI) at the beginning. Then I tried from Python (command line) and it worked fine. Later it also worked from IDLE. Thanks
Thank you very much!!! You’ve saved me!!!!
You are very welcome! Glad you found this useful! =)
I see a couple people have referenced the same issue I’m having and I wondering if you have the solution!
Gdal works from command line (and THANK YOU for that!) however I’m unable to use the GDAL module in any of my python IDEs (sublime, pyscripter).
I am in fact using ArcGIS; C:\Python27 is my installation location.
Oh yes, if you are using ArcGIS's Python installation, then you will not be able to use GDAL in those IDEs because ArcGIS's Python installation was installed first.
If you want to enable support for GDAL in those IDEs you have to include the path to the GDAL Python installation in your Build Environments. In my case, C:\Python27\gdal needs to be included in the Python Paths under PyScripter's Tools panel: http://imgur.com/g0DntAh
Hope this helps!
Thanks a lot, this is such a great help.
Great guide, thanks. Probably the cleanest GDAL install I’ve ever had.
Dude … you are a hero … get yourself a cape and a mask.
I was looking for a site with gdal installation instructions for a new convert to the gdal religion.
very nice tutorial ! All works fine now !! BIG THANKS
Very useful tutorial. Thanks for sharing
still I am getting ‘gdalinfo’ is not recognised as an internal or external command, operable program or batch file
Thx a lot!
Amazingly straight forward and clear instructions. Thank you!
When I try to install 64b version, I can’t use gdal_calc.py. It reports following error:
Traceback (most recent call last):
File "C:\Program Files\GDAL\gdal_calc.py", line 50, in <module>
from gdalnumeric import *
ImportError: No module named gdalnumeric
No matching distribution for gdalnumeric is found by pip. I don't have numpy installed by default; however, if I install it, the same error is reported. I don't have any problems with the 32-bit version. Of course, my environment variables are set to C:\Program Files\…, not (x86) 😉
I receive the error when trying to use gdal2tiles.py from cmd:
line 44, ImportError: No module named osgeo
Thank you for the help, however I am having a serious amount of trouble with GDAL. I followed the steps that you listed and unfortunately the command prompt isn’t recognizing gdalinfo as an internal or external command.
I’ve tried uninstalling and reinstalling using this process several times to no avail.
Perhaps it’s the environments variables that I have set up?
GDAL_DATA > C:\Program Files(x86)\gdal-data
GDAL_DRIVER_PATH > C:\Program Files(x86)gdalplugins
To be honest, I am a newbie and this environment variable aspect is tripping me up.
A couple of questions before I can troubleshoot:
1) What version of python are you using?
2) What version of Windows are you using (and is it 32 or 64 bit)?
3) What does your “PATH” variable look like?
Looking forward to hearing back from you!
For me that is:
1) Python 2.7.10
2) Windows 10 Pro 64 bit
3) Under the Path variable I added C:\Program Files (x86)\GDAL which is visible in a new line
I use Windows10 64b
Python 2.7.12 (64b)
GDAL 1.11.4 (installed via gdal-111-1500-x64-core.msi)
Python bindings installed via GDAL-1.11.4.win-amd64-py2.7.msi
PATH entries connected with Python and GDAL:
gdal2tiles.py and others, e.g. gdal_translate, seem to work as expected, but gdal_calc.py is missing that gdalnumeric. When I install the 32-bit version, it works fine (even though I have to install the missing numpy Python module).
I am having the same problem as Ido. Any suggestions anyone? Thanks!
Similar to a couple of the other posts, my issue is that I cannot import GDAL modules without first importing arcpy. But only in the Windows command line and IDLE – if I try to import GDAL modules within the ArcMap/ArcCatalog Python window, I am successful.
In IDLE and Windows command line, ‘from osgeo import gdal’ throws an error:
ImportError: DLL load failed: The operating system cannot run %1.
However, if I ‘import arcpy’ first, then I can import the GDAL modules.
And, as I mentioned, the Python window from within Arc does NOT throw the error – I can use ‘from osgeo import gdal’ right off the bat, without importing arcpy.
One of the best installation tutorials ever.Had no problem following it.You saved a lot of my time. Thankyou
For anyone struggling with the "not recognized as an internal or external command" error when using cmd: check where your file is and add a \ each time the folder changes, as my computer wouldn't understand the path without them.
For example, my data was in C:\Program Files (x86)\GDAL\gdal-data
In this tutorial the \ weren't shown, which meant my laptop couldn't find it.
all the back slashes are missing in the instructions above.
Maybe it is a problem with my browser.
After entering the extra \ everything worked.
thank you for catching that! i don’t know what happened to them, but i just re-added them now!
Thanks a lot for your clear and efficient tutorial!
The BEST tutorial I have seen. Thank you, Albert!
Plain awesome!! Thanks for putting this together!
I agree this is a very good and straight forward guide, but my command window closes really fast. I tried it with putting in the slashes and without
C:Program Files (x86)GDALgdal-data
C:\Program Files (x86)\GDAL\gdal-data
Also, I uninstalled Python and GDAL, restarted my machine, and re-installed everything but get the same problem. I am running Windows 10. What am I doing wrong?
If the command line is closing really fast, then you should try going to the command prompt (Start Menu -> Command Prompt) and typing in "gdalinfo --version" to test your installation.
You need to add “C:\Program Files (x86)\GDAL\gdal-data” into your system variables, not the command prompt though. In order to do that you should follow Step 3: Adding Path Variables.
Hope that helps!
I am facing an error when I try to use the gdal_translate command; it cannot find ogr_MSSQLSpatial.dll. Is there any way to solve this?
Just delete ogr_MSSQLSpatial.dll if you are not using MS SQL Server.
There is a missing backslash at the GDAL_DATA variable value:
Variable value: C:\Program Files (x86)\GDALgdal-data
Variable value: C:\Program Files (x86)\GDAL\gdal-data
The screenshot shows the correct value.
Any thoughts on how to run gdal2tiles (with or without parallel) without command prompt? My workstation isn’t allowed to use it due to network restrictions. We do however have permissions to use PowerShell. Getting GDAL to run buildvrt and other basic functions isn’t an issue but to make the TMS we would like to use gdal2tiles (preferably gdal2tiles_parallel). I know this is quite the reach but any thoughts/help is greatly appreciated.
Brother, you are great!!! A awesome instruction.. !! You just help me a lot!!
Not sure if this is the right place to post, but worth a try.
I have successfully installed GDAL (thanks for the great guide!) on a new machine. The script for which I need GDAL, however, does not work. From having the same problem in the past on my other computer, I found out that the reason the script did not work was that GDAL was not working well with numpy. The solution was to install an older version of GDAL compatible with numpy 1.7.
Here is my problem, the archived distribution no longer include the .msi installer. Any suggestions as to how to solve this? (i.e. is there a way to install the compiled binaries archived on the GISinternals website)
This saved me today.
This is really super easy to follow, but there IS a problem that many people comment about. It is common to get the error message:
ImportError: No module named _gdal_array
when you try to assign to a numpy array. Can you try to alter the process to make this go away? I'm installing under py 2.7 in win10. Again, there are many people having problems with this.
Unfortunately I am trying to install this on a client laptop without Administrator access; any suggestions on how to install GDAL manually?
When I run gdalinfo I get a lot of these errors
ERROR 1: Can’t load requested DLL: C:\Users\jor0135\Program_Files\GDAL\gdalplugins\gdal_netCDF.dll
14001: The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail.
I tried to install on Windows 10 and GDAL installed, but the last step didn't work; when I verified, it doesn't run the code. Actually, I have ArcGIS 10.2 installed. Does that cause a problem?
from gdalplugins folder delete ogr_MSSQLSpatial.dll
hi, i have this strange error and i dont know what to do plz help me 🙂
GDAL 2.1.3, released 2017/20/01
Python 3.6.1 |Anaconda 4.4.0 (64-bit)| (default, May 11 2017, 13:25:24) [MSC v.1900 64 bit (AMD64)] on win32
Type “help”, “copyright”, “credits” or “license” for more information.
>>> import gdal
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Valdas\Anaconda3\lib\site-packages\gdal.py", line 2, in <module>
from osgeo.gdal import deprecation_warn
File "C:\Users\Valdas\Anaconda3\lib\site-packages\osgeo\__init__.py", line 21, in <module>
_gdal = swig_import_helper()
File “C:\Users\Valdas\Anaconda3\lib\site-packages\osgeo\__init__.py”, line 17, in swig_import_helper
_mod = imp.load_module('_gdal', fp, pathname, description)
File “C:\Users\Valdas\Anaconda3\lib\imp.py”, line 242, in load_module
return load_dynamic(name, filename, file)
File “C:\Users\Valdas\Anaconda3\lib\imp.py”, line 342, in load_dynamic
ImportError: DLL load failed: The specified module could not be found.
I got the same error. Did you solve it?
I am facing the same issue.
How did you solve it
Thanks for this awesome tutorial. Awaiting more
Hey, when I use the version command it works fine, but only when my directory is located in the GDAL Program Files folder (C:\Program Files\GDAL). What can I change so that the command works even when I'm in a higher folder like C:?
Wow, Thanks for the detailed instructions. Nice Post
Tried installing the GDAL Version 2.2.1 today and kept getting ” ‘gdalinfo’ is not recognized as an internal or external command ” error in command prompt. After checking the system variables a few times for typos, I ended up uninstalling the 2.2.1 version and installing the 1.11.4 version. Seems to be working ok now when I check in CMD prompt. More or less an FYI I suppose. Not sure if others have encountered this behavior …
OS: Windows 10, 64-bit
Python: 2.7.1 (32-bit)
There's a presentational problem under Step 4, to do with the crayon WordPress plugin (it seems). Instead of showing simply 'gdalinfo --version' there's a whole load of other crayon-related gubbins (using Chrome).
Thanks, Sir Nice Article.
Thanks for sharing the post… very informative
Thanks for your good information.
Thanks for this good tutorial
I have successfully installed GDAL in windows
I want to use it with OpenCV library to read multispectral images
But when I CMAKE, it asks me to set two path: GDAL_LIBRARY and GDAL_INCLUDE_DIR
I don't know what the differences are. I set C:\Program Files (x86)\GDAL for both but CMake tells me:
WARNING: Target “opencv_imgcodecs” requests linking to directory “C:/Program Files (x86)/GDAL”. Targets may link only to libraries. CMake is dropping the item.
Could you please help me?
This tutorial is great. Thanks
Thanks for the great help
thanks for the great post. it helped me and my team.
Thanks for the installation tutorial – it worked a treat. One issue though that I have encountered, and other people have posted on this as well: when I type gdalinfo --version on the Windows command line I get:
ERROR 1: Can’t load requested DLL: C:\Program Files\GDAL\gdalplugins\ogr_MSSQLSpatial.dll
126: The specified module could not be found
I had a look and ogr_MSSQLSpatial.dll does exists in the folder (C:\Program Files\GDAL\gdalplugins) yet for some reason it cannot find it.
My environmental variables are as following:
Variable Name: GDAL_DATA
Variable Value: C:\Program Files\GDAL\gdal-data
Variable Name: GDAL_DRIVER_PATH
Variable Value: C:\Program Files\GDAL\gdalplugins
Variable Name: Path
Variable Value: C:\Python27;C:\Program Files\GDAL
I have Python 2.7.15 [MSC v.1500 64 bit (AMD64)] and installed
gdal-202-1500-x64-core.msi and GDAL-2.2.3.win-amd64-py2.7.msi
I should note that:
from osgeo import gdal
from osgeo.gdalconst import *
work fine on the Python 2.7.15 shell and I am able to load and save ENVI datasets.
Is there any way I can fix this ogr_MSSQLSpatial.dll issue.
I installed it (Windows 10, GDAL 1.11.4, released 2016/01/25) as you explained and set all environment variables, then I get this:
FAILURE: no target datasource provided
C:\Program Files\GDAL>ogr2ogr -f GPKG Berlin.gpkg “C:\Users\alo\Desktop\London.osm”
Unable to open datasource `C:\Users\alo\Desktop\London.osm’ with the following drivers.
-> ESRI Shapefile
-> MapInfo File
-> UK .NTF
-> Interlis 1
-> Interlis 2
Potentially, that error could be because the “C:\Users\alo\Desktop\London.osm” file is missing or not accessible.
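If you want to check which format drivers your GDAL build actually shipped with (the OSM driver has to be in the list for .osm input to work), a short illustrative check with the Python bindings is:

from osgeo import ogr
# Print every vector driver this build knows about; look for "OSM" in the output.
for i in range(ogr.GetDriverCount()):
    print(ogr.GetDriver(i).GetName())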
Hey Albert, it is very kind of you to take time and write this awesome guide, I knew how to install Python and GDAL but I couldn’t add the path variables properly, this guide helped me overcome that issue.
Keep up the great work bro, God bless you 🙂
I have installed geopandas. Almost all dependencies have installed. What I did was download GDAL afterwards. So in the Anaconda prompt it is showing "This system cannot find the path specified."
C:\Users\prajapati jyoti>set “GDAL_DRIVER_PATH=” ”
I did everything you said above after installing geopandas. Now I am not able to install geopandas again. Even after setting GDAL_DRIVER_PATH it shows the same. Please help me. I want to use 'crs' when I import geopandas but I am not able to. What should I do now? How do I resolve this problem?
Let me preface this by saying I have not used geopandas before, but 1) Anaconda is a separate Python installation that should not be affected by system variables; this means your conda path should only be governed by conda and not affected by my installation instructions above. 2) You can try to delete the GDAL_DRIVER_PATH variable if you think that might help you install geopandas again. If you want to set a coordinate system, I would recommend using GDAL's projection tools:
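As a minimal sketch of what that can look like with the GDAL Python bindings (the file name and EPSG code below are only placeholders):

from osgeo import gdal, osr

ds = gdal.Open("input.tif", gdal.GA_Update)   # placeholder file name
srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)                      # WGS84, used here only as an example
ds.SetProjection(srs.ExportToWkt())
ds = None                                     # close the dataset so the change is written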
This tutorial with screenshot guidance is so straight forward, easy to understand, and works perfect for me!
As a beginner, this is all I need to know. Great post, very informative and useful! Thanks for sharing your work and keep up the great work.
I really wish that this worked – I'm a bit of a Python newbie and I've been trying to just run "from osgeo import gdal" in one of my scripts for days but nothing is working. After trying the procedure outlined above, I get "error 1: can't load requested DLL…" for my gdalplugins whenever I try "gdalinfo --version". I've downloaded Anaconda Python 2.7 and I have a couple of ArcGIS installations as well, and I've tried copying the osgeo module folder into each site-packages folder, but whenever I run my one wee script with IDLE I get the error "No module named _gdal". Working with Windows.
Please help a grad student in need!
This was so very helpful, thank you for documenting!
Thanks for Sharing
As a beginner, this is all I need! Thanks so much, it really helps!
This actually worked for me man. Thank you so much!!
The gdal test went smoothly.
Awesome post and explained very well. I found your post very useful while installation.
Hello, I installed GDAL and gdalinfo --version works perfectly. However, when I type "python" on the cmd to open the Python shell and try to import gdal, there is an error saying that the module was not found. How do I solve it? Thanks
It actually worked for me. Thanks
After installation of gdal (to enable PostGIS tools) on Windows, sf gives warnings about dependencies?
As gdal continues to release, how to adjust to this tutorial?
Python 2.7.16 is still MSC v.1500, but as of Nov. 2019, a win32, release-1500 build isn't available from GDAL when following the link to Tamas Szekeres' Windows binaries as provided above.
Only 1900 & up are still available.
So, for a newbie, that’s a bit confusing. What must the user know as time moves forward, in order to continue applying this tutorial?
To run the GDAL .py tools, try not to install into C:\Program Files\GDAL; installing it in C:\GDAL is better.
My Python version is MSC v.1916 64 bit (AMD64) on win32, while GDAL only has MSC 1911.
How to resolve this issue?
I have the same problem
I have a 64-bit version of Python on win32; I don't know which installer to use, the x64 or the win32 one.
Very Useful tutorial, thanks for sharing, keep sharing
Good information. Lucky me I found your site by chance (stumbleupon).
I’ve book marked it for later!
An interesting read that we might suggest our students view while learning to install python
Very well done guide. Thanks a lot mate.
very helpful, thank you! 🙂
The core installer does not seem to function on a Windows Server 2016 machine. A double click turns on the stopwatch for a brief moment, then nothing at all. There isn't anything running in the task list either.
Ignore my last reply, the server had the downloaded files blocked. Everything worked just fine once they had been unblocked.
Very detailed and useful tutorial. Thanks a lot
Thanks, the best tutorial!!!!
nice info thanks for the post
command-line once GDAL is nice
This is simple and clear
Very good content. I love it. Keep it up.
very helpful article
Thank you for the article!
screenshots helped me a lot
You are the best thank you so much 😀
Hi! Thanks for the tutorial! I did everything step by step, twice.
“‘gdalinfo’ is not recognized as an internal or external command,
operable program or batch file.”
any suggestions? Has anything changed since you wrote this tutorial?
win 10 home, python 3.8, gdal301.dll
Great piece of information mate, still helping people like me 🙂
it was hard to reach comment section for the post awesomeness
Thank you so much 🙂
Best tutorial, thank you | 1 | 22 |
<urn:uuid:1daa92fb-f5b4-4828-a37a-265e6ba0d5b1> | Irritable bowel syndrome (IBS) is the most prevalent disorder of brain-gut interactions that affects between 5 and 10% of the general population worldwide. The current symptom criteria restrict the diagnosis to recurrent abdominal pain associated with altered bowel habits, but the majority of patients also report non-painful abdominal discomfort, associated psychiatric conditions (anxiety and depression), as well as other visceral and somatic pain-related symptoms. For decades, IBS was considered an intestinal motility disorder, and more recently a gut disorder. However, based on an extensive body of reported information about central, peripheral mechanisms and genetic factors involved in the pathophysiology of IBS symptoms, a comprehensive disease model of brain-gut-microbiome interactions has emerged, which can explain altered bowel habits, chronic abdominal pain, and psychiatric comorbidities. In this review, we will first describe novel insights into several key components of brain-gut microbiome interactions, starting with reported alterations in the gut connectome and enteric nervous system, and a list of distinct functional and structural brain signatures, and comparing them to the proposed brain alterations in anxiety disorders. We will then point out the emerging correlations between the brain networks with the genomic, gastrointestinal, immune, and gut microbiome-related parameters. We will incorporate this new information into a systems-based disease model of IBS. Finally, we will discuss the implications of such a model for the improved understanding of the disorder and the development of more effective treatment approaches in the future.
IBS is one of the most common disorders of brain-gut interaction globally, with prevalence rates between 1.1 and 45% worldwide, and between 5 and 10% for most Western countries and China. In contrast to many chronic non-communicable diseases, such as metabolic, neurological, and cardiovascular diseases and some forms of cancer, there has been no progressive increase in prevalence during the past 75 years, even though prevalence numbers have been fluctuating due to the periodic changes in official symptom criteria. Based on questionnaire data, women are 1.5–3.0 times more likely to have IBS, reflecting a prevalence in women of 14% and in men of 8.9% [2, 3]. However, based on healthcare system utilization, women are up to 2–2.5 times more likely to see a healthcare provider for their symptoms. Based on the current symptom criteria, IBS is defined by chronically recurring abdominal pain associated with altered bowel habits in the absence of detectable organic disease. IBS symptoms can be debilitating in a small number of patients, but are mild to moderate in the majority of affected individuals. Based on this definition, other frequently associated somatic or visceral pain and discomfort, as well as anxiety and depression, are considered so-called comorbid conditions.
The gut-restricted definition of the Rome criteria overlooks the fact that a large number of individuals who meet diagnostic criteria for an anxiety or depressive disorder have IBS and vice versa [7,8,9,10], and a majority of IBS patients show elevated levels of trait anxiety and neuroticism [10,11,12,13], or meet diagnostic criteria for an anxiety disorder . Currently, the commonly associated psychiatric and somatic symptoms are generally referred to as comorbidities, separate from the primary GI diagnosis and not present in all patients. However, detailed patient histories, frequently reveal symptoms of abdominal discomfort, anxiety and behavioral disturbances starting in early childhood in a majority of patients, and a large recent genetic epidemiological study has provided an intriguing explanation for the co-occurrence of abdominal and psychiatric symptoms in IBS patients on the basis of several shared single nucleotide polymorphisms (see paragraph IBS related genes shared with anxiety disorders below) . These new findings are consistent with genetic vulnerabilities affecting both the central and the enteric nervous system (ENS), and argue against the long held linear pathophysiological concepts that emotional factors may cause IBS symptoms, or that chronic IBS gut symptoms lead to anxiety and depression Box 1.
Much of research and drug development in IBS patients has been based on descriptive and symptomatic features, rather than on biology-based disease definitions. These definitions suggest a core abnormality shared by all IBS patients (chronic, recurrent abdominal pain) as well as heterogeneity based on self reports of predominant bowel habit. However, a comprehensive identification of distinct biology-based subgroups of patients including those based on sex, with different underlying pathophysiological components and differential responsiveness to specific therapies, has not been achieved. Subtypes based on bowel habits are generally based on subjective reports of altered bowel habits, without consistent correlates in intestinal transit times, altered regional motility patterns or altered fluid and electrolyte handling by the gut . Even though some of the most commonly used pharmacological and behavioral therapies are targeted at the level of the brain (low dose tricyclic antidepressants , serotonin reuptake inhibitors , cognitive behavioral therapies [19, 20], gut directed hypnosis, stress management ), research and drug development efforts are still predominantly focused on single, usually peripheral targets identified in preclinical models .
Based on such studies and on clinical reports from small samples, an astonishing list of biological abnormalities at various levels of the brain gut axis have been reported in the last 30 years and proposed as potential biomarkers or pathophysiological factors : smooth muscle cells [22, 23], the gut epithelium ; bile acids [25,26,27,28]; immune system activation [29, 30]; neuroendocrine mechanisms ; brain structure and function [32, 33]; stress responsiveness ; affective [35, 36], cognitive [37,38,39,40], pain modulation [41, 42], gene polymorphisms ; and most recently the gut microbiome [43,44,45,46,47]. In addition, there has been a wealth of comprehensive data and clinical reports demonstrating a strong relationship between psychosocial factors and IBS symptoms . However, despite the emergent discoveries about possible peripheral [29, 30] and central [32, 33, 35, 49, 50] components in IBS pathophysiology, the development of animal models with high face and construct validity , the reproduction of visceral hypersensitivity and IBS-relevant features after transplantation of human biospecimen into rodent models, and the recent acceptance of a brain-gut model of IBS , the controversy on the primary role of the nervous system versus peripheral factors still persists in the field [33, 53].
In this review, we will discuss the evidence supporting an integrative brain gut microbiome (BGM) model (Fig. 1) which incorporates a large body of evidence from studies on peripheral and central neurobiological disease mechanisms, brain and gut targeted influences of the exposome, and results from recently reported large scale genetic analyses with relevance for neuronal dysfunction of the CNS (central nervous system) and ENS (enteric nervous system). This systems biological model is consistent with the frequent comorbidity of IBS with other so-called functional GI disorders, and with other chronic pain and psychiatric disorders, in particular with anxiety. We will use this model to discuss the implications for the pathophysiology of IBS, its association with psychiatric symptoms, and the development of more effective treatment approaches in the future.
The brain-gut-microbiome system
The enteric nervous system and gut connectome
The ENS is a vast network of different types of intrinsic enteric neurons and glia which are “sandwiched” between the mucosa, and the circular and longitudinal muscle layers of the gut, containing motor neurons, intrinsic primary afferent neurons, and interneurons. Nearly every neurotransmitter class found in the CNS is present in the ENS . These neurons are organized into two interconnected networks, the myenteric and submucosal plexus, which regulate motility and secretion respectively in a coordinated fashion . Different classes of neurons are chemically coded by different combinations of neurotransmitters and modulators, many of which are also found in the CNS .
Within the gut, the ENS is closely connected with the gut-based immune system, endocrine system, glial and epithelial cells, making up the gut connectome (Fig. 2). The term connectome reflects close proximity, connectivity, and functional interactions between many cell types and functions in the gut that interact with ENS and CNS.
Beyond the gut, the ENS is connected with the spinal cord, brainstem, and brain via primary spinal and vagal afferents, and postganglionic sympathetic and vagal efferent fibers [58, 59]. Although the ENS is capable of regulating all GI functions without input from the CNS, the CNS (brain and spinal cord) has strong modulatory functions in regulating intestinal behaviors in accordance with the overall state of the organisms and homeostatic perturbations .
Even though the ENS is often referred to as the "second brain", evolutionarily speaking the ENS can be traced back to the cnidaria phylum, epitomized by the hydra genus 650 million years ago. Historically, it has been classified as a nerve net, but evidence has shown that specialized neurons with neurotransmitters such as serotonin, catecholamines, and neuropeptides are also involved [63, 64]. In the hydra, the main functions of the ENS are peristalsis, mixing movements, and expulsion, in addition to avoidance behaviors. The process of cephalization and the development of bilateria (i.e., organisms through evolution with a head/tail [anterior/posterior axis] and belly/back [dorsal/ventral axis]) led to the development of more complex neuronal systems, most notably the CNS around a central region and highly developed brains. Thus, from an evolutionary standpoint, the ENS can be considered "the first brain" [56, 62].
ENS related genes
A recent profiling of the human ENS at single-cell resolution highlighted important genes related to neuropathic, inflammatory, and extraintestinal diseases . Overlapping with the largest GWAS of IBS to date , CADM2, encoding the cell-adhesion molecule, was highly expressed in myenteric but not mucosal glia . The known functions of myenteric glia include modulating myenteric neuron activity, regulating oxidative stress and neuroinflammation, providing trophic support, gliogenesis, and neurogenesis . CADM2 encodes a member of synaptic cell adhesion molecules (SynCAMs) involved in synaptic organization and signaling , and cell adhesion-mediated mechanisms underlying the communication between glia and neurons in the ENS are important in understanding of ENS function in health and disease. For example, perturbed communication between enteric glia and neurons may play a role in dysfunctional ENS circuits in IBS . The mechanisms underlying neuronal-glia signaling of the ENS in the context of gastrointestinal disorders, IBS, and visceral pain has recently been extensively reviewed [66, 68]. It is worth noting that CADM2 has been implicated in a wide range of psychological and neurological traits often observed in IBS patient including, but not limited to psycho-behavioral traits, risk-taking behavior, nervousness-like traits, and neurodevelopmental disorders (e.g., intellectual disability and autism spectrum disorder) . Moreover, SynCAMs have a large role in synaptogenesis, axon guidance, and synaptic plasticity at a basic neurodevelopmental level which has the potential to affect a variety of disorders .
Similarly, NCAM1 is another gene found in the largest GWAS to date and has been implicated in the development of the ENS. In a similar manner to CADM2, NCAM1 has been shown to play a role in the ENS regarding cell migration, axon growth, neuronal plasticity and fasciculation , but has not been as thoroughly investigated as CADM2. A recent cross-tissue atlas applied single-nucleus RNA sequencing from eight healthy human organs showed that a cluster of genes including NCAM1 and CADM2 were involved particularly with cognitive/psychiatric symptoms including general cognitive ability, risk-taking behavior, intelligence, and neuroticism . Even though the study did not contain tissue samples from the intestinal regions of the ENS, these genes involved in cognitive/psychiatric functions were highly expressed in Schwann cells in the esophagus mucosa, and interstitial cells of Cajal (ICCs) and neurons in the esophagus muscularis .
The gut microbiome
The term gut microbiome refers to the 40 trillion microbial organisms (bacteria, fungi, and archae) and their millions of genes that live throughout the gastrointestinal tract, from the oral cavity to the rectum, with the highest concentration and diversity in the large bowel . The symbiotic interactions of the 3 groups of microorganisms within the microbiome, and with the extensive gut virome are incompletely understood [74, 75]. The characterization of these microorganisms in IBS to date is primarily based on identification of relative abundances and diversity using 16S rRNA sequencing techniques with limited resolution beyond the species level. We refer to several recent review articles on this topic [76, 77]. The extensive literature reveals inconsistent findings and a causative relationship of specific microorganisms with IBS symptoms has not been demonstrated. However, both preclinical and some clinical studies have demonstrated a significant effect of psychosocial stress on the relative abundance of gut microbes which is mediated both by stress-induced alterations in regional transit and secretion, and by direct effects of norepinephrine and possibly other signaling molecules released from gut cells on gut microbial gene expression and virulence , suggesting the possibility that the microbiome in subgroups of IBS patients with greater stress reactivity may contribute to certain symptoms .
Brain Connectome alterations in IBS
A growing body of research paired with clinical observations supports a critical role of the brain in the generation and maintenance of IBS symptoms. Regardless of primary symptom triggers, the brain is ultimately responsible for constructing and generating the conscious perception of abdominal pain, discomfort, and anxiety based on sensory input from the gut. Stressful and traumatic events during early life increase chances of developing IBS, and psychosocial stressors in adulthood play a crucial role during the first onset, symptom flare, and perceived severity of the symptoms ; centrally targeted pharmacological treatments and cognitive behavioral strategies have been some of the most effective IBS treatment strategies [3, 16, 81].
Specific brain functions such as sensory processing and modulation, emotion regulation, or cognition are the result of dynamic interactions of distributed brain areas operating in large-scale networks. As summarized in Fig. 3C and Table 1, these central networks and their properties have been assessed by neuroanatomical and neurophysiological studies in animals , as well as by a wealth of studies using different structural and functional brain imaging techniques and analyses in humans [82,83,84,85,86].
In humans, several types of networks have been reported (summarized in Table 1): functional brain networks based on evoked responses or intrinsic connectivity of the brain during rest [82, 83]; structural networks based on gray matter parameters and white matter properties; and anatomical networks based on white matter connectivities . Both evoked and resting state studies performed in patients with IBS have demonstrated abnormalities in regions and task-related networks linked to salience detection [90, 91], emotional arousal [92,93,94,95], central autonomic control [38, 96,97,98], central executive control [90, 94, 99], and sensorimotor processing [38, 100, 101]. IBS-related alterations in these networks have provided plausible neurobiological substrates for several information-processing abnormalities reported in patients with IBS, such as stress hyperresponsiveness, biased threat appraisal, expectancy of outcomes, cognitive inflexibility, autonomic hyperarousal (emotional arousal and central autonomic networks), symptom-focused attention (central executive network) [33, 53] and cognitive inflexibility (central executive network). Supporting the concept of shared pathophysiological factors (so called p-factors), several reported brain network alterations have also been described in other chronic pain conditions and in anxiety disorders (see Table 1).
The Salience Network
The salience network (SN) is integral in mediating the switching of activation between the default mode network (DMN) and central executive network, coordinating and adjusting physiologic/behavioral responses to internal and environmental perturbations of homeostasis . Visceral inputs to the affective-motivational component of the SN converge onto the anterior insula coordinating response selection and conflict monitoring with the dACC . Controlled rectal distention in IBS subjects has been shown consistently to result in increased engagement of the core hubs of the SN which are associated with increased affective, emotional, and arousal processes [104,105,106]. Reduced neurokinin-1 receptor (NK-1R) availability in the dACC, reflecting NK-1R endocytosis in response to substance P release, was found to be associated with duration of IBS symptoms . Increased substance P release is thought to result from noxious visceral stimuli and increased engagement of endogenous pain or stress inhibition systems . In adolescent girls with IBS, lower gray matter volume of the dACC has been observed , and greater salience-sensorimotor connectivity quantified by multiple neuroimaging techniques predicts a lack of symptom alleviation over 3–12 months in patients with IBS .
The default mode network (DMN)
The DMN's role in pain perception is known to be opposite to that of the SN, such that the DMN is suppressed when attention is placed on present sensory stimuli, and is activated when attention is engaged in mind wandering (i.e., thoughts unrelated to the present sensory environment). Studies in chronic pain subjects have shown altered functional connectivity and topological reorganization in various regions, consistent with DMN dysregulation. Overall, neuroimaging research suggests decreased activity of the DMN in patients with IBS. Lower integrity of anatomical connectivity and resting-state functional connectivity, and lower morphological integrity within the DMN (between the aMPFC and PCC), were found to be predictive of sustained IBS symptom severity over 3–12 months. Rectal lidocaine administration in IBS subjects was associated with decreased pain perception and with increased coherence in the DMN, supporting an involvement of the DMN in visceral hypersensitivity in patients with IBS.
The Sensorimotor Network
Similar to other chronic pain disorders, imaging studies in IBS subjects have shown alterations of the sensorimotor network (SMN), consistent with alterations in central processing and modulation of viscerosensory and somatosensory information [32, 100, 109, 114,115,116,117]. This network consists of the primary motor cortex, area 24 of the cingulate cortex, premotor cortex, supplementary motor area (SMA), posterior operculum/insula, as well as primary and secondary sensory cortices in the parietal lobe. In addition, lower gray matter volume in the basal ganglia and thalamus, as well as greater functional connectivity within the SMN, have been observed in young children with chronic pain. Findings in adults include greater intrinsic functional connectivity, greater cortical thickness of the posterior insula positively associated with symptom duration, increased functional coupling of area 24 and the thalamus, and greater SMN connectivity to the SN predicting sustained symptoms over 3–12 months. When viewed together, current evidence suggests patients with IBS have functional, morphological, and microstructural SMN alterations, which are likely to play a role in the increased perception of both visceral and somatic stimuli.
The central autonomic network
The central autonomic network (CAN) regulates visceromotor, neuroendocrine, pain, and behavioral responses essential for survival . Afferents project through the spinal cord and eventually arrive at the main homeostatic processing sites in the brainstem/central autonomic network (including hypothalamus, amygdala, and PAG), and higher cortical processing and modulatory regions . Historically it has been difficult to non-invasively study the brain stem nuclei in humans due to the limited spatial resolution of neuroimaging methods, but new imaging protocols with a resolution of 1mm3 and below are allowing new insights .
The CAN is closely connected by vagal and sympathetic efferent projections with the ENS, and afferents from the ENS send viscerosensory signals back to the brain. The hubs of the SN also participate in autonomic control via descending projections to the amygdala (tagging emotional valence and engaging autonomic survival responses to behaviorally relevant stimuli), hypothalamus (regulating homeostasis and a pattern generator for the stress response) and brainstem structures including the periaqueductal gray (PAG) and locus coeruleus (LC). The PAG is a key structure for integrating autonomic, pain modulatory/analgesic, and motor responses to stress , and the LC-norepinephrine system plays a central role in behavioral arousal and stress responses [122,123,124].
When viewed together, based on a large number of structural, and functional (resting state and evoked) studies, IBS patients show alterations in several brain networks related to salience assessment, attention, stress perception and responsiveness, and sensory processing. The responsiveness and connectivity of these networks are modulated by several vulnerability genes, which are shared both with ENS genes, and with genes identified in anxiety disorders. Based on these findings, we hypothesize that perturbations of homeostasis arising from the exposome, in the form of psychosocial and gut-targeted stressors interact with genetic factors to a spectrum of clinical phenotypes, ranging from gut symptoms to anxiety.
IBS-related genes shared with anxiety disorders
Prior to the availability of biobank-scale data, many candidate gene studies uncovered potential pathways underlying IBS symptoms. These pathways have been extensively reviewed and include the serotonin pathway, SCN5A and intestinal channelopathies, and sucrase-isomaltase malabsorption. As serotonin is secreted from enteroendocrine cells and activates enteric sensory and motor neurons, expression-level alterations in serotonin receptors and transporters are likely to play a potential role in visceral hypersensitivity, pain, intestinal motility, and secretion. SCN5A encodes the voltage-gated sodium Nav1.5 channel present on interstitial cells of Cajal (ICCs) in the ENS [31, 126]. Genetic mutations in this gene have been shown to impair peristalsis and cause constipation, even though slow-transit constipation is an uncommon finding in IBS-C. Lastly, two faulty copies of the SI gene result in reduced disaccharidase activity responsible for degradation of sucrose and starch, resulting in diarrhea and gas production in the large intestine from bacterial fermentation; this is termed congenital sucrase-isomaltase deficiency (CSID) and should not be considered IBS. Even though these findings have established causal relationships between specific genetic abnormalities and non-specific IBS-like GI symptoms in a small number of affected individuals, it is highly unlikely that they play an important role in the great majority of patients.
Recently, the largest genome wide association study with 53,000 cases of IBS across multiple cohorts was completed . In this study, the strongest risk factors for IBS included long-term or recurring antibiotic exposure in childhood, somatic pain conditions (back pain, limb pain, headaches), psychiatric conditions (anxiety, depression, excessive worrying) and fatigue. The genes included CADM2, BAG6, PHF2/FAM120AOS, NCAM1, CKAP2/TPTE2P3, and DOCK9. Four of the six loci are highly implicated in anxiety/mood disorders and there was a strong genome-wide genetic correlation of IBS with anxiety, neuroticism, depression, insomnia, and schizophrenia. Moreover, the high genetic correlations persisted after taking into account individuals with phenotypic overlap, suggesting common etiological pathways between IBS and anxiety/mood disorders. Implication of the central nervous system was further suggested by the finding that the six identified loci regulate gene expression in many genes primarily expressed in the brain. As already mentioned under ENS above, the genes NCAM1 and CADM2 were two genes which regulate neural circuit formation and influence changes in white matter microstructure in IBS and mood disorders [128,129,130]. Specifically, they regulate synaptic cell adhesion molecules, which are present in dorsal root ganglia sensory neurons throughout development, mediate adhesion of sensory axons, and induce neurite outgrowth . Mechanisms relating to brain development were further implicated by the genes PHF2 (i.e., proper expansion of neural progenitors) and DOCK9 (i.e., dendritic development of the hippocampus), but have not yet been studied in patients with IBS [131,132,133].
Importantly, the heritability of IBS was estimated to be a modest 5.8%, suggesting that perturbation of the brain-gut axis by environmental factors arising from the exposome such as early adversity, psychosocial stress, learned behaviors, diet, and possibly dysbiosis play a prominent role.
Considering these new genetic findings and the reported frequent comorbidities of IBS with other chronic pain and psychiatric conditions it is becoming increasingly recognized that IBS is part of a constellation of symptoms that occur on a larger spectrum of altered brain-body interactions [134, 135]. This concept is consistent with the “somatic symptom disorder” concept, previously proposed . The main co-occurring symptoms include hypersensitivity to multiple internal and external sensory stimuli, which could explain the observed association with a variety of seemingly unrelated external and internal factors, previously reported. Other co-occurring symptoms include mood problems, fatigue, and problems with sleep onset and maintenance, as well as memory disturbance . The neurogenetic basis integrating mood/anxiety and central amplification of sensory inputs (“central sensitization”) based on many of these genetic hits have been well established, which will be discussed below.
Known functions of NCAM1, DOCK9, and PHF2 and possible roles in IBS pathophysiology are summarized in Table 2.
Central sensitization and comorbid chronic pain conditions
The primary mechanism for the core symptom of persistent, chronically recurring abdominal pain that patients with IBS report is thought to result from alterations in the central processing of sensory input from the gut, also referred to as central sensitization [134, 136]. The term was originally coined to represent the specific spinal mechanisms responsible for the amplification of nociceptive signaling involving spinal activation of the NMDA receptor [137, 138], and is present in various chronic pain disorders such as chronic neuropathic pain, fibromyalgia, headaches, and IBS [6, 134, 139,140,141]. Today, it is understood that spinal and supraspinal mechanisms both play key roles in the development and maintenance of central sensitization. Based on rodent models of pain, plausible spinal mechanisms include alterations in converging sensory input from different sites on the GI tract and body, temporal and spatial summation, reduced endogenous dorsal horn inhibition, and glial cell activation. Based on human brain imaging studies, supraspinal mechanisms include an altered balance between facilitatory and inhibitory endogenous pain modulation influences, hyperconnectivity between brain networks, alterations of gray matter architecture, elevated CSF glutamate and substance P levels, reduced GABAergic transmission, altered noradrenergic signaling/receptors, and glial cell activation [122, 134].
The large overlap - up to a 4.27 odds ratio - between psychiatric phenotypes (primarily anxiety and depression [136, 142]) and IBS and other chronic pain disorders, as well as genetic overlap [8, 143,144,145] mentioned earlier, suggests central sensitization as a possible shared pathophysiological factor (p factor) [134, 146,147,148]. The concept of central sensitization was introduced in psychological research in the 1990s based on the observation that highly sensitive persons (HSPs) often share a history of early adversity, psychological profile of introversion (“neuroticism”), and greater emotionality . Patients with IBS are significantly more likely to exhibit qualities of HSPs, and show central sensitization which is expressed as general sensory hypersensitivity . The association between chronic pain disorders, psychiatric symptoms, and mechanisms of central sensitization is likely due to the above-mentioned supraspinal alterations, including monoamine neurotransmitter systems (i.e., serotonin, dopamine, noradrenaline), the amino acid GABA, and brain regions underlying both pain transmission/modulation and mood disorders [151, 152]. Striato-thalamic-frontal cortical pathways including the prefrontal cortex, amygdala, nucleus accumbens, and thalamic nuclei are key hubs, and alterations in neuronal firing and communication underlie sensory sensitivity and psychiatric symptoms including altered perception, arousal, cognition, and mood [152,153,154]. Behaviorally, chronification of central sensitization and negative mood states have been proposed to be in the same continuum of aversion, such that pain motivates the avoidance of further injury, and anxiety promotes behaviors that diminish anticipated danger .
An extensive literature supports the importance of early programming by early adverse life (EAL) events for the development not only of IBS, but also of other chronic pain conditions and psychiatric syndromes [155, 156]. Perturbations to the developing brain play a large role in sensitizing cortical nociceptive circuitry, with the most mechanistic study in humans showing larger event-related potentials (ERPs) to nociceptive stimuli, but not tactile stimuli, in infants exposed to many invasive, skin-breaking, painful procedures and morphine. Moreover, up to 68.4% of children who are exposed to early-life traumatic events such as NICU hospitalization can develop chronic pain by age 10. Greater amounts of pain-related stressors, painful procedures, and morphine are associated with lower global gray matter volumes throughout childhood [159, 160]. In addition to the well-documented changes in stress response systems [161,162,163], the effect of early-life dietary influences on the gut microbiome and the BGM axis has received increasing attention, even though a direct link with chronic abdominal pain has not been established [164, 165].
Clinical and therapeutic implications
Despite a decades-long effort by the pharmaceutical industry, a large number of IBS candidate drugs identified and validated in preclinical models and targeted at both central and gut mechanisms have failed, either due to lack of efficacy or serious side effects . Of the small number of new drugs obtaining FDA approval, efficacy above placebo has generally not exceeded 10% in phase 3 trials. The great majority of available, FDA approved IBS medications are targeted at intestinal secretion and motility, and the gut microbiome with the goal to improve altered bowel habits and bloating-type symptoms in subgroups of patients .
Pharmacological treatments have been clinically divided into first- and second-line approaches, and are aimed at specific symptoms. Moderate-quality data has shown low-dose tricyclic antidepressants and SSRIs to be effective for pain (primarily the former) and comorbid anxiety and depression (primarily the latter) [16, 18]. As 5-HT receptor-mediated signaling plays important roles both in the brain and in the gut, there is a good rationale for IBS treatments targeted at these receptors. 5-HT released from enterochromaffin cells mediates many GI functions including peristalsis, secretion, pain, and nausea via receptors on ENS and vagal nerve endings. For example, 5-HT3 receptor antagonists acting on both gut- and brain-located 5-HT3 receptors, such as alosetron and ramosetron, have shown effectiveness in slowing colonic transit, improving diarrhea, and reducing visceral pain in well-designed randomized controlled trials. High-quality preclinical data has shown that antagonism of 5-HT3 receptors on the area postrema and vagus nerve reduces visceral pain and diarrhea [16, 18, 166], and older data have demonstrated anxiolytic effects [167,168,169].
Despite evidence obtained in rodent models of IBS, efforts to develop peripheral visceral analgesics or central stress modulators (antagonists for CRF-1 and NK-1 receptors) have failed to show therapeutic benefits in IBS. This is surprising, as multiple preclinical studies as well as a human brain imaging study had demonstrated effectiveness of the CRF-R1 antagonist Emicerfont (GW876008) on evoked visceral pain and on central stress circuits [170, 171]. Because of these disappointing results, increased attention has been shifted to behavioral treatments, including gut-directed hypnosis [21, 81, 172,173,174,175], mindfulness-based stress reduction , and cognitive behavioral approaches [19, 20, 177,178,179]. Several of these therapeutic approaches have shown promise in improving IBS symptoms, and a few studies have demonstrated associated neurobiological effects on brain mechanisms in salience, emotional arousal, and executive networks [172, 177].
As access to therapists specialized in these behavioral IBS treatments is limited, and traditional delivery is time-consuming, web-based versions of these therapies have been evaluated, some of which have been FDA approved and are becoming available to patients . In addition, several randomized controlled studies have shown some benefits of certain dietary interventions (low FODMAP diet ), and microbiome-targeted treatments (probiotics, antibiotics) .
Summary and conclusions
Even though SSRIs and bowel-movement-targeted therapies are helpful in subsets of patients, the model of IBS presented in this review provides a rationale for a multidisciplinary therapeutic approach including pharmacological, behavioral, and dietary approaches. Current evidence suggests that there are significant interindividual variations in the response to such therapies, related to factors including the predominant bowel habit subtype, the severity of gut and psychiatric symptoms, and possibly the presence of gut microbial alterations.
There is growing evidence from clinical, preclinical, and genetic studies supporting the existence of shared p factors in IBS and often comorbid gastrointestinal and non-gastrointestinal pain conditions, as well as psychiatric conditions. Despite shared vulnerability genes, different influences from the environment (exposome) in particular during childhood ultimately shape the specific clinical phenotype. The emerging disease model can explain the failure of reductionistic single mechanism targeted treatment approaches, and is consistent with the evidence for the effectiveness of personalized multidisciplinary approaches involving behavioral, dietary, and pharmacological interventions.
irritable bowel syndrome (IBS); brain-gut-microbiome (BGM); gastrointestinal (GI); enteric nervous system (ENS); central nervous system (CNS); synaptic cell adhesion molecules (SynCAMs); default mode network (DMN); salience network (SN); sensorimotor network (SMN); central autonomic network (CAN); central executive network (CEN); locus coeruleus (LC); periaqueductal grey (PAG); dorsal anterior cingulate cortex (dACC); posterior cingulate cortex (PCC); N-methyl-D-aspartate (NMDA); gamma-aminobutyric acid (GABA); cerebrospinal fluid (CSF); early adverse life events (EAL); serotonin (5-HT); selective serotonin reuptake inhibitor (SSRI); long-term potentiation (LTP); event-related potentials (ERPs).
Lovell RM, Ford AC. Global prevalence of and risk factors for irritable bowel syndrome: a meta-analysis. Clin Gastroenterol Hepatol. 2012;10:712–21.
Enck P, Aziz Q, Barbara G, Farmer AD, Fukudo S, Mayer EA, et al. Irritable bowel syndrome. Nat Rev Dis Prim. 2016;2:16014.
Ford AC, Sperber AD, Corsetti M, Camilleri M. Irritable bowel syndrome. Lancet 2020;396:1675–88.
Drossman DA, Li Z, Andruzzi E, Temple RD, Talley NJ, Grant Thompson W, et al. U. S. Householder survey of functional gastrointestinal disorders. Dig Dis Sci. 1993;38:1569–80.
Drossman DA. Functional gastrointestinal disorders: history, pathophysiology, clinical features, and Rome IV. Gastroenterology 2016;150:1262–79.
Simrén M, Törnblom H, Palsson OS, Van Oudenhove L, Whitehead WE, Tack J. Cumulative effects of psychologic distress, visceral hypersensitivity, and abnormal transit on patient-reported outcomes in irritable bowel Syndrome. Gastroenterology 2019;157:391–402.e2.
Banerjee A, Sarkhel S, Sarkar R, Dhali GK. Anxiety and depression in Irritable Bowel Syndrome. Indian J Psychol Med. 2017;39:741–5.
Eijsbouts C, Zheng T, Kennedy NA, Bonfiglio F, Anderson CA, Moutsianas L, et al. Genome-wide analysis of 53,400 people with irritable bowel syndrome highlights shared genetic pathways with mood and anxiety disorders. Nat Genet. 2021;53:1543–52.
Bengtson M-B, Aamodt G, Vatn MH, Harris JR. Co-occurrence of IBS and symptoms of anxiety or depression, among Norwegian twins, is influenced by both heredity and intrauterine growth. BMC Gastroenterol. 2015;15:9.
Lee C, Doo E, Choi JM, Jang S-H, Ryu H-S, Lee JY, et al. The increased level of depression and anxiety in irritable bowel syndrome patients compared with healthy controls: systematic review and meta-analysis. J Neurogastroenterol Motil. 2017;23:349–62.
Muscatello MRA, Bruno A, Mento C, Pandolfo G, Zoccali RA. Personality traits and emotional patterns in irritable bowel syndrome. World J Gastroenterol. 2016;22:6402–15.
Fond G, Loundou A, Hamdani N, Boukouaci W, Dargel A, Oliveira J, et al. Anxiety and depression comorbidities in irritable bowel syndrome (IBS): a systematic review and meta-analysis. Eur Arch Psychiatry Clin Neurosci. 2014;264:651–60.
Hu Z, Li M, Yao L, Wang Y, Wang E, Yuan J, et al. The level and prevalence of depression and anxiety among patients with different subtypes of irritable bowel syndrome: a network meta-analysis. BMC Gastroenterol. 2021;21:23.
Fadgyas-Stanculete M, Buga A-M, Popa-Wagner A, Dumitrascu DL. The relationship between irritable bowel syndrome and psychiatric disorders: from molecular changes to clinical manifestations. J Mol Psychiatry. 2014;2:4.
Mayer EA, Bushnell MC. Functional Pain Syndromes: Presentation and Pathophysiology. Lippincott Williams & Wilkins; 2015.
Camilleri M. Diagnosis and treatment of irritable bowel syndrome: a review. J Am Med Assoc. 2021;325:865–77.
Rahimi R, Nikfar S, Rezaie A, Abdollahi M. Efficacy of tricyclic antidepressants in irritable bowel syndrome: a meta-analysis. World J Gastroenterol. 2009;15:1548–53.
Black CJ, Yuan Y, Selinger CP, Camilleri M, Quigley EMM, Moayyedi P, et al. Efficacy of soluble fibre, antispasmodic drugs, and gut–brain neuromodulators in irritable bowel syndrome: a systematic review and network meta-analysis. Lancet Gastroenterol Hepatol. 2020;5:117–31.
Kinsinger SW. Cognitive-behavioral therapy for patients with irritable bowel syndrome: current insights. Psychol Res Behav Manag. 2017;10:231–7.
Lackner JM, Jaccard J, Keefer L, Brenner DM, Firth RS, Gudleski GD, et al. Improvement in gastrointestinal symptoms after cognitive behavior therapy for refractory irritable bowel Syndrome. Gastroenterology 2018;155:47–57.
Flik CE, Bakker L, Laan W, van Rood YR, Smout AJPM, de Wit NJ. Systematic review: The placebo effect of psychological interventions in the treatment of irritable bowel syndrome. World J Gastroenterol. 2017;23:2223–33.
Takeshita E, Matsuura B, Dong M, Miller LJ, Matsui H, Onji M. Molecular characterization and distribution of motilin family receptors in the human gastrointestinal tract. J Gastroenterol. 2006;41:223–30.
Miller LJ. Characterization of cholecystokinin receptors on human gastric smooth muscle tumors. Am J Physiol. 1984;247:G402–G410.
Bischoff SC, Barbara G, Buurman W, Ockhuizen T, Schulzke J-D, Serino M, et al. Intestinal permeability–a new target for disease prevention and therapy. BMC Gastroenterol. 2014;14:189.
Wei W, Wang H-F, Zhang Y, Zhang Y-L, Niu B-Y, Yao S-K. Altered metabolism of bile acids correlates with clinical parameters and the gut microbiota in patients with diarrhea-predominant irritable bowel syndrome. World J Gastroenterol. 2020;26:7153–72.
Vijayvargiya P, Busciglio I, Burton D, Donato L, Lueke A, Camilleri M. Bile acid deficiency in a subgroup of patients with irritable bowel syndrome with constipation based on biomarkers in serum and fecal samples. Clin Gastroenterol Hepatol. 2018;16:522–7.
Slattery SA, Niaz O, Aziz Q, Ford AC, Farmer AD. Systematic review with meta-analysis: the prevalence of bile acid malabsorption in the irritable bowel syndrome with diarrhoea. Aliment Pharm Ther. 2015;42:3–11.
Bajor A, Törnblom H, Rudling M, Ung K-A, Simrén M. Increased colonic bile acid exposure: a relevant factor for symptoms and treatment in IBS. Gut 2015;64:84–92.
Hughes PA, Zola H, Penttila IA, Blackshaw LA, Andrews JM, Krumbiegel D. Immune activation in irritable bowel syndrome: can neuroimmune interactions explain symptoms? Am J Gastroenterol. 2013;108:1066–74.
Ohman L, Simrén M. Pathogenesis of IBS: role of inflammation, immunity and neuroimmune interactions. Nat Rev Gastroenterol Hepatol. 2010;7:163–73.
Mawe GM, Hoffman JM. Serotonin signalling in the gut—functions, dysfunctions, and therapeutic targets. Nat Rev Gastroenterol Hepatol. 2013;10:473–86.
Mayer EA, Gupta A, Kilpatrick LA, Hong JY. Imaging brain mechanisms in chronic visceral pain. Pain 2015;156:S50–S63.
Mayer EA, Labus J, Aziz Q, Tracey I, Kilpatrick L, Elsenbruch S, et al. Role of brain imaging in disorders of brain-gut interaction: a Rome Working Team Report. Gut 2019;68:1701–15.
Larauche M, Mulak A, Taché Y. Stress and visceral pain: from animal models to clinical therapies. Exp Neurol. 2012;233:49–67.
Elsenbruch S. Abdominal pain in Irritable Bowel Syndrome: a review of putative psychological, neural and neuro-immune mechanisms. Brain Behav Immun. 2011;25:386–94.
Elsenbruch S, Schmid J, Bäsler M, Cesko E, Schedlowski M, Benson S. How positive and negative expectations shape the experience of visceral pain: an experimental pilot study in healthy women. Neurogastroenterol Motil. 2012;24:914–e460.
Aizawa E, Sato Y, Kochiyama T, Saito N, Izumiyama M, Morishita J, et al. Altered cognitive function of prefrontal cortex during error feedback in patients with irritable bowel syndrome, based on FMRI and dynamic causal modeling. Gastroenterology 2012;143:1188–98.
Fukudo S. Stress and visceral pain: focusing on irritable bowel syndrome. Pain 2013;154(Suppl 1):S63–S70.
Kennedy PJ, Clarke G, O’Neill A, Groeger JA, Quigley EMM, Shanahan F, et al. Cognitive performance in irritable bowel syndrome: evidence of a stress-related impairment in visuospatial memory. Psychol Med. 2014;44:1553–66.
Tanaka Y, Kanazawa M, Fukudo S, Drossman DA. Biopsychosocial model of irritable bowel syndrome. J Neurogastroenterol Motil. 2011;17:131–9.
Piché M, Arsenault M, Poitras P, Rainville P, Bouin M. Widespread hypersensitivity is related to altered pain inhibition processes in irritable bowel syndrome. Pain 2010;148:49–58.
Wilder-Smith CH. The balancing act: endogenous modulation of pain in functional gastrointestinal disorders. Gut 2011;60:1589–99.
Ringel Y, Ringel-Kulka T. The intestinal microbiota and Irritable Bowel Syndrome. J Clin Gastroenterol. 2015;49(Suppl 1):S56–S59.
Ringel Y. The gut microbiome in irritable bowel syndrome and other functional bowel disorders. Gastroenterol Clin North Am. 2017;46:91–101.
Tap J, Derrien M, Törnblom H, Brazeilles R, Cools-Portier S, Doré J, et al. Identification of an intestinal microbiota signature associated with severity of irritable bowel syndrome. Gastroenterology 2017;152:111–23.e8.
Zhuang X, Xiong L, Li L, Li M, Chen M. Alterations of gut microbiota in patients with irritable bowel syndrome: A systematic review and meta-analysis. J Gastroenterol Hepatol. 2017;32:28–38.
Bennet SMP, Ohman L, Simren M. Gut microbiota as potential orchestrators of irritable bowel syndrome. Gut Liver. 2015;9:318–31.
Lackner JM. The role of psychosocial factors in gastrointestinal disorders. Gut. 2014;33:104–16.
Grundy L, Erickson A, Brierley SM. Visceral Pain. Annu Rev Physiol. 2019;81:261–84.
Al Omran Y, Aziz Q. Functional brain imaging in gastroenterology: to new beginnings. Nat Rev Gastroenterol Hepatol. 2014;11:565–76.
Greenwood-Van Meerveld B, Prusator DK, Johnson AC. Animal models of visceral pain: pathophysiology, translational relevance and challenges. Am J Physiol Gastrointest Liver Physiol. 2015;308:G885–G903.
Keefer L, Ballou SK, Drossman DA, Ringstrom G, Elsenbruch S, Ljótsson B. A Rome Working Team Report on brain-gut behavior therapies for disorders of gut-brain interaction. Gastroenterology 2022;162:300–15.
Mayer EA, Labus JS, Tillisch K, Cole SW, Baldi P. Towards a systems view of IBS. Nat Rev Gastroenterol Hepatol. 2015;12:592–605.
Furness JB, Callaghan BP, Rivera LR, Cho H-J. The enteric nervous system and gastrointestinal innervation: integrated local and central control. Adv Exp Med Biol. 2014;817:39–71.
Furness JB. The Enteric Nervous System. London, England: Blackwell Publishing; 2005.
Furness JB. The enteric nervous system and neurogastroenterology. Nat Rev Gastroenterol Hepatol. 2012;9:286–94.
Bohórquez DV, Liddle RA. The gut connectome: making sense of what you eat. J Clin Invest. 2015;125:888–90.
Margolis KG, Gershon MD, Bogunovic M. Cellular organization of neuroimmune interactions in the gastrointestinal tract. Trends Immunol. 2016;37:487–501.
Rao M, Gershon MD. The bowel and beyond: the enteric nervous system in neurological disorders. Nat Rev Gastroenterol Hepatol. 2016;13:517–28.
Browning KN, Alberto, Travagli R. Central control of gastrointestinal motility. Curr Opin Endocrinol Diabetes Obes. 2019;26:11–6.
Gershon M. The Second Brain: The Scientific Basis of Gut Instinct and a groundbreaking new understanding of nervous disorders of the stomach and intestine. HarperCollins; 1998.
Furness JB, Stebbing MJ. The first brain: Species comparisons and evolutionary implications for the enteric and central nervous systems. Neurogastroenterol Motil. 2018;30. https://doi.org/10.1111/nmo.13234.
Kass-Simon G, Pierobon P. Cnidarian chemical neurotransmission, an updated overview. Comp Biochem Physiol A Mol Integr Physiol. 2007;146:9–25.
Westfall JA, Elliott SR, MohanKumar PS, Carlin RW. Immunocytochemical evidence for biogenic amines and immunogold labeling of serotonergic synapses in tentacles of Aiptasia pallida (Cnidaria, Anthozoa). Invertebr Biol. 2005;119:370–8.
Drokhlyansky E, Smillie CS, Van Wittenberghe N, Ericsson M, Griffin GK, Eraslan G, et al. The human and mouse enteric nervous system at single-cell resolution. Cell 2020;182:1606–22.e23.
Seguella L, Gulbransen BD. Enteric glial biology, intercellular signalling and roles in gastrointestinal disease. Nat Rev Gastroenterol Hepatol. 2021;18:571–87.
Biederer T, Sara Y, Mozhayeva M, Atasoy D, Liu X, Kavalali ET, et al. SynCAM, a synaptic adhesion molecule that drives synapse assembly. Science 2002;297:1525–31.
Morales-Soto W, Gulbransen BD. Enteric Glia: A new player in abdominal pain. Cell Mol Gastroenterol Hepatol. 2019;7:433–45.
Pasman JA, Chen Z, Smit DJA, Vink JM, Van Den Oever MC, Pattij T, et al. The CADM2 gene and behavior: a phenome-wide scan in UK-Biobank. Behav Genet 2022;52:306–14. https://doi.org/10.1007/s10519-022-10109-8. 22 July 2022.
Frei JA, Stoeckli ET. SynCAMs – From axon guidance to neurodevelopmental disorders. Mol Cell Neurosci. 2017;81:41–8.
Fu M, Vohra BPS, Wind D, Heuckeroth RO. BMP signaling regulates murine enteric nervous system precursor migration, neurite fasciculation, and patterning via altered Ncam1 polysialic acid addition. Dev Biol. 2006;299:137–50.
Eraslan G, Drokhlyansky E, Anand S, Fiskin E, Subramanian A, Slyper M, et al. Single-nucleus cross-tissue molecular reference maps toward understanding disease gene function. Science 2022;376:eabl4290.
de Vos WM, Tilg H, Van Hul M, Cani PD. Gut microbiome and health: mechanistic insights. Gut 2022;71:1020–32.
Liang G, Bushman FD. The human virome: assembly, composition and host interactions. Nat Rev Microbiol. 2021;19:514–27.
Cao Z, Sugimura N, Burgermeister E, Ebert MP, Zuo T, Lan P. The gut virome: A new microbiome component in health and disease. EBioMedicine 2022;81:104113.
Osadchiy V, Martin CR, Mayer EA. The Gut-brain axis and the microbiome: mechanisms and clinical implications. Clin Gastroenterol Hepatol. 2019;17:322–32.
Martin CR, Osadchiy V, Kalani A, Mayer EA. The brain-gut-microbiome axis. Cell Mol Gastroenterol Hepatol. 2018;6:133–48.
Sandrini S, Aldriwesh M, Alruways M. Microbial endocrinology: host–bacteria communication within the gut microbiome. J Endocrinol 2015;225:R21–R34.
Margolis KG, Cryan JF, Mayer EA. The microbiota-gut-brain axis: from motility to mood. Gastroenterology 2021;160:1486–501.
Mayer EA. The neurobiology of stress and gastrointestinal disease. Gut 2000;47:861–9.
Ford AC, Quigley E, Lacy BE, Lembo AJ, Saito YA, Schiller LR, et al. Effect of antidepressants and psychological therapies, including hypnotherapy, in irritable bowel syndrome: systematic review and meta-analysis. Am J Gastroenterol. 2014;109:1350–65.
Seeley WW, Menon V, Schatzberg AF, Keller J, Glover GH, Kenna H, et al. Dissociable intrinsic connectivity networks for salience processing and executive control. J Neurosci. 2007;27:2349–56.
Guo CC, Kurth F, Zhou J, Mayer EA, Eickhoff SB, Kramer JH, et al. One-year test-retest reliability of intrinsic connectivity network fMRI in older adults. NeuroImage 2012;61:1471–83.
Bullmore E, Sporns O. The economy of brain network organization. Nat Rev Neurosci. 2012;13:336–49.
Grayson DS, Fair DA. Development of large-scale functional networks from birth to adulthood: A guide to the neuroimaging literature. Neuroimage 2017;160:15–31.
Sporns O, Betzel RF. Modular brain networks. Annu Rev Psychol. 2016;67:613–40.
Mayer EA, Aziz Q, Coen S, Kern M, Labus JS, Lane R, et al. Brain imaging approaches to the study of functional GI disorders: A Rome Working Team Report. Neurogastroenterol Motil. 2009;21:579–96.
Tijms BM, Seriès P, Willshaw DJ, Lawrie SM. Similarity-based extraction of individual networks from gray matter MRI scans. Cereb Cortex. 2012;22:1530–41.
Hagmann P, Kurant M, Gigandet X, Thiran P, Wedeen VJ, Meuli R, et al. Mapping human whole-brain structural networks with diffusion MRI. PLoS ONE. 2007;2:e597.
Gupta A, Kilpatrick L, Labus J, Tillisch K, Braun A, Hong J-Y, et al. Early adverse life events and resting state neural networks in patients with chronic abdominal pain: evidence for sex differences. Psychosom Med. 2014;76:404–12.
Naliboff BD, Berman S, Suyenobu B, Labus JS, Chang L, Stains J, et al. Longitudinal change in perceptual and brain activation response to visceral stimuli in irritable bowel syndrome patients. Gastroenterology 2006;131:352–65.
Bradford K, Shih W, Videlock EJ, Presson AP, Naliboff BD, Mayer EA, et al. Association between early adverse life events and irritable Bowel Syndrome. Clin Gastroenterol Hepatol. 2012;10:385–90.
Dickhaus B, Mayer EA, Firooz N, Stains J, Conde F, Olivas TI, et al. Irritable bowel syndrome patients show enhanced modulation of visceral perception by auditory stress. Am J Gastroenterol. 2003;98:135–43.
Labus JS, Gupta A, Coveleskie K, Tillisch K, Kilpatrick L, Jarcho J, et al. Sex differences in emotion-related cognitive processes in irritable bowel syndrome and healthy control subjects. Pain 2013;154:2088–99.
Tillisch K, Mayer EA, Labus JS. Quantitative meta-analysis identifies brain regions activated during rectal distension in irritable bowel syndrome. Gastroenterology 2011;140:91–100.
Farmer AD, Aziz Q. Visceral pain hypersensitivity in functional gastrointestinal disorders. Br Med Bull. 2009;91:123–36.
Mayer EA, Berman S, Chang L, Naliboff BD. Sex-based differences in gastrointestinal pain. Eur J Pain. 2004;8:451–63.
Tillisch K. Sex specific alterations in autonomic function among patients with irritable bowel syndrome. Gut 2005;54:1396–401.
Hong J-Y, Kilpatrick LA, Labus JS, Gupta A, Katibian D, Ashe-McNalley C, et al. Sex and disease-related alterations of anterior insula functional connectivity in chronic abdominal pain. J Neurosci. 2014;34:14252–9.
Hong J-Y, Kilpatrick LA, Labus J, Gupta A, Jiang Z, Ashe-Mcnalley C, et al. Patients with chronic visceral pain show sex-related alterations in intrinsic oscillations of the resting brain. J Neurosci. 2013;33:11994–2002.
Kilpatrick LA, Ornitz E, Ibrahimovic H, Treanor M, Craske M, Nazarian M, et al. Sex-related differences in prepulse inhibition of startle in irritable bowel syndrome (IBS). Biol Psychol. 2010;84:272–8.
Martucci KT, MacKey SC. Neuroimaging of pain: human evidence and clinical relevance of central nervous system processes and modulation. Anesthesiology 2018;128:1241–54.
Menon V. Salience network. Brain Mapp: Encycl Ref. 2015;2:597–611.
Hall GBC, Kamath MV, Collins S, Ganguli S, Spaziani R, Miranda KL, et al. Heightened central affective response to visceral sensations of pain and discomfort in IBS. Neurogastroenterol Motil. 2010;22:276–e80.
Elsenbruch S, Rosenberger C, Bingel U, Forsting M, Schedlowski M, Gizewski ER. Patients with irritable bowel syndrome have altered emotional modulation of neural responses to visceral stimuli. Gastroenterology 2010;139:1310–9.
Elsenbruch S, Rosenberger C, Enck P, Forsting M, Schedlowski M, Gizewski ER. Affective disturbances modulate the neural processing of visceral pain stimuli in irritable bowel syndrome: an fMRI study. Gut 2010;59:489–95.
Jarcho JM, Feier NA, Bert A, Labus JA, Lee M, Stains J, et al. Diminished neurokinin-1 receptor availability in patients with two forms of chronic visceral pain. Pain 2013;154:987–96.
Bhatt RR, Gupta A, Labus JS, Zeltzer LK, Tsao JC, Shulman RJ, et al. Altered Brain Structure and Functional Connectivity and Its Relation to Pain Perception in Girls With Irritable Bowel Syndrome. Psychosom Med. 2019;81:146–54.
Bhatt RR, Gupta A, Labus JS, Liu C, Vora PP, Jean S, et al. A neuropsychosocial signature predicts longitudinal symptom changes in women with irritable bowel syndrome. Mol Psychiatry. 2022;27:1774–91.
Kucyi A, Davis KD. The dynamic pain connectome. Trends Neurosci. 2015;38:86–95.
Qi R, Ke J, Joseph Schoepf U, Varga-Szemes A, Milliken CM, Liu C, et al. Topological reorganization of the default mode network in irritable bowel syndrome. Mol Neurobiol. 2016;53:6585–93.
Nisticò V, Rossi RE, D’Arrigo AM, Priori A, Gambini O, Demartini B. Functional neuroimaging in irritable bowel syndrome: a systematic review highlights common brain alterations with functional movement disorders. J Neurogastroenterol Motil. 2022;28:185–203.
Letzen JE, Craggs JG, Perlstein WM, Price DD, Robinson ME. Functional connectivity of the default mode network and its association with pain networks in irritable bowel patients assessed via lidocaine treatment. J Pain. 2013;14:1077–87.
Ellingson BM, Mayer E, Harris RJ, Ashe-Mcnally C, Naliboff BD, Labus JS, et al. Diffusion tensor imaging detects microstructural reorganization in the brain associated with chronic irritable bowel syndrome. Pain 2013;154:1528–41.
Jiang Z, Dinov ID, Labus J, Shi Y, Zamanyan A, Gupta A, et al. Sex-related differences of cortical thickness in patients with chronic abdominal pain. PLoS ONE. 2013;8:e73932.
Piché M, Chen JI, Roy M, Poitras P, Bouin M, Rainville P. Thicker posterior insula is associated with disease duration in women with irritable bowel syndrome (IBS) whereas thicker orbitofrontal cortex predicts reduced pain inhibition in both IBS patients and controls. J Pain. 2013;14:1217–26.
Labus J, Dinov I, Jiang Z, Ashe-McNalley C, Zamanyan A, Shi Y, et al. Irritable Bowel Syndrome in female patients is associated with alterations in structural brain networks. Pain 2014;155:137–49.
Benarroch EE. The central autonomic network: functional organization, dysfunction, and perspective. Mayo Clin Proc. 1993;68:998–1001.
Lamotte G, Shouman K, Benarroch EE. Stress and central autonomic network. Auton Neurosci. 2021;235:102870.
Napadow V, Sclocco R, Henderson LA. Brainstem neuroimaging of nociception and pain circuitries. PAIN Rep. 2019;4:e745.
Bandler R, Shipley MT. Columnar organization in the midbrain periaqueductal gray: modules for emotional expression? Trends Neurosci. 1994;17:379–89.
Suárez-Pereira I, Llorca-Torralba M, Bravo L, Camarena-Delgado C, Soriano-Mas C, Berrocoso E. The role of the Locus Coeruleus in pain and associated stress-related disorders. Biol Psychiatry. 2022;91:786–97.
Valentino RJ, Van Bockstaele E. Convergent regulation of locus coeruleus activity as an adaptive response to stress. Eur J Pharm. 2008;583:194–203.
Taché Y, Mönnikes H, Bonaz B, Rivier J. Role of CRF in stress-related alterations of gastric and colonic motor function. Ann N. Y Acad Sci. 1993;697:233–43.
Camilleri M, Zhernakova A, Bozzarelli I, D’Amato M. Genetics of irritable bowel syndrome: shifting gear via biobank-scale studies. Nat Rev Gastroenterol Hepatol. 2022;19:689–702.
Sanders KM, Ward SM, Koh SD. Interstitial cells: regulators of smooth muscle function. Physiol Rev. 2014;94:859–907.
Beyder A, Mazzone A, Strege PR, Tester DJ, Saito YA, Bernard CE, et al. Loss-of-function of the voltage-gated sodium channel NaV1.5 (channelopathies) in patients with irritable bowel syndrome. Gastroenterology 2014;146:1659–68.
Petrovska J, Coynel D, Fastenrath M, Milnik A, Auschra B, Egli T, et al. The NCAM1 gene set is linked to depressive symptoms and their brain structural correlates in healthy individuals. J Psychiatr Res. 2017;91:116–23.
Kolkova K, Novitskaya V, Pedersen N, Berezin V, Bock E. Neural cell adhesion molecule-stimulated neurite outgrowth depends on activation of protein Kinase c and the RAS–mitogen-activated protein kinase pathway. J Neurosci. 2000;20:2238–46.
Frei JA, Andermatt I, Gesemann M, Stoeckli ET. The SynCAM synaptic cell adhesion molecules are involved in sensory axon pathfinding by regulating axon-axon contacts. Development 2015;142:e0106–e0106.
Kuramoto K, Negishi M, Katoh H. Regulation of dendrite growth by the Cdc42 activator Zizimin1/Dock9 in hippocampal neurons. J Neurosci Res. 2009;87:1794–805.
Pappa S, Padilla N, Iacobucci S, Vicioso M, Álvarez de la Campa E, Navarro C, et al. PHF2 histone demethylase prevents DNA damage and genome instability by controlling cell cycle progression of neural progenitors. Proc Natl Acad Sci USA. 2019;116:19464–73.
Shi L. Dock protein family in brain development and neurological disease. Commun Integr Biol. 2013;6:e26839.
Fitzcharles MA, Cohen SP, Clauw DJ, Littlejohn G, Usui C, Häuser W. Nociplastic pain: towards an understanding of prevalent pain conditions. Lancet 2021;397:2098–110.
Nijs J, George SZ, Clauw DJ, Fernández-de-las-Peñas C, Kosek E, Ickmans K, et al. Central sensitisation in chronic pain conditions: latest discoveries and their potential for precision medicine. Lancet Rheumatol 2021;3:e383–e392.
Midenfjord I, Grinsvall C, Koj P, Carnerup I, Törnblom H, Simrén M. Central sensitization and severity of gastrointestinal symptoms in irritable bowel syndrome, chronic pain syndromes, and inflammatory bowel disease. Neurogastroenterol Motil. 2021;33:e14156.
Woolf CJ. Evidence for a central component of post-injury pain hypersensitivity. Nature 1983;306:686–8.
Latremoliere A, Woolf CJ. Central sensitization: a generator of pain hypersensitivity by central neural plasticity. J Pain. 2009;10:895–926.
Verne NG, Himes NC, Robinson ME, Gopinath KS, Briggs RW, Crosson B, et al. Central representation of visceral and cutaneous hypersensitivity in the irritable bowel syndrome. Pain 2003;103:99–110.
Wilder-Smith CH, Robert-Yap J. Abnormal endogenous pain modulation and somatic and visceral hypersensitivity in female patients with irritable bowel syndrome. World J Gastroenterol. 2007;13:3699–704.
Caldarella MP, Giamberardino MA, Sacco F, Affaitati G, Milano A, Lerza R, et al. Sensitivity disturbances in patients with irritable bowel syndrome and fibromyalgia. Am J Gastroenterol. 2006;101:2782–9.
Iimura S, Takasugi S. HSP and gastrointestinal disease symptoms. https://psyarxiv.com/n2c39/download?format=pdf. Accessed 7 August 2022.
Mocci E, Ward K, Dorsey SG, Ament SA. GWAS meta-analysis reveals dual neuronal and immunological etiology for pain susceptibility. medRxiv. 2021:2021.08.23.21262510.
McWilliams LA, Cox BJ, Enns MW. Mood and anxiety disorders associated with chronic pain: an examination in a nationally representative sample. Pain 2003;106:127–33.
Tang J, Gibson SJ. A psychophysical evaluation of the relationship between trait anxiety, pain perception, and induced state anxiety. J Pain. 2005;6:612–9.
Clark JR, Nijs J, Yeowell G, Holmes P, Goodwin PC. Trait sensitivity, anxiety, and personality are predictive of central sensitization symptoms in patients with chronic low back pain. Pain Pr. 2019;19:800–10.
Shigetoh H, Tanaka Y, Koga M, Osumi M, Morioka S. The mediating effect of central sensitization on the relation between pain intensity and psychological factors: a cross-sectional study with mediation analysis. Pain Res Manag. 2019;2019:3916135.
Adams LM, Turk DC. Psychosocial factors and central sensitivity syndromes. Curr Rheumatol Rev. 2015;11:96–108.
Aron EN, Aron A. Sensory-processing sensitivity and its relation to introversion and emotionality. J Pers Soc Psychol. 1997;73:345–68.
Boyce WT. Differential susceptibility of the developing brain to contextual adversity and stress. Neuropsychopharmacology 2016;41:142–62.
Meerwijk EL, Ford JM, Weiss SJ. Brain regions associated with psychological pain: implications for a neural network and its relationship to physical pain. Brain Imaging Behav. 2013;7:1–14.
Elman I, Borsook D. Threat response system: parallel brain processes in pain vis-à-vis fear and anxiety. Front Psychiatry. 2018;9:29.
Belujon P, Grace AA. Regulation of dopamine system responsivity and its adaptive and pathological response to stress. Proc Biol Sci. 2015;282:20142516.
Baliki MN, Apkarian AV. Nociception, pain, negative moods, and behavior selection. Neuron 2015;87:474–91.
Zouikr I, Bartholomeusz MD, Hodgson DM. Early life programming of pain: focus on neuroimmune to endocrine communication. J Transl Med. 2016;14:123.
Bale TL, Baram TZ, Brown AS, Goldstein JM, Insel TR, Mccarthy MM, et al. Early life programming and neurodevelopmental disorders. Biol Psychiatry. 2010;68:314–9.
Verriotis M, Chang P, Fitzgerald M, Fabrizi L. Development of the Nociceptive brain. Neuroscience 2016;338:207–19.
Slater R, Fabrizi L, Worley A, Meek J, Boyd S, Fitzgerald M. Premature infants display increased noxious-evoked neuronal activity in the brain compared to healthy age-matched term-born infants. Neuroimage 2010;52:583–9.
Van Den Bosch GE, White T, El Marroun H, Simons SHP, Van Der Lugt A, Van Der Geest JN, et al. Prematurity, Opioid exposure and neonatal pain: do they affect the developing brain? Neonatology 2015;108:8–15.
Ranger M, Chau CMY, Garg A, Woodward TS, Beg MF, Bjornson B, et al. Neonatal pain-related stress predicts cortical thickness at age 7 years in children born very preterm. PLoS One. 2013;8:e76702.
Nemeroff CB. Neurobiological consequences of childhood trauma. J Clin Psychiatry. 2004;65(Suppl 1):18–28.
Heim C, Nemeroff CB. The role of childhood trauma in the neurobiology of mood and anxiety disorders: preclinical and clinical studies. Biol Psychiatry. 2001;49:1023–39.
Lippard ETC, Nemeroff CB. The devastating clinical consequences of child abuse and neglect: increased disease vulnerability and poor treatment response in mood disorders. Am J Psychiatry. 2020;177:20–36.
Coley EJL, Hsiao EY. Malnutrition and the microbiome as modifiers of early neurodevelopment. Trends Neurosci. 2021;44:753–64.
Ratsika A, Codagnone MC, O’Mahony S, Stanton C, Cryan JF. Priming for life: early life nutrition and the microbiota-gut-brain axis. Nutrients 2021;13:423.
Andresen V, Montori VM, Keller J, West CP. Effects of 5-hydroxytryptamine (serotonin) type 3 antagonists on symptom relief and constipation in nonconstipated irritable bowel syndrome: a systematic review and meta-analysis of randomized controlled trials. Clin Gastroenterol Hepatol. 2008;6:545–55.
Costall B, Naylor RJ. Anxiolytic potential of 5-HT3 receptor antagonists. Pharm Toxicol. 1992;70:157–62.
Fakhfouri G, Rahimian R, Dyhrfjeld-Johnsen J, Zirak MR, Beaulieu J-M. 5-HT3 receptor antagonists in neurologic and neuropsychiatric disorders: the iceberg still lies beneath the surface. Pharm Rev. 2019;71:383–412.
Olivier B, van Wijngaarden I, Soudijn W. 5-HT(3) receptor antagonists and anxiety; a preclinical and clinical review. Eur Neuropsychopharmacol. 2000;10:77–95.
Hubbard CS, Labus JS, Bueller J, Stains J, Suyenobu B, Dukes GE, et al. Corticotropin-releasing factor receptor 1 antagonist alters regional activation and effective connectivity in an emotional-arousal circuit during expectation of abdominal pain. J Neurosci. 2011;31:12491–500.
Labus JS, Hubbard CS, Bueller J, Ebrat B, Tillisch K, Chen M, et al. Impaired emotional learning and involvement of the corticotropin-releasing factor signaling system in patients with irritable bowel syndrome. Gastroenterology 2013;145:1253–61.e3.
Lowén MBO, Mayer EA, Sjöberg M, Tillisch K, Naliboff B, Labus J, et al. Effect of hypnotherapy and educational intervention on brain response to visceral stimulus in the irritable bowel syndrome. Aliment Pharm Ther. 2013;37:1184–97.
Rutten JMTM, Reitsma JB, Vlieger AM, Benninga MA. Gut-directed hypnotherapy for functional abdominal pain or irritable bowel syndrome in children: a systematic review. Arch Dis Child. 2013;98:252–7.
Rutten JMTM, Vlieger AM, Frankenhuis C, George EK, Groeneweg M, Norbruis OF, et al. Home-based hypnotherapy self-exercises vs individual hypnotherapy with a therapist for treatment of pediatric irritable bowel syndrome, functional abdominal pain, or functional abdominal pain syndrome. JAMA Pediatr. 2017;171:470.
Peters SL, Muir JG, Gibson PR. Review article: gut-directed hypnotherapy in the management of irritable bowel syndrome and inflammatory bowel disease. Aliment Pharm Ther. 2015;41:1104–15.
Naliboff BD, Smith SR, Serpa JG, Laird KT, Stains J, Connolly LS, et al. Mindfulness-based stress reduction improves irritable bowel syndrome (IBS) symptoms via specific aspects of mindfulness. Neurogastroenterol Motil. 2020;32:e13828.
Jacobs JP, Gupta A, Bhatt RR, Brawer J, Gao K, Tillisch K, et al. Cognitive behavioral therapy for irritable bowel syndrome induces bidirectional alterations in the brain-gut-microbiome axis associated with gastrointestinal symptom improvement. Microbiome 2021;9:236.
Lackner JM, Keefer L, Jaccard J, Firth R, Brenner D, Bratten J, et al. The Irritable Bowel Syndrome Outcome Study (IBSOS): Rationale and design of a randomized, placebo-controlled trial with 12 month follow up of self-versus clinician-administered CBT for moderate to severe irritable bowel syndrome. Contemp Clin Trials. 2012;33:1293–310.
Edebol-Carlman H, Ljótsson B, Linton SJ, Boersma K, Schrooten M, Repsilber D, et al. Face-to-face cognitive-behavioral therapy for irritable bowel syndrome: the effects on gastrointestinal and psychiatric symptoms. Gastroenterol Res Pr. 2017;2017:8915872 https://doi.org/10.1155/2017/8915872.
Owusu JT, Sibelli A, Moss-Morris R, van Tilburg MAL, Levy RL, Oser M. A pilot feasibility study of an unguided, internet-delivered cognitive behavioral therapy program for irritable bowel syndrome. Neurogastroenterol Motil. 2021;33:e14108.
Ford AC, Harris LA, Lacy BE, Quigley EMM, Moayyedi P. Systematic review with meta-analysis: the efficacy of prebiotics, probiotics, synbiotics and antibiotics in irritable bowel syndrome. Aliment Pharm Ther. 2018;48:1044–60.
Greicius MD, Krasnow B, Reiss AL, Menon V. Functional connectivity in the resting brain: A network analysis of the default mode hypothesis. Proc Natl Acad Sci USA. 2003;100:253–8.
Raichle ME, Macleod AM, Snyder AZ, Powers WJ, Gusnard DA, Shulman GL. A default mode of brain function. Proc Natl Acad Sci USA. 2001;98:676–82.
Whitfield-Gabrieli S, Ford JM. Default mode network activity and connectivity in psychopathology. Annu Rev Clin Psychol. 2012;8:49–76.
Buckner RL, Andrews-Hanna JR, Schacter DL. The brain’s default network: Anatomy, function, and relevance to disease. Ann N. Y Acad Sci. 2008;1124:1–38.
Northoff G. Anxiety disorders and the brain’s resting state networks: from altered spatiotemporal synchronization to psychopathological symptoms. Adv Exp Med Biol. 2020;1191:71–90.
Kim Y-K, Yoon H-K. Common and distinct brain networks underlying panic and social anxiety disorders. Prog Neuropsychopharmacol Biol Psychiatry. 2018;80:115–22.
MacNamara A, DiGangi J, Phan KL. Aberrant spontaneous and task-dependent functional connections in the anxious brain. Biol Psychiatry Cogn Neurosci Neuroimaging. 2016;1:278–87.
Kolesar TA, Bilevicius E, Wilson AD, Kornelsen J. Systematic review and meta-analyses of neural structural and functional differences in generalized anxiety disorder and healthy controls using magnetic resonance imaging. NeuroImage: Clin. 2019;24:102016.
ten Donkelaar HJ, Broman J, van Domburg P The Somatosensory System. In: ten Donkelaar HJ, editor. Clinical Neuroanatomy: Brain Circuitry and Its Disorders, Cham: Springer International Publishing; 2020. p. 171–255.
Woodworth D, Mayer E, Leu K, Ashe-McNalley C, Naliboff BD, Labus JS, et al. Unique microstructural changes in the brain associated with Urological Chronic Pelvic Pain Syndrome (UCPPS) revealed by diffusion tensor MRI, super-resolution track density imaging, and statistical parameter mapping: A MAPP network neuroimaging study. PLoS One. 2015;10:e0140250.
Grinsvall C, Ryu HJ, Van Oudenhove L, Labus JS, Gupta A, Ljungberg M, et al. Association between pain sensitivity and gray matter properties in the sensorimotor network in women with irritable bowel syndrome. Neurogastroenterol Motil. 2020;33:e14027.
Bouziane I, Das M, Friston KJ, Caballero-Gaudes C, Ray D. Enhanced top-down sensorimotor processing in somatic anxiety. Transl Psychiatry. 2022;12:295.
Brandl F, Weise B, Mulej Bratec S, Jassim N, Hoffmann Ayala D, Bertram T, et al. Common and specific large-scale brain changes in major depressive disorder, anxiety disorders, and chronic pain: a transdiagnostic multimodal meta-analysis of structural and functional MRI studies. Neuropsychopharmacology 2022;47:1071–80.
Berman SM, Chang L, Suyenobu B, Derbyshire SW, Stains J, FitzGerald L, et al. Condition-specific deactivation of brain regions by 5-HT3 receptor antagonist Alosetron. Gastroenterology 2002;123:969–77.
Tillisch K, Labus J, Nam B, Bueller J, Smith S, Suyenobu B, et al. Neurokinin-1-receptor antagonism decreases anxiety and emotional arousal circuit response to noxious visceral distension in women with irritable bowel syndrome: a pilot study. Aliment Pharm Ther. 2012;35:360–7.
Grupe DW, Nitschke JB. Uncertainty and anticipation in anxiety: an integrated neurobiological and psychological perspective. Nat Rev Neurosci. 2013;14:488–501.
Hong J-Y, Naliboff BD, Labus JS, Kilpatrick LA, Fling C, Ashe-McNalley C, et al. Sa2014 IBS patients show altered brain responses during uncertain, but not certain expectation of painful stimulation of the abdominal wall. Gastroenterology 2015;148:S–384.
Sylvester CM, Corbetta M, Raichle ME, Rodebaugh TL, Schlaggar BL, Sheline YI, et al. Functional network dysfunction in anxiety and anxiety disorders. Trends Neurosci. 2012;35:527–35.
Pessoa L. A network model of the emotional brain. Trends Cogn Sci. 2017;21:357–71.
Stein JL, Wiedholz LM, Bassett DS, Weinberger DR, Zink CF, Mattay VS, et al. A validated network of effective amygdala connectivity. Neuroimage 2007;36:736–45.
Pezawas L, Meyer-Lindenberg A, Drabant EM, Verchinski BA, Munoz KE, Kolachana BS, et al. 5-HTTLPR polymorphism impacts human cingulate-amygdala interactions: a genetic susceptibility mechanism for depression. Nat Neurosci. 2005;8:828–34.
Labus JS, Mayer EA, Jarcho J, Kilpatrick LA, Kilkens TOC, Evers EAT, et al. Acute tryptophan depletion alters the effective connectivity of emotional arousal circuitry during visceral stimuli in healthy women. Gut 2011;60:1196–203.
Yágüez L, Coen S, Gregory LJ, Amaro E, Altman C, Brammer MJ, et al. Brain response to visceral aversive conditioning: a functional magnetic resonance imaging study. Gastroenterology 2005;128:1819–29.
Berman SM, Naliboff BD, Suyenobu B, Labus JS, Stains J, Ohning G, et al. Reduced brainstem inhibition during anticipated pelvic visceral pain correlates with enhanced brain response to the visceral stimulus in women with Irritable Bowel Syndrome. J Neurosci. 2008;28:349–59.
Kilpatrick LA, Labus JS, Coveleskie K, Hammer C, Rappold G, Tillisch K, et al. The HTR3A Polymorphism c. -42C>T is associated with amygdala responsiveness in patients with irritable bowel syndrome. Gastroenterology. 2011;140:1943–51.
Harrewijn A, Cardinale EM, Groenewold NA, Bas-Hoogendam JM, Aghajani M, Hilbert K, et al. Cortical and subcortical brain structure in generalized anxiety disorder: findings from 28 research sites in the ENIGMA-Anxiety Working Group. Transl Psychiatry. 2021;11:502.
Dosenbach NUF, Fair DA, Miezin FM, Cohen AL, Wenger KK, Dosenbach RAT, et al. Distinct brain networks for adaptive and stable task control in humans. Proc Natl Acad Sci USA. 2007;104:11073–8.
Vincent JL, Kahn I, Snyder AZ, Raichle ME, Buckner RL. Evidence for a frontoparietal control system revealed by intrinsic functional connectivity. J Neurophysiol. 2008;100:3328–42.
Niendam TA, Laird AR, Ray KL, Dean YM, Glahn DC, Carter CS. Meta-analytic evidence for a superordinate cognitive control network subserving diverse executive functions. Cogn Affect Behav Neurosci. 2012;12:241–68.
Menon V. Large-scale brain networks and psychopathology: a unifying triple network model. Trends Cogn Sci. 2011;15:483–506.
Afzal M, Potokar JP, Probert CSJ, Munafò MR. Selective processing of gastrointestinal symptom-related stimuli in irritable bowel syndrome. Psychosom Med. 2006;68:758–61.
Gibbs-Gallagher N, Palsson OS, Levy RL, Meyer K, Drossman DA, Whitehead WE. Selective recall of gastrointestinal-sensation words: evidence for a cognitive-behavioral contribution to irritable bowel syndrome. Am J Gastroenterol. 2001;96:1133–8.
Phillips K, Wright BJ, Kent S. Irritable bowel syndrome and symptom severity: Evidence of negative attention bias, diminished vigour, and autonomic dysregulation. J Psychosom Res. 2014;77:13–9.
Tkalcic M, Domijan D, Pletikosic S, Setic M, Hauser G. Attentional biases in irritable bowel syndrome patients. Clin Res Hepatol Gastroenterol. 2014;38:621–8.
Labus JS, Naliboff BD, Berman SM, Suyenobu B, Vianna EP, Tillisch K, et al. Brain networks underlying perceptual habituation to repeated aversive visceral stimuli in patients with irritable bowel syndrome. NeuroImage 2009;47:952–60.
Seminowicz DA, Shpaner M, Keaser ML, Michael Krauthamer G, Mantegna J, Dumas JA, et al. Cognitive-behavioral therapy increases prefrontal cortex gray matter in patients with chronic pain. J Pain. 2013;14:1573–84.
Blankstein U, Chen J, Diamant NE, Davis KD. Altered brain structure in irritable bowel syndrome: potential contributions of pre-existing and disease-driven factors. Gastroenterology 2010;138:1783–9.
Qiu C, Liao W, Ding J, Feng Y, Zhu C, Nie X, et al. Regional homogeneity changes in social anxiety disorder: a resting-state fMRI study. Psychiatry Res. 2011;194:47–53.
Naliboff BD, Berman S, Chang L, Derbyshire SWG, Suyenobu B, Vogt BA, et al. Sex-related differences in IBS patients: central processing of visceral stimuli. Gastroenterology 2003;124:1738–47.
Nagel M, Jansen PR, Stringer S, Watanabe K, De Leeuw CA, Bryois J, et al. Meta-analysis of genome-wide association studies for neuroticism in 449,484 individuals identifies novel genetic loci and pathways. Nat Genet. 2018;50:920–7.
Ko H-G, Choi J-H, Park DI, Kang SJ, Lim C-S, Sim S-E, et al. Rapid turnover of Cortical NCAM1 regulates synaptic reorganization after peripheral nerve injury. Cell Rep. 2018;22:748–59.
Ao W, Cheng Y, Chen M, Wei F, Yang G, An Y, et al. Intrinsic brain abnormalities of irritable bowel syndrome with diarrhea: a preliminary resting-state functional magnetic resonance imaging study. BMC Med Imaging. 2021;21:4.
Chen A, Chen Y, Tang Y, Bao C, Cui Z, Xiao M, et al. Hippocampal AMPARs involve the central sensitization of rats with irritable bowel syndrome. Brain Behav. 2017;7:e00650.
Teicher MH, Samson JA, Anderson CM, Ohashi K. The effects of childhood maltreatment on brain structure, function and connectivity. Nat Rev Neurosci. 2016;17:652–66.
Kim H-J, Hur SW, Park JB, Seo J, Shin JJ, Kim S-Y, et al. Histone demethylase PHF2 activates CREB and promotes memory consolidation. EMBO Rep. 2019;20:e45907.
Wei F, Xu ZC, Qu Z, Milbrandt J, Zhuo M. Role of EGR1 in hippocampal synaptic enhancement induced by tetanic stimulation and amputation. J Cell Biol. 2000;149:1325–34.
Ploghaus A, Narain C, Beckmann CF, Clare S, Bantick S, Wise R, et al. Exacerbation of pain by anxiety is associated with activity in a hippocampal network. J Neurosci. 2001;21:9896–903.
Fagerberg L, Hallström BM, Oksvold P, Kampf C, Djureinovic D, Odeberg J, et al. Analysis of the human tissue-specific expression by genome-wide integration of transcriptomics and antibody-based proteomics. Mol Cell Proteom. 2014;13:397–406.
Lee J-G, Ye Y. Bag6/Bat3/Scythe: a novel chaperone activity with diverse regulatory functions in protein biogenesis and degradation. Bioessays 2013;35:377–85.
Kawahara H, Minami R, Yokota N. BAG6/BAT3: emerging roles in quality control for nascent polypeptides. J Biochem. 2013;153:147–60.
Binici J, Koch J. BAG-6, a jack of all trades in health and disease. Cell Mol Life Sci. 2014;71:1829–37.
Case CM, Sackett DL, Wangsa D, Karpova T, McNally JG, Ried T, et al. CKAP2 ensures chromosomal stability by maintaining the integrity of microtubule nucleation sites. PLoS One. 2013;8:e64575.
Zhang S, Wang Y, Chen S, Li J. Silencing of cytoskeleton-associated protein 2 represses cell proliferation and induces cell cycle arrest and cell apoptosis in osteosarcoma cells. Biomed Pharmacother. 2018;106:1396–403.
Competing interests: EAM is a member of the scientific advisory boards of Danone, Axial Therapeutics, Amare, Mahana Therapeutics, Pendulum, Bloom Biosciences, and APC Microbiome Ireland.
Mayer, E.A., Ryu, H.J. & Bhatt, R.R. The neurobiology of irritable bowel syndrome. Mol Psychiatry (2023). https://doi.org/10.1038/s41380-023-01972-w
A Short History
As we are all too well aware, the economy is now in the deepest recession since the Great Depression. Much has been written about irresponsible bankers, irresponsible borrowers, speculators, investors, regulation, deregulation, and ineffective government. But that is not the focus of today’s article. Today, we find ourselves in the midst of the greatest binge in government borrowing and spending in the history of civilization. One may or may not agree with our government’s actions, but it is fitting to examine the stated economic rationale behind the policies.
Prior to the mid 1930s, most economists believed that free markets were self balancing and would emerge from recessions if left to their own devices. They knew that capitalist economies were a balance between savings and investment. If there was a large expansion in savings, then there would be a large supply of money available. The law of supply and demand mandates that any commodity in great supply (in this case money) will become less expensive. In the case of capital, this is manifested by lower interest rates. As interest rates fall, it becomes less expensive for both consumers and businesses to borrow and invest. Consumers buy CDs, stocks and bonds (because we are talking about savings, not consumption, for the moment ignore purchases of consumable goods such as cars, TVs, and the like). Businesses find it cheaper to borrow money to expand manufacturing capability, invest in research and new product development, expand marketing, or move into other product lines and geographies. As investment ramps up, capital (savings) are absorbed and put to productive use, resulting in economic growth. In the shorter term, as capital is sopped up, there is a reduction in the money supply. Interest rates once again begin to climb, bringing the entire system back into balance.
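To make that balancing mechanism concrete, here is a minimal, purely illustrative sketch of a “loanable funds” market in which the savings supplied rise with the interest rate and the investment demanded falls with it. Nothing in it comes from the discussion above or from any dataset; the linear functions and every number are invented solely to show the direction of the effect.

```python
# Toy "loanable funds" market: savings rise with the interest rate,
# investment falls with it. All figures are made up for illustration.

def savings_supplied(rate, shift=0.0):
    # Households save more at higher rates; `shift` models a surge in saving.
    return 100 + 800 * rate + shift

def investment_demanded(rate):
    # Firms borrow and invest less as borrowing costs rise.
    return 180 - 900 * rate

def equilibrium_rate(shift=0.0):
    # Solve savings_supplied(r) == investment_demanded(r); linear, so closed form.
    return (180 - 100 - shift) / (800 + 900)

base = equilibrium_rate()
glut = equilibrium_rate(shift=30)  # a large expansion in savings
print(f"baseline rate: {base:.1%}, rate after a savings surge: {glut:.1%}")
```

The point is only qualitative: the surge in savings pushes the market-clearing interest rate down, which in the classical account makes borrowing cheaper until the extra capital is absorbed by investment.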
At least that was the classical theory. But during the Great Depression, economists were stumped. It is generally agreed that the Federal Reserve contributed to the onset of the crisis by raising interest rates in the late 1920s in an effort to stem stock speculation, but that is a side issue. The great stock market crash occurred in 1929 and the economy was in a downward spiral.
Keynes provides an explanation
The depression went on for years. Why didn't automatic mechanisms in the free market bring the economy back into balance? In 1936, John Maynard Keynes believed that he knew the answer, published in his masterpiece “The General Theory of Employment, Interest, and Money”. In it, Keynes argued that the basic problem of the Depression (or any deep, lasting recession) was a lack of investment on the part of business in spite of low interest rates. If there is a general malaise, businesses surely are not going to risk taking on debt to expand into a future where there is uncertain demand for their products. Such a course is far too risky. And here we come to the crux of Keynesianism; Keynes' solution was that the only recourse remaining was for government to step into the breach and spur investment by borrowing and spending. Government spending would guarantee (some) businesses economic activity, which would provide a market for other industries that serve those businesses, and so on. This would halt the downward slide and reverse the course of the economy. As business recovered, the government could withdraw and allow private enterprise to return to normal.
It should be noted that “The General Theory” was published in 1936, 3 years into Franklin Roosevelt's first term. Under Roosevelt, government spending had already increased 50% by 1936 as compared to 1929 ($15B vs. $10B). Although private investment did increase somewhat, the unemployment rate fell only to 17% from 25%. In spite of government expenditures, it would rise once again (to 19% by 1939). This was hardly a vindication of Keynesianism. In his 1953 work, “The Worldly Philosophers”, Robert Heilbroner provides the most cogent explanation of this ineffectiveness, one which is eerily prescient of the current policy debate:
“Neither Keynes nor the government spenders had taken into account that the beneficiaries of the new medicine might consider it worse than the disease. Government spending was meant as a helping hand for business. It was interpreted by business as a threatening gesture.
Nor is this surprising. The New Deal had swept in on a wave of anti-business sentiment; values and standards that had become virtually sacrosanct were suddenly held up to skeptical scrutiny and criticism. The whole conception of “business rights,” “property rights,” and “the role of government” was rudely shaken; within a few years business was asked to forget its traditions of unquestioned preeminence and to adopt a new philosophy of cooperation with labor unions, acceptance of new rules and regulations, reform of many of its practices. Little wonder that it regarded the government in Washington as inimical, biased, and downright radical. And no wonder, in such an atmosphere, that its eagerness to undertake large-scale investment was dampened by the uneasiness it felt in this unfamiliar climate.
Hence every effort of the government to undertake a program of sufficient magnitude to mop up all the unemployed–probably a program at least twice as large as it did in fact undertake–was assailed as further evidence of Socialist design. And at the same time, the halfway measures the government did employ were just enough to frighten business away from undertaking a full-scale effort by itself. It was a situation not unlike that found in medicine; the medicine cured the patient of one illness, only to weaken him with its side effects. Government spending never truly cured the economy–not because it was economically unsound, but because it was ideologically upsetting.”
Note that during World War II the federal budget peaked at $103B, fully 10 times the 1929 amount. This did result in full employment, but at the cost of rampant inflation, as would be expected when the government indulges in the wholesale expansion of the monetary base.
Many modern politicians invoke Keynes in the name of government expansion, but the fact was that Keynes was a great admirer of Edmund Burke. He believed that government activity in the economy should be targeted and temporary, should focus on stimulus and investment, and should be withdrawn as soon as the free market was once again healthy.
In a letter to the New York Times in 1934, Keynes wrote “I see the problem of recovery in the following light; How soon will normal business enterprise come to the rescue? On what scale, by which expedients, and for how long is abnormal government expenditure advisable in the meantime?“. [emphasis added]
Are current policies “Keynesian?”
Governments around the world, from China, to the European Union, to the United States, are passing “stimulus” bills. The idea is to spark economic activity in an effort to get business to once again invest. Given what we have learned, an effective stimulus should have the following attributes:
- It should be large enough to have an effect. The 2007 Gross Domestic Product of the US economy was $14T. An $800B stimulus package is 5.7% of GDP. The 2007 federal budget was $2.8T. As explained above, in World War II, the U.S. government spent 10x the 1929 budget. (A quick back-of-the-envelope check of these figures is sketched just after this list.)
- It should be immediate. If the government is going to borrow huge amounts of money to stimulate the economy, it needs to get that money into the system as quickly as possible. One way to do so is to fund projects that are already in the pipeline. The money should not be spent on programs that do not spur investment or spark economic activity in the private sector.
- It should encourage private investment. No matter how much the government spends, if the private sector is not confident about the future, they will not invest. Therefore, the program should endeavor to make private investment as attractive as possible. Lower capital gains taxes encourage companies and individuals to take on more risk. Lower individual tax rates immediately provide an infusion of capital into the system, as well as incentivizing individuals to take more risk. If federal income tax, social security, medicare, state income tax, and property taxes add up to a tax rate of 65%, one can hardly expect an individual to risk their savings or livelihood in an effort to better their economic situation. They will be more reluctant to work harder for a bonus, more reluctant to join a start-up, more reluctant to relocate. In short, if you lower the rewards, then you have depressed the risk-taking activities that are the beating heart of a free market economy. Counter-productive in the best of times, policies that depress the investment climate are potentially catastrophic in the midst of a recession.
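As promised in the first bullet above, here is the back-of-the-envelope arithmetic behind those figures, using the rounded numbers quoted in the text; it is only a sanity check of the magnitudes, not an economic argument.

```python
# Rough arithmetic on the stimulus figures quoted above (values in billions of dollars).
gdp_2007 = 14_000      # $14T GDP
stimulus = 800         # $800B stimulus package
budget_2007 = 2_800    # $2.8T federal budget

print(f"stimulus as a share of GDP:            {stimulus / gdp_2007:.1%}")    # ~5.7%
print(f"stimulus as a share of federal budget: {stimulus / budget_2007:.1%}") # ~28.6%

# For comparison, the World War II peak budget cited above:
budget_1929, budget_wwii = 10, 103  # $10B in 1929 vs. $103B at the wartime peak
print(f"WWII budget vs. 1929 budget: {budget_wwii / budget_1929:.1f}x")       # ~10.3x
```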
A word about the monetarists
Typically, one hears that the economic debate is between Keynesians and monetarists. Policy makers, rightly or wrongly, tend to invoke Keynes when arguing for more government involvement in the economy. Other policy makers invoke monetarists, principally Milton Friedman, to argue for a more laissez-faire approach to the free market.
What is monetarism? At its core, it is the belief that government can best tune the economy and prevent economic bubbles and recessions by controlling the supply of money and balancing the budget. By what mechanism? Primarily through a central bank's (for example, the Federal Reserve's) control of interest rates, as well as its sale (or withdrawal) of government bonds. As espoused by Milton Friedman, government should concentrate primarily on keeping prices stable. If there is too much money in the system, the result is inflation. Too little, and there could be a lack of investment, causing a recession, and in severe cases a deflationary spiral. (Some ask why falling prices are a problem. Ask yourself what the result would be if businesses were incapable of making a profit.)
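A common shorthand for this monetarist view, not spelled out in the text itself, is the textbook quantity-theory identity M × V = P × Q: the money supply times its velocity equals the price level times real output. The toy calculation below, with entirely made-up numbers, shows why monetarists watch the money supply so closely; if velocity and real output are roughly stable, changes in M show up directly in prices.

```python
# Quantity-theory identity M * V = P * Q, rearranged to solve for the price level.
# Velocity V and real output Q are held fixed; the numbers are arbitrary.

def price_level(money_supply, velocity, real_output):
    return money_supply * velocity / real_output

V, Q = 1.7, 14_000           # assumed-constant velocity and real output
for M in (8_000, 8_800):     # a 10% expansion of the money supply
    print(f"M = {M:>6}: implied price level = {price_level(M, V, Q):.3f}")
# With V and Q fixed, the 10% rise in M maps one-for-one into a 10% higher
# price level, which is the inflation risk the monetarists emphasize.
```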
Ben Bernanke, the current Chairman of the Federal Reserve, is generally thought to be non-ideological in his views of Keynesianism and monetarism. In his writings and actions, he seems to be a pragmatist, willing to use whatever tools are at the disposal of government to forestall a crisis or alleviate one.
Who is right?
In my (admittedly) uneducated opinion, neither school of economic thought is fully correct or incorrect. From a non-ideological viewpoint, we don’t live in world with a pure free market economy, free from all regulation and government interference. Nor do we live in a world with economies fully controlled in minute detail by government (unless you are one of the unfortunates residing in countries like Cuba or North Korea).
Was it a lack of regulation that caused the housing bust, as some claim? Were banks running wild? Did Alan Greenspan lower interest rates too much in the wake of the Internet bust and 9/11 (monetarism) in an effort to forestall a severe recession, thus contributing to the housing bubble?
What of government interfering in the housing market via the Community Reinvestment Act and the quasi-governmental entities, Fannie Mae and Freddie Mac? Most of us remember a time when a 20% down payment and a high credit rating were required to qualify for a mortgage. Was it deregulation of the banks that loosened lending standards, or was it that the CRA mandated that 50% of bank lending “meet the needs of the entire community”? (Note that this threshold was raised from 42% in 1999 by the Clinton administration). At the same time, Fannie Mae and Freddie Mac were mandated to meet housing goals set by the Department of Housing and Urban Development. As such, they bought and securitized trillions of dollars in sub-prime mortgages. One can hardly declare the failure of a “free market” that requires lenders to loan money to those that would otherwise be denied as poor credit risks, backstopped by GSEs (Government Sponsored Enterprises) holding trillions in risky mortgages; $6 trillion total, fully half of all mortgages written in the United States.
Modern economic systems are complex. Government regulation and intrusion only make them more so. Pure monetarism or Keynesianism is nearly impossible in such an environment. The best that we, as citizens, can do is to be watchful that government actors are invoking neither Milton Friedman nor John Maynard Keynes as a smokescreen in the pursuit of non-economic goals.
- Does a “Keynesian” policy meet the test as summarized above? Will it be timely, targeted, temporary, and large enough to have an impact?
- Keep an eye on incentives, as they are what drive a market economy. Will a proposed regulation throw sand in the gears of commerce at a time when we need as much economic activity as possible? Will a tax policy or law encourage investment by both business and individuals, or suppress it? Will it encourage risk taking and innovation or reduce the rewards of success to the point that investors aren’t willing to fund a venture and individuals are unwilling to go out on an economic limb?
- How much of a policy is economic and how much is social engineering? Is a policy designed to get the economy growing, or to change our society?
One last note: whether one agrees or disagrees with a particular social policy, it is extremely dangerous to add more uncertainty to a market economy that is already rife with fear. That is simply bad policy, whether it originates on the left or the right.
Sec. Brain Health and Clinical Neuroscience
Volume 15 - 2021 | https://doi.org/10.3389/fnhum.2021.713316
Cortical Visual Impairments and Learning Disabilities
- 1Hôpital Fondation Adolphe de Rothschild, Paris, France
- 2INCC UMR 8002, CNRS, Université de Paris, Paris, France
- 3Department of Vision Science, Glasgow Caledonian University, Glasgow, United Kingdom
Medical advances in neonatology have improved the survival rate of premature infants, as well as children who are born under difficult neurological conditions. As a result, the prevalence of cerebral dysfunctions, whether minimal or more severe, is increasing in all industrialized countries and in some developing nations. Whereas in the past, ophthalmological diseases were considered principally responsible for severe visual impairment, today, all recent epidemiological studies show that the primary cause of blindness and severe visual impairment in children in industrialized countries is now neurological, with lesions acquired around the time of birth currently comprising the commonest contributor. The resulting cortical or cerebral visual impairments (CVIs) have long been ignored, or have been confused either with other ophthalmological disorders causing low vision, or with a range of learning disabilities. We present here the deleterious consequences that CVI can have upon learning and social interaction, and how these can be given behavioral labels without the underlying visual causes being considered. We discuss the need to train and inform clinicians in the identification and diagnosis of CVI, and how to distinguish CVI both from other visual disorders and from the specific learning disorders. This is important because the range of approaches needed to enhance the development of children with CVI is specific to each child’s unique visual needs, making incorrect labeling or diagnosis potentially detrimental to affected children because these needs are not met.
Vision is fundamental to learning. Sight guides our limb and body movements. It also provides access to a vast range of information, and facilitates social interaction. Children are not only continually learning these skills, they also learn through these developing abilities.
Cortical or cerebral visual impairments (CVIs) include a wide range of visual dysfunctions that can impair learning and social interaction. The present review describes CVI and provides examples helpful to a range of professionals dealing with children with learning disabilities including pediatricians, child psychiatrists, and child neuropsychologists. Learning disabilities refer to brain conditions impairing the capacity to learn in several areas, for which the cause has yet to be identified. A learning disorder or difficulty is commonly “diagnosed” in children presenting significant delay in their development of several functions. Formal diagnoses include intellectual disabilities, specific learning disorders (affecting reading, writing, and mathematics) but also motor learning disorders (American Psychiatric Association, 2013). Crucially, these are often associated with neurodevelopmental conditions, making it urgent to identify and diagnose the underlying causes, and risk factors. This review offers an overview to consider how the diagnosis of CVI can potentially explain, at least in part, a wide range of learning difficulties, which can be overcome by appropriate management and educational strategies made accessible to the affected child.
The Differences Between Typical Vision and Cortical or Cerebral Visual Impairment
Picture a first-year schoolboy coming home from school, running into the kitchen, climbing onto a chair, and reaching into a tin for a biscuit. What part does vision play? He mentally envisions within his frontal territory what he wants to do and how (Buckner et al., 2008). He rapidly uses visual memory to navigate to the kitchen (Sanguinetti and Peterson, 2016). His eyes focus automatically by means of the lenses accommodating. His retinae turn the incoming imagery into unique patterns of electrical activity, with each glance capturing new imagery to integrate into a seamless pictorial flow (Churan et al., 2018). The optic nerves continuously transfer these signals via his lateral geniculate bodies to his occipital lobes, where analysis of the structure of the scene, in terms of extent, clarity (acuity), brightness, contrast and color takes place within about a tenth of a second (Lesniak et al., 2017), while the adjacent middle temporal lobes capture the flow of movement of the scene (Zihl and Dutton, 2015).
This processed information is immediately transferred to the temporal lobes via a bundle of nerve fibers on each side, called the inferior longitudinal fasciculi known functionally as the ventral stream (Bauer et al., 2015) dealing with local, detailed, visual processing, wherein a match with the coded library of past imagery brings about recognition of the tin. At the same time, the occipital lobes pass the processed image data to the posterior parietal lobes, via the superior longitudinal fasciculi (Bauer et al., 2015), functionally known as the dorsal stream dealing with global visual processing. This process is supported by the middle temporal lobes (which supplement the kinetic flow of the moving scene), and the deeper brain structures, the posterior thalamus and superior colliculi (Ptito et al., 2008), which together bring about non-conscious 3D mapped mental emulation of the scene, facilitating visual search and visual guidance of movement. There is evidence that the mapping of sound localization takes place in the same brain region (Thaler et al., 2016).
This mental visual construct enables the chair to be located and dragged to the right place, climbed onto, and the biscuit retrieved. The boy also recruits his cerebellum to modulate the timing of his actions, as well as his balance (or labyrinthine) system to climb onto the chair. In the inner ear there are balance receptors: minute lumps of calcium linked to nerve endings, which detect gravitational forces and act as plumblines, integrating with his vision (Jayakaran et al., 2018) through his semi-awareness of the horizontal edge of the kitchen wall cabinet, automatically ensuring his stability. In essence, through this highly efficient real-time process, the boy’s mind processes a continuously flowing emulation of the surrounding moving scene, mapped to his body, enabling him to recognize, integrate with, and interact with his surroundings.
Disturbance of any element of these complex mental visual processes can occur in a range of patterns of CVI, unique to each child. These need to be identified, characterized and profiled to provide matched habilitational approaches designed to cater for each element of the resulting visual and associated disabilities.
Definitions, Epidemiology, Etiology of Cortical or Cerebral Visual Impairment
Cortical or cerebral visual impairment can be defined as “a verifiable visual dysfunction, which cannot be attributed to disorders of the anterior visual pathways or any potentially co-occurring ocular impairment” (Sakki et al., 2018). This broad consensus definition embraces the wide range of damage or dysfunction of the neural pathways, centers and networks involved in visual information processing. Children with CVI have been sub-classified into those who show selective visual perception and visuo-motor deficits, those with more severe and broader visual perception and visuo-motor deficits, and those with profound visual impairment (Lueck and Dutton, 2015b; Sakki et al., 2021).
These disorders compromise any of the following aspects of visual function in a range of combinations: central vision, peripheral vision (in all or part of the visual field), movement perception, gaze control, visual guidance of movement, visual attention, attentional orientation in space, visual analysis and recognition, visual memory and spatial cognition. Affected young children “know” their vision to be “normal,” yet the educational, developmental, emotional, personal, and social impact of living with unreliable perception is commonly profound.
In epidemiological terms, CVI has become the leading cause of major visual impairment in industrialized countries (Kong et al., 2012). This change can be linked to improvement in the survival rates of children born prematurely and/or those with neurological damage, as well as better prevention of visual deficits of ocular origin. CVI in children is common, potentially affecting at least 3.4% of children but many affected children go unidentified (Cavezian et al., 2010a; Williams et al., 2021). The proportion of children with learning difficulties attending special schools who have CVI is high (Black et al., 2019) and may be greater than 50% (Williams et al., 2021).
As with other neurodevelopmental conditions (e.g., autism spectrum conditions, learning disabilities, ADHD), children born with complex neurological conditions are at risk of developing CVI. Indeed, complications of premature birth and perinatal cerebral anoxia (or hypoxia), are the most frequent causes of CVI (Fazzi et al., 2009). Other common etiologies include head injury, stroke, brain infection and genetic neurodevelopmental disorders (Lueck and Dutton, 2015a). CVI results from lesions affecting the posterior visual pathways, the optic chiasm, lateral geniculate bodies, optic radiations, primary visual cortices, the middle temporal lobes (serving movement perception) and the visual association areas. The visual functions of these structures can be affected to varying degree, either in isolation or in a variety of combinations. When the thalamus is involved, the lack of vision tends to be profound (Ricci et al., 2006). Moreover, the resulting visual impairment may be exacerbated by disorders of eye movement control (Fazzi et al., 2009; Boot et al., 2010; Ortibus et al., 2011; Lueck and Dutton, 2015b). CVI is therefore an umbrella term referring to visual deficits not specifically related to ocular, optic nerve or chiasmatic damage, but to pathology behind the chiasm, in particular affecting the visual brain areas involved in integration, identification, analysis and interpretation of static and moving visual information, as well as in visual control of directed movement in the environment.
Typical clinical features of CVI may be manifest in a child even when brain imaging shows no detectable abnormality. Even adults with visual field loss following stroke can show normal MRI brain imaging in 30% of cases (Zhang et al., 2006a,b; Kelly et al., 2021), as can around 12% of children with cerebral palsy (CP) (Robinson et al., 2009; Towsley et al., 2011). It is therefore important to acknowledge that a report of a “normal” brain MRI in a child with neurovisual impairment does not exclude the diagnosis of CVI.
The term “minimal (or mild) brain injury” is sometimes used to refer to brain dysfunction in these children. Yet the consequences of the resulting visual difficulties and their impact on learning are far from “minimal,” having far-reaching implications for the child’s learning, motor, cognitive and social development (Chokron and Dutton, 2016), with effects on quality of life akin to those of loss of primary visual functions (Mitry et al., 2016).
Optical, Ophthalmological and Neurological Disorders Associated With Cortical or Cerebral Visual Impairment
Cortical or cerebral visual impairment can occur in isolation or in association with eye or optic nerve damage (Jacobson and Dutton, 2000; Fazzi et al., 2007). Moreover, around 50% of children with CVI have refractive error or impaired focusing (hypoaccommodation), necessitating spectacle correction (Pehere et al., 2018), so all such children need to have their range of accommodation checked (by dynamic retinoscopy) and must be refracted and have their post-refraction vision checked with their salient spectacle correction for both near and distance to plan their habilitation.
Lesions affecting the optic radiations lead to detectable ganglion cell absence in predictable retinal areas owing to a process known as retrograde transsynaptic degeneration (Lennartsson et al., 2014). The resulting loss of optic nerve fibers causes optic atrophy or optic disk cupping, which can be misdiagnosed as glaucoma (Jacobson et al., 2020), when the brain injury occurs in later pregnancy (Jacobson and Dutton, 2000), or optic nerve hypoplasia as a sequel to earlier injury (Zeki et al., 1992).
Children with CVI are frequently observed to have oculomotor disorders, difficulties in visual fixation or visual pursuit, hypometric saccades, or a disorder of gaze strategy (Stiers et al., 2002; Fazzi et al., 2004), as well as nystagmus due to periventricular leukomalacia (Jacobson et al., 1998; Tinelli et al., 2020). These conditions are associated with reduced visual performance.
Some children with CVI show academic success similar to their typical peers, while others show significant learning disabilities. Early brain damage is commonly diffuse, so tends to affect multiple brain functions, leading to associated neurological disorders including epilepsy, intellectual disability and CP, which can compound the deleterious effects of CVI on development (Lowery et al., 2006; Duke et al., 2020). Several studies have been conducted in children with CP to identify and characterize their associated CVI (Stiers et al., 2002; Fazzi et al., 2004; Pehere et al., 2018). These investigations have shown that children with CP commonly have difficulties in visuo-perceptual, visuo-spatial and visuo-constructive activities, regardless of their level of visual acuity (West et al., 2021). The severity and patterns of the deficits closely correlate with the extent and distribution of reduction of white matter as well as impairment of the dorsal stream pathway, interfering with attentional, spatial and motor aspects of visual cognition as well as with global visual processing (Fazzi et al., 2004; Duke et al., 2020). MRI tractography has shown that when the inferior longitudinal fasciculi in periventricular temporal lobe white matter are affected, the ventral stream dysfunctions alter detailed visual processing and in this way, visual recognition (Ortibus et al., 2012), and when the superior longitudinal fasciculi are affected, the resulting dorsal stream dysfunctions impair visual mapping of the visual scene, leading to simultanagnostic vision limiting visual search, with lack of accuracy of visual guidance of movement (optic ataxia) (Bauer et al., 2014) (see below for a detailed description).
Patterns of Cortical or Cerebral Visual Impairment
Many patterns of visual disorder can be seen, with each affected child having their own unique form of vision (Philip and Dutton, 2014). Depending on the topography and extent of the pathology, the deficit may impair any aspect of visual function, including central vision, peripheral vision (in all or part of the visual field), movement perception, gaze control, visual guidance of movement, visual attention, attentional orientation in space, visual analysis and recognition, visual exploration, visual memory and spatial cognition (Kelly et al., 2021), in any combination or degree. Recognition or visual memory of an object, a face or a place, the act of processing a set of stimuli or a complex scene, or difficulties directing movement or gesture under the control of vision, can be impaired in a variety of combinations. When the optic tracts, lateral geniculate bodies, optic radiations, or primary visual cortices are affected by a lesion, the resulting CVI manifests as lack of vision for all or a portion of the visual field.
Considering central vision, corrected visual acuities of children with CVI can be normal, subnormal or profoundly impaired. Contrast sensitivity perception is often significantly impaired (Good et al., 2012), while anomalous light brightness appreciation is a likely cause of photophobia, but the effects of CVI on perception of color have yet to be systematically studied.
Observed visual field deficits range from cortical blindness (i.e., lack of all visual sensation despite the integrity of the eye) to scotoma (i.e., lack of visual sensation for a small portion of the visual field). Intermediate disorders include tunnel/tubular vision (i.e., concentric reduction of the visual field), or its opposite, retention of peripheral vision (i.e., loss of the central visual field, while the peripheral visual field is preserved), homonymous lateral hemianopia (i.e., loss of the contralesional visual field), or quadrantanopia (i.e., loss of a visual quadrant). Lower visual field impairment due to periventricular leukomalacia (often associated with premature birth or CP) can be peripheral or complete, or manifest as degraded clarity in the lower visual fields (Jacobson et al., 2006; Tinelli et al., 2020), and it can be combined with dorsal stream dysfunction, leading to a major deficit in global visual processing. These different disorders may exist as such or be observed successively in the same patient who may show a degree of recovery over time (Guzzetta et al., 2001b; Watson et al., 2007; Werth, 2008).
The lower visual field impairment (which, if peripheral, may not be detected by classical central visual field testing) is characterized by behaviors and adaptive strategies such as walking with the head down, tripping over obstacles, reluctance to jump off a bench, holding onto the clothing of an accompanying adult (while pulling down) to provide tactile guidance for the height of the ground ahead when walking over uneven ground, going down stairs by running the heel down the riser, and probing the ground ahead with the foot to check whether a floor boundary is a step or not (Lueck and Dutton, 2015a). Reaching in the intact upper visual field is often more accurate than in the lower visual field when the latter is impaired. The accompanying dorsal stream dysfunction often leads to distress in crowded and noisy locations, inability to find an object in clutter or a friend in a group, or to read unless peripheral text is masked. Looking away from a face into an uncluttered area while listening to someone speaking, to facilitate auditory attention, is also common (Zihl and Dutton, 2015; Dutton et al., 2017).
Expansion of the lateral ventricles into temporal lobe white matter in children with hydrocephalus can produce a pattern of evident ventral stream dysfunctions, such as impaired processing of visual details, impaired visual recognition of faces and facial expressions, as well as difficulties with navigation, and object and word recognition. Shunted hydrocephalus, leading to CVI in 50% of cases, is a cause of this pattern of visual dysfunction (Houliston et al., 1999; Andersson et al., 2006).
Children with quadriplegic CP can be similarly but more profoundly affected. Complete lower visual field impairment from severe posterior parietal injury, combined with hemianopia from asymmetric cerebral hemisphere injury, may leave intact vision in only a single upper visual field quadrant. This needs to be sought out and optimally utilized for communication and learning. Associated severe dorsal stream dysfunction due to bilateral posterior parietal pathology can result in apparent blindness owing to additional probable Balint syndrome (see section “Cortical or Cerebral Visual Impairment, Visuo-Motor Coordination and Gesture Production”). Yet, elimination of all visual and auditory “clutter” by enclosing such children in a monochrome “tent” for a succession of half-hour periods can lead to visual behaviors gradually becoming manifest for the first time, even in older children, which can later be sustained even outside the tent (Little and Dutton, 2015).
Semiology of Cortical or Cerebral Visual Impairment in Children
Cortical Blindness, Visual Field Defects and Blindsight
Lesions in the optic tracts, lateral geniculate nucleus, optic radiations or primary visual cortex result in loss of vision in all or part of the visual field (depending on the location and severity of the lesion). The observed visual-field defects range from cortical blindness (i.e., loss of all visual sensation despite the integrity of the eye) to scotoma (i.e., loss of visual sensation in part of the visual field). Moderate impairments include tunnel vision (i.e., a concentric reduction in the visual field; see Figure 1) or conversely, peripheral vision (i.e., loss of central vision only). Some children are born with these impairments, whereas others acquire them at a later stage (Lueck and Dutton, 2015b).
Figure 1. Reading with a right sided visual field defect. Above: Upper arrows denote the position of the fovea with respect to the word when reading. Below: position of the right scotoma overlying the word when reading.
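To make these field defects more concrete, the short Python sketch below (not part of the original article; the image size, assumed central fixation point, defect extents and black fill are arbitrary illustrative assumptions, not clinical values) masks regions of an image array to approximate a right homonymous hemianopia, tunnel vision, a central scotoma, and lower visual field loss.

```python
import numpy as np

def simulate_field_defect(image, defect="right_hemianopia", radius_frac=0.15):
    """Return a copy of `image` (H x W x C floats in [0, 1]) with part of the
    visual field blanked, as a crude illustration of the defects described
    above. Fixation is assumed at the image centre; the radius fraction and
    the black fill are arbitrary illustrative choices."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2, xx - w / 2)   # distance from assumed fixation
    out = image.copy()
    if defect == "right_hemianopia":          # right half of the field lost
        out[:, w // 2:] = 0.0
    elif defect == "tunnel_vision":           # only a central island preserved
        out[dist > radius_frac * min(h, w)] = 0.0
    elif defect == "central_scotoma":         # central field lost, periphery kept
        out[dist <= radius_frac * min(h, w)] = 0.0
    elif defect == "lower_field_loss":        # lower field lost (bottom of image)
        out[h // 2:, :] = 0.0
    return out

# Example with a synthetic scene; any photograph loaded as a float array works.
scene = np.random.rand(240, 320, 3)
tunnel_view = simulate_field_defect(scene, "tunnel_vision")
```

Applying such masks to everyday photographs can help parents and teachers appreciate how little of a cluttered scene may be available to an affected child.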
Visual-field defects among children are defined by a loss of visual sensation in all or part of the visual field. Unfortunately, there is little public awareness of visual-field defects and they are not usually tested for clinically, whereas, curiously, ocular damage is diagnosed and treated early on. Thus, there is a profound lack of knowledge on cerebral visual deficits in children. Unfortunately, in pediatric patients the sequelae of neonatal cortical blindness are far too frequently diagnosed late [often around the age of 10, by which time the child has already completed several years of school, and sometimes after testing for a pervasive developmental disorder (PDD)] (Lueck et al., 2019).
In terms of signs and symptoms, the initial phase of cortical blindness typically involves loss of all conscious visual sensation as well as loss of the blink reflex to light or to visual threat. Children with such deficits behave as if they are blind, avoiding obstacles and people; they cannot even make basic distinctions between light and dark or between motion and stillness. However, this situation only lasts a few weeks: the child eventually recovers basic visual function, although this can be limited to a diminished visual field, where they only detect high-contrast or moving visual stimuli. Due to delayed diagnosis, children who suffer cortical blindness are often examined only several years after onset of their lesion. They exhibit a less classical set of signs and symptoms than do adults in the acute phase. However, despite the time that has passed since lesion onset, these children typically show a lack of interest in visual stimuli and have marked difficulties in fixing their gaze on such stimuli.
For children who have grown up with cortical blindness acquired during the neonatal period but who are evaluated only at an older age, the term cortical blindness is generally inappropriate (Watson et al., 2007; Werth, 2008). In these children, the sequelae of cortical blindness manifest as partial bilateral visual field defects, such as tunnel vision (perception within a 10–20° concentric area in the central visual field) or preserved peripheral vision only (absence of perception in the central visual field), often accompanied by other visual cognition disorders such as simultanagnosia, visuo-motor ataxia and disordered orientation of attention in space (Kelly et al., 2021), as well as disorders of visual recognition of objects and/or faces. Children with profound CP suffering from cortical blindness who are first seen with visual difficulties years after the onset of their deficit (owing to lack of screening at birth) present a less clear picture than that of the adult in the acute phase of bilateral occipital infarction. Even long after presentation, a reduced interest in visual stimuli can be observed, with great difficulty in mobilizing gaze toward visual stimulation. In spite of this, light and sound stimuli presented in the dark can trigger eye movements or visual fixation, which is often not evident in ambient light. It is important to note that, similar to adult patients, children with CVIs can also exhibit a dissociation between their abolished conscious perception and a type of non-conscious perception, known as blindsight, which enables them to avoid obstacles and process visual information in their blind visual field without being aware of doing so (see Weiskrantz, 2004 for an extensive discussion of this phenomenon). According to Tinelli et al. (2013), children with congenital lesions and CVI, unlike those with acquired lesions, retain residual unconscious processing of the position, orientation and motion of visual stimuli displayed in their scotoma. We have occasionally seen children who have sustained bilateral occipital lobe infarction and who sometimes manifest appropriate responses to smiles, suggestive of affective blindsight (Celeghin et al., 2015). Other children can show remarkably good mobility despite their very low vision, probably due to intact middle temporal lobe function causing the Riddoch phenomenon as described in adults (Arcaro et al., 2019), allowing them to distinguish moving stimuli in the blind visual field (Boyle et al., 2005; De Agostini et al., 2005; Tinelli et al., 2013). Unfortunately, these visual field impairments can be completely missed, partly because the child is unaware of his deficit, and partly because the disorder is not visible to the clinician and can only be identified by seeking it out (Pawletko et al., 2014).
Visual Cognition Deficits
Ventral and dorsal stream disorders give rise to more complex perceptual conditions, as described below.
Ventral stream dysfunction
Ventral stream dysfunction tends to result from temporal lobe pathology, leading to impaired visual analysis, visual recognition and route finding, while dysfunction of the middle temporal lobes can lead to impaired perception of movement (or dyskinetopsia), degrees of which are common in children born prematurely, especially if they have periventricular leukomalacia (Guzzetta et al., 2009).
Visual and spatial imagery disorders are commonly seen in clinical practice (for review see Tanet et al., 2010). These can be highlighted through tasks such as producing and copying geometric figures, arranging cubes, solving puzzles and mental imagery tasks (i.e., “visualizing a representation” needed to answer a question about the characteristics of the object). These approaches have yet to be systematically described in the literature.
Visual recognition disorders (known as visual agnosia in adults) are the result of damage to the occipito-temporal region and are not related to impaired verbal skills. Because learning visual information is almost impossible, the child has difficulty interpreting what is seen, but retains recognition through another sensory modality (e.g., touch). The most frequent recognition difficulties concern images and objects (Tanet et al., 2010; Pawletko et al., 2014). However, these difficulties may also concern faces and sometimes even reading and spelling (for review see Fazzi et al., 2009).
Dorsal stream dysfunction
Dorsal stream dysfunction results from posterior parietal pathology limiting parallel processing of multimodal mental mapping of the surroundings owing to attenuation of the superior longitudinal fasciculi (Bauer et al., 2015). This impairs visual exploration and limits attention through simultanagnostic vision (difficulty recognizing objects when presented simultaneously but with preserved ability to recognize them separately). It also impairs the dorsal stream non-conscious mental mapping of motoric space, leading to inaccurate visual guidance of movement (optic ataxia, which is characterized by the difficulty directing voluntary acts under the control of vision) leading to impaired visuo-motor coordination (Atkinson and Braddick, 2020). When severe, this manifests as Balint syndrome, which together with unilateral spatial neglect (also known as neglect or hemifield inattention) have been described in children (Gillen and Dutton, 2003; De Agostini et al., 2005; Drummond and Dutton, 2007; Philip et al., 2016).
Balint syndrome comprises three main clinical signs (Rizzo, 1993). First, what Balint called “psychic paralysis (or apraxia) of gaze,” which refers to an inability to voluntarily redirect gaze to a nominated target. Second, simultanagnosia, which corresponds to a restricted field of visual attention. Finally, optic ataxia, a major deficit of visuo-motor coordination. Balint syndrome is observed following bilateral parietal brain injury, but each of the features may be evident with less extensive lesions, such as those resulting from subtle posterior superior periventricular leukomalacia, often associated with premature birth (Saidkasimova et al., 2007). This form of CVI is the commonest variant we have observed (Dutton et al., 2004).
Unilateral spatial neglect, most often evident when it affects the patient’s left-side as it tends to be more severe on this side, is characterized by difficulties in reacting to, or acting upon stimuli presented in the hemispace contralateral to the brain lesion. This deficit, in which the patient behaves as if half of space on one side does not exist, can be observed in visual and manual activities (e.g., searching and reaching), but also at the locomotor level (e.g., showing a tendency to turn only toward the non-neglected side) (De Agostini et al., 2005). Clinically, head and eye rotation to the left does not compensate for the resulting left sided inattention, but body rotation does, indicating that the posterior parietal map of the surrounding environment is egocentric (Chokron et al., 2007).
Blurred vision or lack of visual field due to CVI may also impair visuo-motor coordination because the low vision is insufficient to allow movement to be accurately visually guided, giving a false impression of clumsiness. Typically, affected children’s performance of gesture and action are more accurate when tactile and kinesthetic input is used in favor of vision (for review and discussion see Stiers et al., 2002).
Behavioral Expressions of Cortical or Cerebral Visual Impairment in Children
Not only are CVIs not clearly visible, because the ocular system appears normal, but children growing up with CVIs due to ante- or neonatal injury also have no way of knowing that their vision is not “normal.” This is not a genuine anosognosia, or lack of awareness, but an actual inability of the child to recognize that her vision is disordered. This means that CVI is not consciously symptomatic. As a result, CVI often goes unidentified. Rather, it is the deleterious consequences for behavior, learning or interaction that alert parents, teachers and clinicians that something is amiss. This undoubtedly explains why this disorder is under-diagnosed and why it can be confused with other conditions such as autism, coordination acquisition disorders or learning disabilities (Chokron and Dutton, 2016; Chokron et al., 2020). In turn, the existence of undiagnosed CVI also explains the inflation of other default diagnoses in these children, such as behavioral or learning disorders (Lueck and Dutton, 2015b; Zihl and Dutton, 2015). Table 1 summarizes a number of situations in which CVI can only be indirectly expressed (Chokron and Dutton, 2016).
Cortical or cerebral visual impairment can take various forms and is expressed in daily life in multiple ways, hindering development, social adaptation, learning and social interaction (Chokron and Démonet, 2010). Most often, clinicians and parents focus on these highly evident manifestations, which are the consequences of CVI, but not on the CVIs themselves. For example, a child can be mistaken as suffering from dyslexia if there is consequent difficulty in reading, or as having developmental coordination disorder if impaired fine visual-motor coordination interferes with tasks, without recognition that CVI is the origin of the reading or motor difficulties.
Cortical or Cerebral Visual Impairment and Learning Disabilities
No-one can learn from information they cannot perceive. Children with impaired vision are unaware of what they do not see. Visual deficits due to CVIs likely impair learning in many children worldwide, simply because their visual needs are not being catered for. We all adapt to the circumstances we find ourselves in, and children with CVI, (whether or not their CVI has been identified) are no exception. If we cannot see something, we cannot respond to it. If an event is stressful, frightening or objectionable, we react emotionally to it, but when we have the capacity to overcome circumstances, we adapt our behavior accordingly. Children with CVI are no different and manifest the self-same patterns of behavior, as the natural consequences of the way they perceive their worlds, which of course is their normal. Such conditions may be seen as “behavioral disorders,” when in fact they are signatures for the now well-known underlying diagnosis of CVI. Indeed, vision plays an essential role in the development of sensory-motor and cognitive abilities (Atkinson and Braddick, 2007; Chokron and Dutton, 2016). It provides the facility to coordinate all the sensory-motor systems (Fraiberg, 1977). Visual experiences are the first involved in the development of mental representations (Warren, 1994; Fazzi et al., 2010), which will later be crucial for the development of concepts and abstraction. Vision also allows the child to learn through imitation, a process essential to human development.
Although studies on the impact of CVI on the development of young children are rare, there is a large literature on the effect of ophthalmologic visual impairment on development. Studies conducted in blind children, for example, report a marked delay in all areas of motor development compared to sighted children (Fraiberg, 1977; Sonksen, 1993). In a very similar way, children with multiple neurological disorders, such as those with CP, often manifest delay in postural development (Fraiberg, 1977), as well as difficulties in acquiring object permanence, which can be interpreted as a marker of the level of cognitive development (Fazzi et al., 2011). In children with CVI, the problem is even more complex because their perceptual disorders are most often defined and “diagnosed” without questioning their origin, nature or severity.
While there is no question of attributing all learning or behavioral disorders to visual function disorders, it is obvious that, conversely, given the role of vision in development, children with CVI are at significant risk of developmental disorders affecting the entire cognitive and social sphere, as described below. It is therefore crucial to establish the differential diagnosis between CVI and learning disorders, even if this has yet to be rendered systematic policy. Indeed, apart from neuropsychological disorders directly related to neurological injury, CVIs are likely to hinder the development of different skills and learning, as well as interfering with the way the child interacts with the world. It is common to observe that a child suffering from CVI involving visual field, visual attention, or visual analysis, commonly manifests learning, behavioral and/or social interaction disorders as a consequence (Jacobson and Dutton, 2000; Fazzi et al., 2009; Pawletko et al., 2014). Impairments in these functions may manifest as difficulties in reading, in coordination and in social interaction.
Cortical or Cerebral Visual Impairment and Reading
Word identification during reading is possible thanks to the great clarity of our central vision, served by the foveal zones of the retinae. However, reading also involves the use of clues in the para-foveal zones, i.e., in the area adjacent to the central visual field. A visual field disorder affecting all or part of the para-foveal field will therefore inevitably alter the quality of reading (see Figure 1).
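As a rough illustration of this point, the following Python sketch (again not from the original article; the character spans are arbitrary illustrative assumptions, not measured perceptual spans) prints the letters available around each fixation when the parafoveal field to the right of fixation is lost, mimicking the situation depicted in Figure 1.

```python
def visible_text(line, fixation, left_span=3, right_span=8, lost_right=True):
    """Toy model of one fixation during reading: letters within `left_span`
    and `right_span` characters of the fixated letter are 'visible'; a right-
    sided field defect removes the right parafoveal span. Span sizes are
    arbitrary illustrative values, not clinical measurements."""
    right = 0 if lost_right else right_span
    lo = max(0, fixation - left_span)
    hi = min(len(line), fixation + right + 1)
    return "".join(ch if lo <= i < hi else "_" for i, ch in enumerate(line))

sentence = "The children played outside before dinner"
for fix in (4, 13, 21):
    print("intact :", visible_text(sentence, fix, lost_right=False))
    print("defect :", visible_text(sentence, fix, lost_right=True))
```

With the right parafoveal span removed, each fixation yields far fewer upcoming letters, which is one simple way to picture why reading becomes slow and hesitant with a right-sided field defect.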
In fact, homonymous hemianopia is accompanied by a considerable slowdown and hesitancy in reading fluency as well as anomalies in the amplitude and latency of ocular saccades toward the two visual fields (contra and ipsilateral) (Fayel et al., 2014). On the other hand, several authors have shown the role of attention in reading skills, and even more so in learning, for which it has been shown that visual attentional skills are among the prime predictive factors (Plaza and Cohen, 2007). Thus, a massive attention deficit such as unilateral spatial neglect may be accompanied by neglect dyslexia, where reading errors will involve the neglected (usually left) part of the text and/or words (Laurent-Vannier et al., 2003; Lee et al., 2009; Chokron and Cavezian, 2011). Although reading disorders are not systematically associated with signs of unilateral spatial neglect, children with left-sided neglect may omit or substitute the left part of a text, the beginning of sentences, and have great difficulty returning to the line (Ellis et al., 1987).
Another attentional disorder that may impair reading skills is simultanagnosia. This deficit, in which the patient sees only singular elements, can limit the ability to group the letters seen, and consequently prevent the correct grouping of letters to make up the word.
Finally, CVI can also alter reading and learning due to the presence of a disorder in the recognition of spelling material. Apart from letter-by-letter reading that seems to be acquired, this recognition disorder seems to make it impossible to build up the lexical stock (by inability to recognize syllables and/or words) and is the cause of difficulty in learning to read. It is interesting to note that the case reported by O’Hare et al. (1998) shows that such a form of alexia may exist in children and that it may be the direct consequence of an occipital lesion.
Cortical or Cerebral Visual Impairment, Visuo-Motor Coordination and Gesture Production
Processing of visual information plays a key role in the design, control and execution of movement, especially manual skills (Costini et al., 2014a). Indeed, vision serves as the first support for learning postural control, and it is only at the next stage of learning that the child comes to use tactile and vestibular information (Guzzetta et al., 2001a). Consequently, a CVI may alter an individual’s psychomotor skills in the form of optic ataxia (Hay et al., 2020) and can easily be confused with a praxis disorder or dyspraxia (Chokron and Dutton, 2016). At the same time, just as unilateral spatial neglect has been associated with motor difficulties such as akinesia or hypokinesia in adults, current evidence suggests that CVI in children, and in particular unilateral spatial neglect, is most often associated with motor neglect as well as with praxis disorders (Gaudry et al., 2010; Chokron and Dutton, 2016). In addition, optic ataxia is defined as a specific difficulty in directing a ballistic gesture under the control of vision. This disorder therefore specifically affects visuo-manual and visuo-motor coordination and is characterized by difficulties in directing voluntary and coordinated acts under the control of vision (Gillen and Dutton, 2003), particularly for pointing and grasping activities.
Finally, it should be noted that the term “visuo-spatial dyspraxia” has tended to render the differential diagnosis between dyspraxia and CVI almost impossible to achieve. Indeed, visuo-spatial dyspraxia (Mazeau, 2005) includes a certain number of CVIs (reduction of the visual field, disorders of attention and of spatial organization), which are themselves thought to be responsible for gestural clumsiness (Costini et al., 2014a,b). Moreover, in children with CVI, the use of vision may interfere with motor achievement, whereas verbal instructions or performing a task without visual control tends to improve performance (Mazeau, 2005; Chokron and Dutton, 2016), which may explain why children with CVI often choose to reach to the side of where they are looking. At present, it is therefore necessary to review the concept of “visuo-spatial dyspraxia,” since logically, visuo-spatial disorders alone can explain motor awkwardness. Therefore, as recently proposed by some authors (Costini et al., 2014b), the term “visuo-spatial dyspraxia” should no longer be used in children with clear visual and/or spatial cognitive impairments that may alter their gestural production. Instead, in these children, it is essential to assess neurovisual and gestural disorders independently, with and without visual control (i.e., eyes open or closed), and to reserve the term “praxis disorders” only for children whose gestural production is similar under both conditions.
Cortical or Cerebral Visual Impairment, Social Interactions and Emotional Reactions
In the healthy individual, social interactions are based not only upon the exchange of verbal information but also non-verbal information, especially cues mainly expressed through eye contact, gestures and facial expressions. Even in the context of typical development, it is more difficult for a child than for an adult to analyze and give meaning to facial expressions that convey emotion, but this is even more difficult for those with a CVI. From the first months of life, the visual system thus allows the development of tools that are indispensable for interactions with others (Chokron and Streri, 2012). These include the implementation of purely visual communication and then joint ocular attention, which informs the baby from the age of 9 months (Baron-Cohen et al., 1997) about the location and direction of the individuals in front of him. According to Itier and Batty (2009), joint attention, or shared attention of two individuals on the same object, is one of the prerequisites for the development of Theory of Mind, which allows us to make causal inferences about the behaviors of others, being mature around the age of 4–5 years (Mitchell and Lacohee, 1991).
There is a particular challenge for parents who have to interact with a child whose CVI they are frequently unaware of. Indeed, parents of a child with ophthalmologic visual dysfunctioning are warned of future visual difficulties at an early stage and can adapt their behavior accordingly, by using auditory or haptic modalities instead of visual ones. On the contrary, in the child with a CVI, lack of knowledge of the disorder by the medical profession, the family and the child herself, does not allow the stakeholders to interpret the child’s particular behavior in terms of a potential visual cause, and therefore take appropriate action to cater for the causative visual disabilities (McConnell et al., 2021).
Interacting with a child who does not look at you, does not follow you with her eyes, does not recognize you, and does not smile in response to your smile, without being able to relate this set of behaviors to disorder of visual function, is extremely difficult, and likely to alter early relationships. Unlike the visually impaired child with ocular disorders, whose healthy occipital cortex will progressively reorganize itself to process other sensory information to compensate for the visual disorder (touch to see, air friction analysis, echolocation etc.) (Martin et al., 2016; Norman and Thaler, 2019), the child with CVI does not have healthy unused cortical areas that can directly compensate for the visual function disorder. Adaptation to the CVI cannot therefore take place spontaneously, but is dependent upon targeted customized adapted education and re-education, which can only be put in place once the diagnosis and characteristics of the underlying visual disorder have been established.
Numerous studies have shown that blindness or severe congenital visual dysfunctioning are frequently accompanied by autistic features (with a much higher occurrence than that observed in the general population), raising the question of the link between visual dysfunction and autism (Jambaqué et al., 1998; Sonksen and Dale, 2002). In a very similar way, Garcia-Filion and Borchert (2013) have recently found a considerably higher prevalence of autism spectrum conditions in a population of visually impaired subjects, being up to 25%, compared to the estimated occurrence of 0.6% in the general population. A recent study by Jure et al. (2016) confirms this hypothesis. All these studies strongly suggest that it is not the etiology of blindness that seems to be the cause, but rather the absence of visual perception from birth or very early in life.
Cortical or cerebral visual impairment can interfere with any or all aspects of visual processing, from detection to attention, orientation, exploration, search, spatial localization or recognition of objects, scenes, places or faces (Kelly et al., 2021). As a result, disorders of cerebral visual cognition such as those impairing face recognition, perception of facial expressions, gestures, movement, and the environment in general also hinder development of social and emotional interaction by impairing many of the processes necessary for communication, including acquisition of related language skills (Pawletko et al., 2014).
Although rarely mentioned in the literature, autistic-like conditions may exist in children with CVI and vice versa (Freeman, 2010; Tanet et al., 2010; Fazzi et al., 2019). These manifestations can lead to the official diagnosis of PDD despite the known presence of a brain lesion and neurovisual symptomatology. The question of differential diagnosis between sequelae of the spectrum of CVI to cortical blindness and PDD is thus increasingly being raised (Jambaqué et al., 1998; Freeman, 2010; Tanet et al., 2010; Pawletko et al., 2014; Chokron and Dutton, 2016; Fazzi et al., 2019) and it is now necessary to inform practitioners how to elicit the differential diagnosis between these conditions.
Some children with CVI may underestimate or overestimate certain facial expressions, especially negative ones, such as fear, anger or disgust, or confuse them with each other. Some children with facial recognition problems (e.g., prosopagnosia) may sometimes misrecognize, and so behave with strangers as if they know them, or conversely, fail to react appropriately to people they know, such as friends, and even siblings or relatives (Fazzi et al., 2009). These face recognition disorders can lead to serious problems in social interaction, especially if they are misunderstood by others, who interpret the lack of reaction as disinterest and not as a visual disorder, resulting in a genuine interaction difficulty. For some children with CVI, these difficulties in recognition and analysis can be so severe and disabling that they can lead them to isolate, reinforcing the image of withdrawal seen in autistic syndromes. According to recent studies (Freeman, 2010; Pawletko et al., 2014), CVIs have such a significant impact on social skills that it can lead many affected children to be misdiagnosed as having PDD, Asperger syndrome or autism (conditions now labeled under the autism spectrum term) (Jambaqué et al., 1998; Fazzi et al., 2019).
It therefore seems essential to be able to search early and systematically for CVIs in at risk children, in order to be able to treat them as quickly as possible, and avoid the occurrence of interaction and/or cognitive and/or behavioral disorders (Lueck and Dutton, 2015b; McConnell et al., 2021). The ability to make the best possible differential diagnosis between CVI and autism would also help to identify the most appropriate targeted intervention for each child, to bring about salient school adaptations, while providing useful parental guidance to optimally stimulate and teach these children as effectively as possible (Fazzi et al., 2019).
Implications for Early Detection of Cortical or Cerebral Visual Impairment in Children
The recognition of CVI in children is vital to offer parents, educators and stakeholders, management advice aimed at optimizing motor, cognitive and social development as well as school learning. The need to recognize visual function disorders, whether they are ophthalmological or neurological in origin, is now well established (Fazzi et al., 1999) and must result in the implementation of early interventions to improve the future for these children (Chokron and Dutton, 2016; Rossi et al., 2017; Chang and Borchert, 2020).
Unfortunately, in children with a CVI, a large number of behavioral manifestations can be neglected or misinterpreted if visual disorders of central origin are neither considered nor taken into account (Lueck and Dutton, 2015a). At present, failure at school leading to relational difficulties is likely to be systematically interpreted in terms of specific cognitive or behavioral disorders, without consideration of CVI, particularly in children whose visual acuities are normal (Lowery et al., 2006; Pawletko et al., 2014). At the start of schooling, it is often the focus upon the child’s activities, rather than upon the adequacy of their supporting visual processing, that can delay diagnosis (Cavezian et al., 2010b). The situation for children with additional associated neurological or ophthalmological disorders is equally problematic (West et al., 2021). In particular, the visual pathology can be “the tree that hides the forest.” In this scenario, the child who is already being followed up for ophthalmic disorders may not necessarily be subject to a complementary neurovisual assessment (Ego et al., 2015; Lueck and Dutton, 2015b; Chang and Borchert, 2020; McConnell et al., 2021). The diagnosis of CVI is crucial, as the condition can affect the child’s whole development and future, and because it can be confused with other conditions, thereby delaying or even preventing appropriate care. The diagnostic approach includes in-depth structured history taking, precise visual assessment and regular evaluation of those affected to identify the condition, assess its evolution and best adapt management to cater for specific needs at school (Ysseldyke et al., 2009; Salvia et al., 2013; Chang and Borchert, 2020). The assessment process thus varies according to the goal: to identify and characterize other disorders, implement targeted interventions, and make decisions concerning the provision of optimal appropriate educational or vocational services (McConnell et al., 2021).
The neuropsychological approach combined with structured history taking for CVI allows us to finely describe visual function disorders as well as to characterize their deleterious effect on cognitive, social and motor development. Optimal management (which we are not covering here) is founded on this finely profiled description, and aims at truly enhancing all the capacities of detection, discrimination, analysis, memory, and visual attention, as well as the processes involved in the mental organization and representation of space (Zihl and Dutton, 2015; Chokron, 2018; Chang and Borchert, 2020; McConnell et al., 2021), complemented by skilled teaching of parents and teachers about the unique visual difficulties of each child, and the salient actions that they need to take. Future research in this field will aim to standardize both assessment and management, to enable the collection of comprehensive data on the subject, and to disseminate diagnostic and rehabilitative methodologies for the dynamic assessment and management of CVI, to bring about optimal learning and development. While awaiting the dissemination of these tools, clinicians must finely assess the visual skills of children, taking care to distinguish between primary disorders on the one hand, and their consequences on the overall cognitive sphere on the other, thus avoiding diagnostic confusion, particularly with autism and intellectual disability. This is crucial, as it allows PDD/ASD and/or intellectual disability to be distinguished from the consequences of a primary CVI. A better understanding of such etiological mechanisms is central to proposing appropriate solutions as early as possible and to spreading knowledge of CVI to all professionals caring for children (McConnell et al., 2021).
Twenty years ago, CVI in children was rarely considered or mentioned. Recently, this condition in its many forms has been extensively researched. It is to be hoped that the coming years will see optimal diagnosis and management, especially for children born in a high-risk context (prematurity, neonatal hypoxia, neonatal stroke, and non-accidental head injury), for whom targeted screening for CVI is likely to prove effective and worthwhile.
Author Contributions
SC: design of the manuscript, writing, and editing. KK: editing final draft. GD: writing and editing. All authors contributed to the article and approved the submitted version.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Hacking is the practice of entering a computing system and exploring its weaknesses, in both hardware and software. This exploration is intended either to fix those weaknesses or to exploit them and harm the end user.
Hacking takes a variety of forms, including:
- Eavesdropping and intercepting Wi-Fi network sessions
- Entering networks through unsecured hardware, such as IoT devices
- Sending links that contain malware, which downloads onto a device once the link is clicked
History of hacking
The term "hack," which originated in the 1960s, initially meant delving deeply into electronic systems in order to improve them. Tech companies still use this positive connotation for their explorative ventures today. In the 1970s, telephone hacking was not commonly called "hacking" but rather "phreaking": the electronic manipulation of telephone systems to place calls without paying for service.
Two movies made the concept of hacking more popular: Tron (1982), which described breaking into a computer system as "hacking" into it, and War Games (1983), in which a teenager breaks into a high-security computer system that manages U.S. nuclear weapons. In real life, two famous teenage hacking groups—the Inner Circle and the 414s—breached the computer systems of significant medical, financial, and government organizations during the 1980s. Targets included the Los Alamos National Laboratory, the Sloan-Kettering Cancer Center, and Security Pacific Bank.
The 1980s brought personal computers to ordinary people, not just businesses and government agencies, and few security protocols had yet been developed. That availability allowed young computer enthusiasts to find their way to unauthorized information and data.
In 1986, Congress enacted the Computer Fraud and Abuse Act. The law spelled out the legal consequences hackers would face for breaking into computer systems, since the criminality of such intrusions and the punishments for them had not yet been fully established.
The concept of hacking continued to increase in popularity and featured in films and television programs. Hacking has also become a persistent threat to not only governments and large enterprises but also small businesses and all computer and mobile device users.
Because so many organizations are now heavily dependent on computing, the right hacking method can significantly damage nationwide business processes. In 2014, 783 data breaches were reported. According to Atlas VPN, 63 percent of cyberattacks in 2020 were intended for financial gain, and 81 percent of those were ransomware attacks; Atlas also reported that the average ransomware attack cost businesses $4.44 million in 2020.
Internet hacking techniques
Web browser-based hacking ranges from infection by malicious download to sophisticated Internet session interception. The following examples are some of the most common methods of Internet hacking:
- Fake WAP (wireless access point)—a Wi-Fi network created by a hacker, typically intended to impersonate a legitimate Wi-Fi source
- Bait and switch—a tactic to purchase legitimate advertising space and then switch the good link to a malicious link once the ad space has been approved by the selling company
- Credential reuse—use of login credentials stolen from one site or service to access a user's accounts elsewhere, exploiting the common habit of reusing passwords
- Credential stuffing—entry of many stolen credentials into an application portal until the right one works
- SQL injection—insertion of malicious SQL commands through an application's input fields into an insecure, Internet-connected database (a short illustration follows this list)
- Browser locking—the false indication that a user’s entire browser has been locked for alleged illegal activity by the user
- Cookie theft—a form of session hijacking in which a hacker accesses a computer or device using the cookies installed by a web application or browser
- IoT attacks—access to a network gained by using an IoT device, such as a smart speaker or smart home system, which are less likely to be secured than computers or phones
- DDoS attacks—a method of temporarily disabling a web server by flooding it with an overwhelming volume of requests, typically from many sources at once
- DNS spoofing (or cache poisoning)—insertion of false Domain Name System (DNS) data to reroute a computer user to a different IP address than the one initially requested
- Browser hijacking—information, such as advertisements, placed into a user’s browser without their permission
- Ransomware—the unwanted encryption of an individual or a company’s data, in which the victim doesn’t have the decryption key and the hacker demands a ransom to retrieve the data
- Trojan horses—legitimate-looking malware that, once downloaded, can make its way through a computer system and influence application behavior
- Viruses—unwanted code installed onto a user’s device, often through insecure websites or downloaded software that contains malware
- Worms—malware that can move between computer systems independently, uncontrolled by a hacker once they are released
- Phishing—a broad range of hacking and social engineering techniques that attempt to trick users into giving information
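The SQL injection entry above is easiest to grasp with a small, self-contained illustration. The sketch below uses Python's built-in sqlite3 module and a hypothetical users table; it contrasts a query built by string concatenation, which an attacker can subvert, with a parameterised query that treats the attacker's input as plain data.

```python
import sqlite3

# Throwaway in-memory database with one hypothetical user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'correct-horse')")

malicious_input = "' OR '1'='1"  # classic injection payload

# UNSAFE: the payload becomes part of the SQL text, so the WHERE clause
# is always true and the query "authenticates" without a valid password.
unsafe_sql = ("SELECT name FROM users WHERE name = 'alice' AND password = '"
              + malicious_input + "'")
print("unsafe:", conn.execute(unsafe_sql).fetchall())   # -> [('alice',)]

# SAFER: a parameterised query binds the payload as a value, so it is
# compared literally against the stored password and the login fails.
safe_sql = "SELECT name FROM users WHERE name = ? AND password = ?"
print("safe:  ", conn.execute(safe_sql, ("alice", malicious_input)).fetchall())  # -> []
```

Most database libraries offer an equivalent placeholder syntax, and using it consistently is the standard defence against this class of attack.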
Android hacking
The Android operating system and its code are based on open source. Because of this, it's easier for attackers to find weaknesses in the operating system and in the mobile devices using it.
For Android devices, security partly depends on the device manufacturers and how they design security within the hardware. Android users are able to customize their operating system better, but that same flexibility also opens the door for more threats. Androids are particularly vulnerable to malware, especially trojans. Trojans sent to Android devices through SMS contain links with malicious code.
iOS and Mac hacking
iOS (for Apple phones) and macOS (for Macbook computers) have heavy built-in security from Apple. Apple’s operating systems are built on proprietary, secured code; Apple has control over all updates and code. Users can only download apps that Apple permits, though that doesn’t mean that suspicious ones won’t slip through the cracks.
It’s more difficult for attackers to hack Apple operating systems, but it’s also hard to discover when a threat has entered the system. According to Malwarebytes, Apple’s unwillingness to work with outside developers and its extremely tight and inflexible systems can make threat detection very challenging once a threat actually does infiltrate an Apple device.
Macs have seen cyberattacks, too: in 2017, a phishing campaign emailed a Trojan horse, in the form of malicious links, to European computer users. The Meltdown and Spectre vulnerabilities also affect Macs. Calisto, a variant of the Proton malware that attacked macOS, went undetected within Mac operating systems for two years.
Ethical hacking and penetration testing
Ethical hacking is the exploration of computer systems to locate their vulnerabilities and then use that knowledge to improve them. The organizations that hire ethical hackers delineate how and where the hackers may enter and monitor a system. Often, hackers need to be certified to perform these jobs.
Since the 1980s, when teenage hacking groups accessed government and corporate Internet servers, hackers have sometimes had an edge over those whose resources they attack. Security systems and software for computers are defensive mechanisms, intended to prevent an attack that is assumed to come. Ethical hacking is a way for businesses to take a more offensive approach to security and potentially put them ahead of cyber criminals.
Penetration testing is a common method of ethical hacking. Penetration testing, or pen testing, can also focus on the employees of the company, since people are one of an organization's greatest cybersecurity risks. A hired pen tester might probe a company's network, often by launching planned phishing attacks such as deceptive emails, to see how employees respond.
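As a very small illustration of the reconnaissance step of a penetration test, the sketch below checks whether a few common TCP ports are open on a host, using only Python's standard socket module. The target address and port list are placeholders, and this kind of probing should only ever be run against systems you own or are explicitly authorised to test.

```python
import socket

# Placeholder target -- replace with a host you are authorised to test.
TARGET = "127.0.0.1"
COMMON_PORTS = [22, 80, 443, 3306, 8080]

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a TCP connection; success means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in COMMON_PORTS:
    state = "open" if port_is_open(TARGET, port) else "closed/filtered"
    print(f"{TARGET}:{port} {state}")
```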
How to prevent hacking
Prevention methods differ somewhat between private device users and organizations. Some of the basics that everyone should employ are:
- Use strong account passwords and strictly limit password reuse (a small sketch of safer password handling follows this list)
- Avoid websites that don't use HTTPS (look for a lock at the beginning of the URL; if it says HTTP or the browser warns that the site is insecure, don't enter any personal information on that site)
- Download only trusted PDFs and software from reliable, secured websites; make sure that the site is popular and reputable and look for software reviews if relevant
- Use password management software to store and safely share credentials
- Use a firewall (this is the bare minimum; enterprises should employ stricter security controls in addition to a firewall)
- Maintain anti-malware or antivirus software on employee computers using reliable MDM solutions
- Perform patches and updates on software and operating systems for all company-owned devices and servers in offices and data centers
- Require two-factor authentication for important company applications
- Employ a zero-trust architecture within the entire company network
- Create a strict bring-your-own-device policy for all employees
- Avoid open Wi-Fi networks while accessing company resources
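To make the first item in the list above concrete, the following sketch is a minimal illustration rather than a production credential system. It generates a strong random password with Python's secrets module and shows how a service would store only a salted PBKDF2 hash of the password instead of the password itself.

```python
import hashlib
import os
import secrets
import string

# Generate a strong, random 16-character password for the user.
alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(16))

# A service should never store the password itself: derive a salted hash.
salt = os.urandom(16)
stored_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def verify(attempt: str) -> bool:
    """Re-derive the hash from the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, stored_hash)

print("generated password:", password)
print("correct attempt:   ", verify(password))        # True
print("wrong attempt:     ", verify("password123"))   # False
```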
Other prevention methods, particularly for phones and computers, include:
- Turning off Bluetooth capabilities
- Clearing browser history regularly
- Using stronger passwords and passcodes
Security companies that offer anti-malware and anti-spyware programs include:
- Norton Security
Password management systems for handling passwords more securely include:
- Sticky Password
Also see hacker.
This article was updated June 2021 by Jenna Phipps.
This article includes research and content contributed by Nina Rankin.
Understanding the impact of the economic crisis on child health: the case of Spain
International Journal for Equity in Health volume 14, Article number: 95 (2015)
The objectives of the study were to explore the effect of the economic crisis on child health using Spain as a case study, and to document and assess the policies implemented in response to the crisis in this context.
Serial cross-sectional data from Eurostat, the Spanish Health Interview Survey, and the database of childhood hospitalisation were analysed to explore impacts on child health and on key determinants of child health. A content analysis of national data sources, government legislation, and Spanish literature was used to describe the policies implemented following the crisis.
Unemployment rates in the general population (8.7 % in 2005 and 25.6 % in 2013), and children living in unemployed families (5.6 % and 13.8 %) increased in the study period. The percentage of children living under the poverty line, and income inequalities increased 15–20 % from 2005 to 2012. Severe material deprivation rate has worsened in families with Primary Education, while the number of families attending Non-Governmental Organisations has increased. An impact on children’s health at the general population level has not currently been detected; however an impact on general health, mental health and use of healthcare services was found in vulnerable groups. Investment in social protection and public policy for children showed a reduction as part of austerity measures taken by the Spanish governments.
Despite the impact on social determinants, a short-term impact on child health has been detected only in specific vulnerable groups. The findings suggest the need to urgently protect vulnerable groups of children from the impact of austerity.
Background
Understanding the health impact of the Great Recession on specific vulnerable groups such as children is important to inform policies at national and international levels. The current crisis has affected the whole European economy, but the potential impact on health in each country depends on several factors, including the starting point; the mechanisms of social protection and social transfers; and the measures adopted by governments to deal with the crisis.
Three phases of the crisis have been described. The first wave (economic impact) was characterised by job losses and reduced household incomes in many countries. The second wave (social impact) was characterised by high levels of unemployment, particularly affecting younger people and increasing the number of those neither in employment nor in education or training (NEETs). The third wave (unequal recovery), which authorities and some media claim started in 2014, has been characterised by a slow, uneven return of growth to trend, with some areas recovering quickly but other areas remaining in recession.
Spain has been hit hard by the Great Recession. The welfare state in Spain was created more recently than in other western European countries, after the period of dictatorship. Spanish society and the economy inherited a protected and conservative financial sector, an insufficient and regressive fiscal system, and scarce social protection and benefits. Consequently, the pre-crisis welfare state was less comprehensive than in other European countries. In this situation any reduction of the welfare state is likely to result in even weaker social protection than in other contexts [5, 6].
Although there are many potential differences between countries, recessions pose risks to the health of the general population. Mental health problems, infectious diseases and suicides are becoming more common in countries affected by the economic crisis.
It is universally recognised that children represent a particularly vulnerable population group. Inequalities in early child development have been identified as a major contributing factor to inequalities in adult health, depending on the balance of adverse exposures and protective factors in early life [9, 10]. Few studies have been published to date on the impact of the current crisis on children's health [11, 12] and on the responses of specific governments, particularly in terms of the potentially mitigating or harmful effects of public policies affecting family economic security and social conditions. The objectives of the present work were therefore to explore the effect of the crisis on child health using Spain as a case study, and to document and assess the policies that have been implemented in response to the crisis in this context.
Methods
A descriptive and exploratory study was conducted using a mixed-methods approach. Routinely available data from before and after the crisis were analysed to monitor social determinants of child health. The periodic Spanish National Health Interview Survey (NHIS) and the Minimum Data Set of Hospital Discharge (MDHD) were used to check for changes in health behaviours and mental health indicators during the study period, and a synthesis of data on key policies affecting families with children was used to describe government responses. We sought to analyse trends in key social determinants affecting children (poverty and material deprivation) and in child health outcomes. A content analysis of the data sources on legislation and a recent supplement published by the Spanish Society of Public Health was used to describe austerity measures. Where possible we sought to identify any differential effects of policies on the basis of socioeconomic status, to test the hypothesis that more vulnerable groups have been disproportionately affected by the crisis and by some of the policy responses in Spain.
Source of data and variables
Key social determinants of child health
Serial cross-sectional databases from Eurostat, the OECD, and the Spanish National Institute of Statistics (http://www.ine.es/en/welcome.shtml) were analysed to describe unemployment, child poverty, material deprivation, and measures of income inequality. The source of these data was the Economically Active Population Survey (EAPS), a quarterly continuous survey of families whose main purpose is to obtain data on the workforce and its categories (employed, unemployed), as well as on the population out of the labour force (economically inactive population). The initial sample is about 65,000 families per quarter, which equals approximately 180,000 persons. Annual data from 2005 to the latest available year were included in the analysis.
The unemployment rate was analysed at the general population level and in the population younger than 25 years, as was the rate of young people living in unemployed families. Child poverty (%) was defined as the percentage of children living in households with income below 60 % of the median. The indicator of material deprivation was the percentage of children under 17 years with unmet basic needs according to the European Union Survey of Income and Living Conditions (EU-SILC, drawn from the same registries), stratified by family level of education (Table 1). Income inequality was assessed by the quintile share ratio S80/S20, which puts the income of the top 20 % of the population in relation to that of the bottom 20 %.
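Both indicators just defined are straightforward to compute once equivalised household incomes are available. The sketch below uses a small, made-up income sample rather than EU-SILC microdata; it derives the share of children living below 60 % of the median income and the S80/S20 quintile share ratio.

```python
# Hypothetical equivalised household incomes, one value per child.
incomes = [4_500, 7_800, 9_200, 11_000, 12_500, 14_000, 15_500, 17_000,
           19_000, 22_000, 26_000, 31_000, 38_000, 45_000, 52_000]

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Relative child poverty: share of children below 60 % of the median income.
poverty_line = 0.6 * median(incomes)
poverty_rate = sum(x < poverty_line for x in incomes) / len(incomes)

# S80/S20: total income of the richest 20 % over that of the poorest 20 %.
s = sorted(incomes)
k = len(s) // 5   # size of each 20 % tail (this toy sample divides evenly)
s80_s20 = sum(s[-k:]) / sum(s[:k])

print(f"poverty line: {poverty_line:.0f}")
print(f"child poverty rate: {poverty_rate:.1%}")
print(f"S80/S20 ratio: {s80_s20:.2f}")
```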
Specific reports on vulnerable families seeking help from non-governmental organisations (NGOs) such as Caritas were analysed to assess whether the austerity measures instituted affected vulnerable and low-income families more than those on higher incomes.
Population health impacts on families and children
The variables collected to analyse the impact on child health focused on nutrition habits and violence against children, informed by the results of a previous systematic review.
The NHIS is a Spanish nationally representative health interview survey conducted every 4 to 6 years on behalf of the Spanish Ministry of Health (http://www.msssi.gob.es/estadEstudios/estadisticas/encuestaNacional/ense.htm). The sample is independent and representative of each autonomous community (an organizational division of the Spanish territory). The present study included the last two surveys, conducted in 2006/07 (pre-crisis) and 2011/12 (after the crisis started). Both surveys used a multistage stratified sample. Within each household with children or adolescents (aged 0–15 years), one child was randomly selected for the children's questionnaire, which was administered to a proxy-respondent (mainly mothers). The study samples were n = 6838 and n = 4595 in the NHIS 2006/07 and 2011/12, respectively. The variables analysed were general health, mental health, and specific behaviours such as not having breakfast before leaving home, in the periods before and after the start of the crisis. Mental health was analysed by means of the parent version of the Strengths and Difficulties Questionnaire (SDQ, www.sdq.org) administered to the NHIS sample of 4–15 year old children. The range of scores on this scale is 0 to 40, and a higher score means worse mental health.
The MDHD database on childhood hospitalisation from the Spanish Ministry of Health (https://www.msssi.gob.es/) was used to analyse hospitalisations due to unintentional injuries (ICD-9 Group 17) and maltreatment (ICD-9 codes 995.50 to 995.59, and 301.51) for the period 2000–2012. Hospitalisation rates were computed for each year using the number of children hospitalised as numerator and census data as denominator.
Impacts on vulnerable groups
Data from a study on vulnerable families affected by eviction or at risk of eviction were compared with the NHIS 2011–12 to analyse the impact on vulnerable families with children. These data were collected during 2012 and included 177 families with children participating in the study.
Policies implemented in Spain since the crisis
National data sources, government legislation, and Spanish literature on the introduction of economic and policy measures were used to analyse the measures taken by the Spanish government, based on a previous review, with a focus on those measures with impacts on families and children. A narrative review of the content was carried out to assess the potential impact of each measure, either specifically on poor and vulnerable families or at the general population level.
We present trends in cross-sectional data on unemployment, child poverty, material deprivation and income inequalities. Analysis of the repeated cross-sectional data by means of joinpoint regression was used to assess trends in hospitalisation during the study period. Joinpoint regression tests whether there is a significant change of trend in any year and quantifies the annual percentage change (APC) and its statistical significance. The level of statistical significance was established at p < 0.05.
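As a rough illustration of the trend analysis described above, the sketch below assumes made-up annual hospitalisation counts and population denominators rather than the real MDHD data. Rates per 100,000 are computed first, and the annual percentage change is then derived from the slope of an ordinary least-squares fit of log(rate) on year; this is a single-segment simplification of joinpoint regression, which additionally searches for years at which the trend changes.

```python
import math

# Hypothetical annual hospitalisation counts and child population (<5 years).
years = list(range(2000, 2013))
counts = [10400, 10350, 10300, 10280, 10200, 10150, 10100,
          10080, 10050, 10020, 10000, 9980, 9950]
population = [1_900_000] * len(years)   # simplification: constant denominator

# Step 1: rates per 100,000 children.
rates = [100_000 * c / p for c, p in zip(counts, population)]

# Step 2: OLS slope of log(rate) on year gives the average log-linear trend.
n = len(years)
x_mean = sum(years) / n
y = [math.log(r) for r in rates]
y_mean = sum(y) / n
slope = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(years, y))
         / sum((xi - x_mean) ** 2 for xi in years))

# Step 3: annual percentage change implied by that slope.
apc = 100 * (math.exp(slope) - 1)
print(f"annual percentage change: {apc:.2f}%")
```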
All procedures were carried out following the data protection requirements of the European Parliament (Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data). The ethical and legal requirements in Spain were also adhered to.
Results
Impacts on social determinants of child health
Unemployment rates in the general economically active population increased from 8.7 % in 2005 to 25.6 % in 2013 (Fig. 1a). Unemployment in the population younger than 25 years increased from 19.6 % in 2005 to 55.5 % in 2013, and the proportion of children living in unemployed families increased from 5.6 to 13.8 % over the study period. Income inequalities increased by 20 % during the period 2005–2013, and the percentage of children living under the poverty line increased by 15 % from 2005 to 2012. The severe material deprivation rate increased from 9.9 % in 2005 to nearly 15 % in 2013 in families with primary education, and from 1.6 to 2.3 % in families with a university degree (Fig. 1b).
Some NGOs, such as the Food Bank Federation, assisted 700,000 beneficiaries in 2007 and 1.5 million people in 2012, most of them families with children. The Caritas report notes that 370,251 people were attended in 2007 and 1,015,276 in 2011. Four percent of the Spanish population lacks the resources to meet their basic daily food needs. A specific aspect of the impact of the crisis on child nutrition concerns school meals: these have become unaffordable for many families, which might be associated with nutritionally poorer diets (Table 2).
General health, family and childhood mental health
The proportion of children with perceived poor health according to the NHIS improved from 11 % in 2006 to 6.8 % in 2012 (Table 2). Results on children's mental health from the NHIS show that the total difficulties score of the SDQ was lower (better) in 2012 than in 2006 for the total population, with slightly worse scores for children with all family members unemployed.
Unintentional injuries and child maltreatment
Hospitalisations due to unintentional injuries showed an improvement, with an annual percentage change (APC) of −0.12 in the population under 5 years (approximately 10,000 hospitalisations/year) (Fig. 2a). No changes were found in trends of hospitalisations due to maltreatment: the APC was 0.23 from 2000 to 2013, with rates of approximately 10/100,000 in children younger than 1 year (Fig. 2b).
Health of specific vulnerable populations
There were 177 children in the vulnerable group attending Caritas: one subgroup required Direct Emergency Attention because of the need for immediate re-housing, and another attended the Housing Mediation Service because they needed help to negotiate their debt (Table 2). Fair or poor health was more frequent in both the direct emergency attention and the housing mediation service groups. Compared with data from the NHIS, the probability of suffering a mental health problem was more than 10 times higher among boys in the greatest need and 5 times higher among girls, and similar differences were found for not having breakfast.
Policies implemented in Spain since the crisis
Table 3 shows the laws and regulations implemented by successive Spanish governments following the crisis, with commentary on the potential impacts on children and families. The main structural and budgetary changes in Spain started in 2010 with a vigorous policy of fiscal stability, in which the savings were to come mainly from the expenditure side (austerity) rather than from revenue. Successive governments legislated by decree to enact a series of initial austerity measures, such as wage cuts, and the Spanish Constitution was reformed to give budgetary stability precedence over other commitments, in accordance with the requirements of the European Union.
Most measures were aimed at reducing spending (austerity), with no attempt to ensure social protection for children. A previous analysis comparing investment in social protection for children in 2007, 2010 and 2013 in constant euros showed that spending in 2013 was 6.8 % lower than in 2007 and 14.6 % lower than in 2010. Investment in public policies for children in Spain was 1.4 % of GDP in 2012, versus 2.2 % in the EU28 in 2011. The budget cuts in public education have affected child pre-schooling, among other sectors. Even before the crisis, families evicted from their houses remained liable for their mortgage debts; this policy has a negative impact on evicted families and on families struggling to keep their houses because of precarious jobs or unemployment. The number of evicted families doubled between 2007 and 2012.
Access and use of healthcare services
Although children aged 0–18 years theoretically continue to have universal healthcare coverage by law, some cases of barriers to access were detected, as well as variability in the implementation of recent policy measures, with a great potential impact on vulnerable children. Moreover, the breaking of universal healthcare coverage in the adult population was also related to cases of healthcare exclusion among children, who were refused treatment by hospitals, and to parental fear of having to pay for the visit (Table 2).
Discussion
The study results show a significant deterioration in the social determinants of child health since the crisis and over the period during which austerity measures have been implemented, with increasing social inequality and child poverty. We do not find any immediate effect on child health at the general population level, but there is evidence of an impact on vulnerable groups.
Limitations of the present study include the lack of updated and disaggregated data on the child population with which to study the impact on small areas or vulnerable groups. Moreover, the analysis of population-average indicators can mask inequalities in specific population subgroups. Secondly, there are difficulties in establishing a causal association between austerity measures and health outcomes: the available data do not allow analysis of the association between social determinants and health outcomes, because these aggregate data come from different sources and cannot easily be combined. Further studies using small-area or individual-level data are recommended to address these questions. Furthermore, longer-term follow-up is required to assess the plausible long-run impacts of a deterioration in the social determinants of child health.
Budget cuts in public education have limited the possibility of promoting more equitable growth and development, and early schooling and socialisation of children have been reduced in areas with greater economic deprivation.
Difficulties in maintaining housing, whether paying the mortgage or the rent, have worsened alarmingly for families with scarce resources. Spanish law before the start of the current crisis already penalised unfairly those who could not pay their mortgages, a situation worsened by rising unemployment and the precarious working conditions of many families during the economic crisis. Moreover, the increase in the price of energy has made it harder for families to meet their basic needs for water, electricity, heating, and so on. These factors have shown a large negative impact on the health of children in these vulnerable groups.
Spain is one of the developed countries with the highest increase in the percentage of children at risk of poverty. One in three children in Spain lives at risk of poverty, according to the latest available data. In the face of cuts to welfare support, NGOs have seen a dramatic increase in demand for services since the onset of the economic crisis: they have tripled the number of families attended for problems of housing, food or social support, while public investment in child protection has declined [21, 22]. Romania, Spain, Bulgaria, Greece and Italy share the highest rates of child poverty and the lowest impact of family and childhood benefits. Moreover, a greater impact has been shown in children than in other population groups, similar to what has happened in other countries [23, 24].
Social determinants of child health have deteriorated and social inequalities affecting children have increased in Spain with the economic crisis, a fact known to be associated with worse health outcomes in the medium and long term, according to the evidence from previous studies. As part of the response to the crisis, successive Spanish governments have established austerity measures and structural changes in labour protection, as well as in social and health care systems. These measures are affecting families with fewer economic resources, families with all members unemployed, long-term unemployed families, and unemployed single-parent families.
The population of children under 18 years old legally retains universal access to healthcare services after the promulgation of Decree Law 16/2012, which broke the universality of access and changed the paradigm of the Spanish healthcare system. However, some cases of healthcare exclusion of children have been reported, and variability in the application of the decree has been found: some regions of Spain continue with the previous system without barriers to access, while in other regions the decree is applied variably. These barriers to population subgroups, such as migrants in an irregular situation, are associated with an indirect impact on the child population and with insecurity in families in these conditions, which are usually those who most need support and access to preventive measures and healthcare services. Likewise, the increase in copayments creates greater difficulty for the most vulnerable groups. The results of the present study are consistent with those of another study that analysed austerity measures in Europe and showed that they can exacerbate the short-term public health effects of an economic crisis.
The finding that, despite the impact of austerity in Spain on the social determinants of child health, a short-term impact on child health has been detected only in specific population subgroups and not in the general child population is counter-intuitive and challenges the hypothesis that austerity is detrimental to the health of child populations. Previous studies found poor mental health in the adult population with an increase of 19 % in the percentage of mood disorders, 8 % in anxiety, and 5 % in alcohol abuse after the crisis started, and these increases were associated with unemployment and housing problems. Increased incidence of suicide attempts in the general population was also found . We have not detected changes in children’s mental health trends nor an increase in child maltreatment, effects that were found in a systematic review of the impact of the current crisis in child health . It is likely that, besides the scarcity of data, families play a protective role in Spain, maybe greater than in other contexts. It could be that families take a palliative and resilient role against the negative effects of the crisis. However, if family stress becomes chronic it can cause depletion of family resources. Moreover, according to the conceptual model and the available evidence, the effect of worsening social inequalities in childhood has consequences for the distribution of health in adulthood , and specific changes in law that counteract growing income inequalities through the tax, labour, and welfare systems could have sizable benefits for population health and health disparities.
In conclusion, this study aimed to summarise the evidence about the impact of the economic crisis and austerity measures on social determinants and child health. The impact of austerity policies has likely increased child poverty and deprivation with likely effects on child health especially among vulnerable groups. The findings of this case study suggest the need to urgently protect vulnerable groups of children from the impact of austerity.
References
Dávila-Quintana CD, González L-VB. Economic crisis and health. Gac Sanit. 2009;23:261–5.
Social exclusion Task Force. Learning from the past: working together to tackle the social consequences of the recession. London: Social exclusion Task Force; 2009.
Cortés I, López-Valcárcel B. Crisis económico-financiera y salud en España. Informe SESPAS 2014. Gac Sanit. 2014;28(Supl 1):1–6.
Navarro V, Torres López J, Garzón EA. Hay alternativas. Propuestas para crear empleo y bienestar en España. Madrid: Sequitur; 2011.
Muntaner C, Borrell C, Ng E, Chung H, Espelt A, Rodriguez-Sanz M, et al. Review article: politic welfare regimen and population health: controversies and evidences. Soc Health Illness. 2011;33:946–64.
Esping-Andersen G. The three worlds of welfare capitalism. Cambridge: Polity Press; 1990.
Karanikolos M, Mladovsky P, Cylus J, Thomson S, Basu S, Stuckler D, et al. Financial crisis, austerity, and health in Europe. Lancet. 2013;381:1323–31.
Gili M, García Campayo J, Roca M. Crisis económica y salud mental. Informe SESPAS 2014. Gac Sanit. 2014;28(Supl 1):104–8.
Early Child Development Knowledge Network (ECDKN). Early child development: a powerful equalizer. Final report of the Early Child Development Knowledge Network of the Commission on Social Determinants of Health. Geneva: World Health Organization; 2007.
Pillas D, Marmot M, Naicker K, Goldblatt P, Morrison J, Pikhart H. Social inequalities in early childhood health and development: a European-wide systematic review. Pediatr Res. 2014;76:418–24.
Rajmil L, de Sanmamed MJ F, Choonara I, Faresjö T, Hjern A, Kozyrskyj AL, et al. Impact of the 2008 economic and financial crisis on child health: a systematic review. Int J Environ Res Public Health. 2014;11:6528–46. doi:10.3390/ijerph110606528.
De Curtis M. Economic recession and maternal and child health in Italy. Lancet. 2014;383:1546–7.
Komro K, Burris S, Wagenaar AC. Social determinants of child health: concepts and measures for future research. Health Behavior Policy Rev. 2014;1:432–45.
Equipo de estudios Cáritas Española. Empobrecimiento y desigualdad social. VIII informe del Observatorio de la realidad Social. Madrid: Cáritas; 2013.
Novoa AM, Ward J, Malmusi D, Díaz F, Darnell M, Trilla C, et al. Condicions de vida, habitatge i salut. Mostra de persones ateses per Càritas Diocesana de Barcelona. Barcelona: Càritas Diocesana de Barcelona; 2013.
Repullo JR. Cambios de regulación y de gobierno de la sanidad. Informe SESPAS 2014. Gac Sanit. 2014;28(Suppl 1):62–68.
Kim HJ, Fay MP, Feuer EJ, Midthune DN. Permutation tests for joinpoint regression with applications to cancer rates. Stat Med. 2000;19:335–51.
Flores M, García-Gomez P, Zunzunegui MV. Crisis económica, pobreza e infancia. ¿Qué podemos esperar en el corto y largo plazo para los “niños y niñas de la crisis”? Informe SESPAS 2014. Gac Sanit. 2014;28(S1):132–6.
Rajmil L, López-Aguilà S, Mompart-Penina A. Calidad de vida relacionada con la salud y factores asociados al sobrepeso y la obesidad en la población infantil de Cataluña. Med Clin (Barc). 2011;137(Supl 2):37–41.
UNICEF Office of Research. Children of the recession: the impact of the economic crisis on child well-being in rich countries. Innocenti Report Card 12. UNICEF Office of Research: Florence; 2014.
Leahy A, Healy S, Murphy M. The European crisis and its human cost. A call for fair alternatives and solutions. Caritas Social Justice: Ireland; 2014.
Save the Children. Child poverty and social exclusion. Brussels: Save the Children; 2014.
Taylor-Robinson D, Whitehead M, Barr B. Great leap backwards. BMJ. 2014;349:g7350. doi:10.1136/bmj.g7350.
Taylor-Robinson D, Rougeaux E, Harrison D, Whitehead M, Barr B, Pearce A. Malnutrition and economic crisis. Rise of food poverty in the UK. BMJ. 2013;347:f7157.
Pickett KE, Wilkinson RG. Income inequalities and health: a systematic review. Soc Sci Med. 2015; http://dx.doi.org/10.1016/j.socscimed.2014.12.031
Cantó Sánchez O, Ayala CL. Políticas públicas para reducir la pobreza infantil en España: análisis de impacto. Madrid: UNICEF Comité Español; 2014.
Rajmil L, de Sanmamed MJ F. Universal health-care coverage in Europe. Lancet. 2012;380:1644.
Córdoba-Doña JA, San Sebastián M, Escolar-Pujolar A, Martínez-Faure JE, Gustafsson PE. Economic crisis and suicidal behaviour: the role of unemployment, sex and age in Andalusia, southern Spain. Int J Equity Health. 2014; doi: 10.1186/1475-9276-13-55
Antentas JM, Vivas E. Impacto de la crisis en el derecho a una alimentación sana y saludable. Informe SESPAS 2014. Gac Sanit. 2014;28(Supl 1):58–61.
Síndic de Greuges de Catalunya (Ombudsman). Report on childhood malnutrition. Report on the request to the Parliament. Barcelona: Sindic de Greuges 2014. http://www.sindic.cat/site/unitFiles/3506/Informe%20malnutricio%20infantil%20catala.pdf.
González-Bueno G, Bello A. La infancia en España 2014. El valor social de los niños: hacia un Pacto de Estado por la Infancia. Madrid: UNICEF; 2014.
The authors would like to thank Louise Seguin and Takis Panagiotopoulos for their comments on a previous version of the manuscript.
The authors declare that they have no competing interests.
LR, NS, and DTR conceptualised and designed the study, analysed the data, and drafted the initial manuscript. AS participated in the analysis, and contributed to the first draft of the manuscript. All authors approved the final manuscript as submitted.
The Irish Diaspora
edited by: Andy Bielenberg
London, Longman, 2000; 368pp.; Price: £14.99
The American Irish: A History
Kevin Kenny
London, Pearson, 2000; 341pp.; Price: £17.99
American Pharaoh: Mayor Richard J. Daley - His Battle for Chicago and the Nation
Adam Cohen, Elizabeth Taylor
Boston, Little, Brown and Company, 2000; 614pp.; Price: £25.00
The Encyclopedia of the Irish in America
edited by: Michael Glazier
Notre Dame, University of Notre Dame Press, 2000; 1009pp.; Price: £89.95
Reviewer: Donald MacRaild, University of Ulster
Scholars continue to find new things to say about the Irish Diaspora. For many of them-especially those in Ireland and America-the term Diaspora, when applied to the Irish, has a deep, politicised meaning. We can see this point exemplified in two observations. First, the term Diaspora once was used mainly to describe the Jewish experience; only occasionally (but with increasing frequency lately) has it been applied to other groups with traumatic migration histories, such as the victims of the African slave trade or the Armenians who fled before the Turks. Secondly, the application of the term Diaspora to the Irish is (at least in part) shaped by a particular critique of British rule in Ireland and of the traumatic Great Famine. For nationalist scholars, the hunger that accompanied famine is seen to have been exacerbated unnecessarily by British callousness; the flight from Ireland thus becomes 'exile' not 'emigration' and the connection with Africans or Jews becomes complete.
This increasing deployment of the term Diaspora 1 may be a good thing; the term itself may provide historians and social scientists with some of the points of reference they need to plot what was a global phenomenon. It certainly makes scholars think in comparative terms-and this is no bad thing. However, there is a potential downside. By allowing a broader usage of the term Diaspora and by deploying the term for an increasing number of groups, there is a sense in which all migrant groups suddenly seem to be locked into a competition of relative victimhood. Whether or not this might affect the utility of the term, depends very much on the reader's political viewpoint. Whatever that viewpoint, though, there is a sense in which Diaspora studies represents a return to the 'emigration as trauma' school which dominated American writing on migration from Thomas and Znaniecki in the aftermath of World War 1 to Oscar Handlin in the Fifties.2 This is despite the important work of scholars of migration such as Frank Thistlethwaite and John Bodnar who have stressed the more constructive (and complicated) nature of migration in the Atlantic world.3
What is perhaps most worrying, however, is the fact that most writers do not actually attempt to define the term Diaspora even though they use it with abandon. This is certainly true of Andy Bielenberg's collection of essays, The Irish Diaspora, which is the end product of a conference held at University College Cork in the summer of 1997. It should perhaps be pointed out at this stage that the introduction is actually written by Piaras Mac Éinrí, Director of the Centre for Migration Studies, Cork, rather than by the editor himself. Nevertheless, no attempt is made to explain how the contributors use the term. When we read the book, in fact, we find that most of the authors don't use it at all. As with so many studies, then, an opportunity is lost and the term simply becomes a collective noun rather than an element of social theory.
That this is the case does not diminish the value of individual contributions. While very few authors seek to place what they are writing into the wider context of this book, there is some very good work on offer. Certainly, one cannot help but note the variety and breadth of research that is currently being conducted under the banner of the Irish Diaspora. The editor has been assiduous in putting together essays that range broadly over both chronology and area. Britain, America and the former colonies all receive considerable coverage. Indeed, the inclusion of the latter enables interesting papers from Bielenberg himself and Michael Holmes to provide coverage of aspects of the Irish Diaspora that most scholars will not be familiar with. Similarly, Bielenberg has also conjured up essays that are historical and sociological; some of which are (near) contemporary; and others covering the pre-famine period. Breda Gray's study of 1980s London is an interesting example of how our growing interest in the more recent Irish migration is formulating new research questions. The inclusion of work demonstrating new methodologies and important new research findings also adds to the value of the volume. Ruth-Ann Harris's discussion of her missing friends research, using The Boston Pilot column which for years sought to bring separated migrants back together, is a very good example of this.
When thinking 'Diaspora', we surely must think in comparative terms. Yet few of the essays in this volume address the Irish in more than one polity. The exceptions are Malcolm Campbell's study of migrants in rural Minnesota and New South Wales and Enda Delaney's wide-ranging attempt to place post-war Irish migration to Britain into European perspective. Both writers succeed well. Campbell's piece is doubly stimulating for, in addition to the considerable comparative aspects, he draws upon the rural world. In so doing, he demonstrates that (as Donald Akenson has been saying for years)4 there is more to the Irish than slum-dwelling and machine politics. The Irish could make a good fist of agricultural work in foreign countries, irrespective of the fact that a majority ended up in towns and cities. Delaney, by considering the Irish alongside migrants from comparable economies, particularly in Mediterranean countries, brings fresh new ways of thinking to bear on the problems of language and the issue of return migration. While Mac Éinrí makes the point about Irish migration being unique in the period between the Great Famine and the mid-20th century, Delaney clearly asserts that this was anything but the case in the later period. The implication of Delaney's and Campbell's work is that one of the last great barriers to our understanding of the Irish Diaspora (and indeed of any Diaspora) is our weakness with the comparative method. More research of this type is needed.
The part of the Irish Diaspora which we know best is America. New books on this aspect of Irish migration and settlement continue to flow apace. Kevin Kenny's new book, The American Irish: a History, therefore, stands out as an important contribution, offering a compelling narrative for the specialist and the general reader alike, as well as being a must for students. In offering such a volume, he demonstrates a formidable ability to synthesise a vast body of monographs and articles. But Kenny deserves far more credit than the mere implication that he is a bag carrier for someone else's scholarship. As author of one of the best books on an Irish-American theme, Making Sense of the Molly Maguires (1998), Kenny is well placed to make sense of a literature running to thousands of titles. The American Irish provides vital context for understanding the breadth and depth of issues underpinning Irish American society as it has emerged, changed and developed over the past three hundred years. This is the route map through Irish America we have long needed.
There is something distinctly un-American about Kenny's book, which might be explained by his Irish, rather than American-Irish, nativity. This book is not a celebratory anthology in the style of some of the old classics: populist works like some of those written in the era of the Kennedys or before.5 Nor is it a collection of quirky anecdotes about boozers and boxers, priests and politicos, womanisers and gangsters. Kenny's study is underpinned by a solid theoretical strength. It moves forward with a sense of period that is much stronger than readers will find in some of the more eclectic early general studies, such as Carl Wittke's Irish in America (1956), or the unashamedly one-sided approach of Lawrence McCaffrey's The Irish Diaspora in America. Too many books on Irish America in the past have been written in black and white terms about the Irish experience of migration, stressing the Catholicism, poverty and oppression of what was in fact a much more variegated transatlantic population movement. English colonial evil (a viewpoint undoubtedly endorsed by some degree of truth) has dominated so many books on the subject that they cannot be recounted here. On the other side, too many American ethnic histories have celebrated the achievements of immigrants in the new Republic in a rather teleological way, the aim seemingly being to recount how this or that group made good in America, land of opportunity, before contributing uniquely to whatever it was that came out of the melting pot. All too often such books ignored the complexities of the migrants' experiences, successful or otherwise (and here I refer to more than just American Irish writers and histories).
The thirty years or so after 1914 saw a gradual closing of the 'Golden Door' and, as a counterweight, came a great outpouring of books offering near-biblical tales of a variety of immigrant groups as they strove to imprint their signature on American life.6 The story being told in those days was of the migrants' value-added contribution: the Scandinavian contribution in the mid-West; what the Germans did for brewing; how the Irish ran the church, etc. Of course, in the new nation the struggle for recognition was, in a sense, even more important than in the Old World: political and civic fluidity meant there was more to play for, with potentially higher rewards round the corner. The notion of American sidewalks being paved with gold grew from a sense of hope far more apparent, far more locked into working-class folk-lore, than was ever the case with the Irish (or any other incoming group) in Britain. Yet the common logic of these early immigrant histories was the fact that American society was beginning to reject the very people who were being written about. Immigrant histories in those days were, in some respects, an attempt to re-impose the notion that a cosmopolitan culture was a central strength of the American self-image.
The immigrants' story in America is as much a part of American passions as class is in Britain. Yet, there has long been (in this reviewer's opinion) a need for more books which emphasise a traditional socio-economic approach to the Irish in America, of the type shown (admittedly in case-study form) in Burchell's excellent monograph on the San Francisco Irish, or in some of Donald Akenson's works on the Irish in Canada.7 It is Kenny's attempt to quantify, to objectify and to assess in the round which prompts praise for his marking out of the terrain in general terms.
Kenny's book takes a chronological approach, starting with the eighteenth century (and therefore, importantly, with Protestants), moving through the period of the Great Famine and on to the decades beyond World War II. In assessing each of these periods, Kenny relegates celebrations of ethnic achievement in favour of a multi-dimensional approach to the way in which immigrant and indigenous cultures feed off each other. This is not simply a book about how the Irish made America; it is also very much a study of how America re-made the Irish. Perhaps most striking of all is Kenny's presentation of important historical context on Ireland itself. Again, an observation can be made to the effect that far too many scholars embark on studies of immigrants in particular places (America, Britain, Australia) without knowing very much about the land from which they were sent forth. This has resulted in some curiously naïve and myth-laden writings on the Irish dimension of the American immigrant story. A rather simplistic paradigm of cruel landlordism, British colonial brutality, and the much-bandied concept of 'anti-Irish racism' is too often used as the backdrop to the migration story. Elements of truth, of course, underpin such conceptualisations, for no one could begin to imagine Irish history in this period without some sense of Britain's (or England's) wrongs. But too often there has been a tendency to caricature the true complexity of social relations and economic fortunes in Ireland.8 The fact that Irish 'peasants' 'had no vote and no stake in government' may have struck an American of the 1950s as odd and unfair.9 But it would have been of no surprise whatsoever to the Chartist, Samuel Holberry, who was walked to death on a York gaol treadmill for planning a rising in Sheffield, nor to the Tolpuddle Martyrs who were transported to Australia for forming a union bound by a secret oath. Irish 'peasants' or English labourers: neither group enjoyed much political power in the 1830s and 1840s.
Kenny attempts a more dispassionate analysis of the Irish side of things, and this is crucial. It is rare for a scholar of the Irish in America to demonstrate such a keen appreciation of the Irish backdrop to the emigrant saga. Each chapter contains a series of passages explaining vital aspects of Irish history, at the given point in time, as they relate to the emigrant experience of those heading to the United States. Landholding systems (cottier, runrig/rundale, etc) are outlined; the Penal Laws are indicated where relevant; the famine is explained rather than enshrined. Indeed, some of these pages on Irish history offer as succinct an insight into the socio-economic conditions of Irish life as anyone could provide in the space allowed. This material is perfect for the student reader, and not just undergraduates.
The chapters on the eighteenth century and on the post-war period must have been the most difficult to write. There is so much material on the nineteenth century that one could not begin to imagine covering it all. By contrast, the age of the Scotch-Irish migration has attracted less scholarly interest (though a formidable body of work still exists), while more recent times are so crowded with contemporary images and unfinished business that they are difficult to encapsulate. What Kenny says about the eighteenth century is more than synthesis; he manages to capture some of the most important aspects of colonial American history and to see them through the prism of ethnicity. Indian fighting might have been heroic, but it was often far from gallant. Butchery and the frontier went hand-in-hand, and the Scotch-Irish group was involved in many of the major skirmishes. Kenny also captures the cultural imprint of the Scotch-Irish in a way that will not be familiar to many people. Words that the Scotch-Irish introduced to the dialects of Trans-Appalachia are the subject of a fascinating discussion; the Irish contribution to American country music provides another valuable source of cultural transplantation and adaptability that Kenny handles well. These are the very things, in fact, that were brought to life in one of the episodes of the recent TV series, The Irish Empire, which looked at the cultural impress of the Irish abroad. The nineteenth century-that century of so many millions of poor emigrants-is detailed with an imperious control. Again, the interweaving of ethnic and indigenous cultures works well: sections on Nativism and the Know-Nothing movement; labour and gender; and the recurring theme of nationalism deliver to the reader, time and again, the duality of being 'ethnic' and 'national', 'Irish' and 'American'.
One of the sections which most fascinated this reviewer is Kenny's short but sharp discussion of the 'wages of whiteness', a controversy which has been brewing for quite some time in America. There is some particularly interesting work, by historians such as David Roediger and Noel Ignatiev,10 discussing the role of immigrants, particularly the Irish, in propagating American racism. The idea is that the Irish and blacks competed with each other, and therefore harboured particularly acute animosities. The hinge upon which the Irish dimension of the 'wages of whiteness' debate turns is the claim that the Irish, when they arrived as poor, starving, outcast wretches, were accorded honorary black status. That is, they were despised, sneered at, and people felt superior to them. Their progress into a position of acceptance by white America constituted the next important phase.
Were the Irish and blacks comparable? In the past, anecdotal evidence of blacks denigrating the Irish has been used to endorse a romantic, politicised notion of the melancholy story of Irish exile and to emphasise the English colonialism which drove them from Erin's shores. Ignatiev, for example, argues in his book, How the Irish Became White, that the life of the Irish peasant was similar to that of the black slave in the southern states of America. While this is an exaggerated and simplistic conceptualisation, it is nevertheless true that the Irish were often presented in the receiving countries as the lowest of the low. It is often said (admittedly on the evidence of eighteenth- and early-nineteenth-century anecdotes) that a plantation owner in the south would rather use a gang of Irish workers to clear a dangerous swamp than to risk a squad of slaves-his own personal property and therefore of real monetary value-to do the job. The 'Condition of England' debates about Thomas Carlyle's 'Wild Milesians' are not very much different from the suggestion that poor Irish Catholics were undermining 'village green America' and threatened the essential democracy of the young republic. This latter point, after all, was one of the things which prompted the huge 'Know Nothing' development of the mid-1850s, when, for a while, Nativism achieved political dominance in states such as Massachusetts.11
The idea that the Irish were perhaps even lowlier than America's blacks has provided a useful way of intensifying the sorrowful image of Irish emigration and exile. More recently, this question of Irish-black relations has become an issue of more widespread scholarly study, and new research suggests a greater degree of tolerance between the pre-Famine Irish emigrant and his freeman black neighbour in the big cities of the North.12 At the heart of this debate is the invented-ness of race and the sense in which it is an ascribed, mythological label rather than an objective fact. Kenny's contribution cuts through much of the half-truth surrounding debates about the Irish, blacks and race. He places the Irish in their correct position-that is, as whites who 'presumably shared to some extent the general European propensity to attach negative connotations to "blackness", even if they had not yet encountered racial oppression in its distinctively American form'. The simplistic notion that low-level social improvement might equate with a serious degree of acceptance of a whole political structure called whiteness is rejected by Kenny in favour of a more realistic model:
Picture the case of an impoverished Irishman living with his family in an infested cellar in Manhattan's Sixth Ward. If he took a job on the docks once held by an African American, so that he could move his family up to a tiny, windowless room on the floor above, had he really 'opted for whiteness' in any meaningful sense? Or had he taken an action which, because of the racial structure of the United States, had important racial consequences?
Kenny goes on to discuss the issue of collective action, often violent, against blacks:
Those Irishmen who drove black workers from the docks [e.g. in New York] and excluded them from labour organizations knew what they were doing, and they doubtless advanced their assimilation by doing so. But the American Irish did not create the social and racial hierarchy into which they came, and to expect them to have overturned this hierarchy in the course of putting food on their tables is surely unrealistic.
The essential point, as with most history, is that the right answer does not lie at one extreme or the other. The Irish were not the wholly racist fiends that their arraigners would have us believe; nor were they ever remotely as oppressed as the blacks, which will disappoint some at the other pole.
Yet there is no question that Irish workers, as with all groups of whites, at times displayed traits that we would call racist. One reading of Irish political behaviour, especially among the urban bosses, would be to say that racism, and racially motivated policy enactment, played at least some part in the developing political culture. The problem is that evidence to the contrary can always be found for a subject as vexing as racism across something as complicated as two hundred years. The balance between racism, on the one hand, and doing deals, on the other, is captured a thousand times over by the realities of city life in America over the past two centuries. Those who would lionise Irish American politicos for their remarkable ability to grab and make use of the Democratic political machine fundamentally underestimate the sense in which both the achievement of that power and its maintenance were the product of clever negotiation as well as strong-arm tactics. Irish political power was interrupted by defeats as well as cemented by victories. Some Irish politicians did deals with blacks and non-Irish ethnic groups at the same time as others worsted them very badly. Irish politics, as Cohen and Taylor's stimulating biography of Mayor Daley of Chicago demonstrates, was as subtle or as tough as conditions dictated.
If New York's Tammany Hall is the symbol of American Irish political power, we should look to Chicago, and to Mayor Richard J. Daley, for the greatest wielding of power by any single individual of Irish parentage. As with other Irish leaders (Al Smith or Robert F. Wagner in New York), Daley's power spread beyond his city, county or state; his power, like theirs was national, but perhaps more so. Daley used circumstances, the mass media and a certain personal talent to become a man whose name was associated, in the minds of millions of Americans beyond Illinois, with a particular brand of conservative political behaviour. But his background, and his early-life show of talents, were unlikely markers for what was ultimately to be an astounding achievement: a vice-like grip on power.
The story of the Daley ethos and of his rise is revealing of an acute collective consciousness among Irish immigrants in America. In this sense, too, Daley was a typical Irishman from a typical community. There were, though, differences. First, his family was small (he was an only child) and his mother and father were quiet. Daley was noted for not being a drinker in his youth, and he worked incredibly hard to make the most of his modest talents (even in his youth he uttered the malapropisms that would draw much comment later). He went to night school to study law, following a solid Catholic education, which included a spell at De La Salle College. He fell into the Irish political machine at its lowest level, working for Big Joe McDonough at a time (in the 1930s) when a Czech American, Anton Cermak-the man who died taking a bullet aimed at Franklin D. Roosevelt-was running the show, albeit briefly.
By the end of his twenty-odd year, six-term hold on the mayoral office, Chicago had been transformed. University campuses, O'Hare international airport, a rejuvenated central business district (including what was then the world's tallest building, the Sears Tower)-these were just some of his achievements. There were others, some of them controversial. The creation of housing projects such as Cabrini Green helped to staunch the flow of whites out of the city by containing the extent of black Chicago, but the cost was in the creation of black-only neighbourhoods. The Dan Ryan Expressway, then the world's widest, acted as a border between working-class black and white districts of Chicago's south side, including the neighbourhood where Daley grew up. Daley's Chicago was just about as segregated as some of the southern cities which, in the mid-1960s, were feeling the heat from Martin Luther King Jnr. Indeed, Cohen and Taylor make the point forcibly that 'Daley's modern Chicago was built ... on an unstated foundation: commitment to racial segregation.' This is why King made Chicago his focus and temporary home when, in 1966, he took the campaign against racism north.
Daley's battle against King was conducted in a way that typified his political abilities. He refused to allow himself to become a fall guy for the black freedom struggle; this, and his other acts of conservatism, cast him into the public eye across America. As well as opposing King, Daley also stood against President Johnson's Great Society programme, and he loathed and fought against the Hippie tendency and the anti-war movement. Daley was a classic product of the ethnic ghetto, yet the English would understand him equally as a nineteenth-century Gladstonian Liberal: he believed in a religious morality that underpinned good social behaviour, and welcomed the social role of churches as a boon. He also stressed loyalty and bootstrap-tugging self-help. He was considered 'dollar honest', although he ignored the corruption of those around him. Daley was faithful to his wife. Long days and nights in Springfield, in the execution of duties for the state legislature, turned many men to gambling and prostitutes: but not Daley. While others were making hot money, sleeping around and getting drunk, Daley was demonstrating a remarkable aptitude for the tedious actuarial side of politics. Moreover, while he did not trouser millions in ill-gotten gains, he earned enough legitimate money from his numerous political jobs to raise a large family, to build a big house and to school his children well.
Daley bore all the hallmarks of a nineteenth-century boss politician displaced into the wrong century. His patch was the neighbourhood into which he was born and where his first political allegiances had been forged. His team was the White Sox. Chicago was his city. But, perhaps by being a man who seemed out of his time, he was able to be more effective than if this had not been the case. The Chicago he inherited needed to find itself a new role: it was no longer the boom-time city standing at the crossroads of American civilisation; this image was giving way, as with most Mid-West towns, to the appeal of the rejuvenated south and the wider Sun Belt. But Daley set about revitalising Chicago by rebuilding it. Government took a lead and the physical map of Chicago was changed massively under his tutelage. Despite the negative racial connotations of so much of what he masterminded, there is no doubt that the revitalised Chicago of the post-war epoch is a far cry from other decaying Mid-West cities, such as Detroit and St Louis, which atrophied consistently in the generation after 1945. As boss politicians fell by the wayside in the post-war years, the old-fashioned Daley machine lived on.
Daley rescued his Chicago and rebuilt it. He was the last big city boss. At the end of his life, it is said, he was recognising the frailty of his old-style political machine. He was losing ground, but not enough for any opponent to reap the ultimate reward, not in Chicago at least. Daley won his sixth term in 1975, the year before his death, but at the same time he lost out in a race to work closely with Jimmy Carter. Big city bossism had a limited utility beyond its bedrock; bosses such as Al Smith of New York had realised this in the 1920s. As a rule, machine politics does not go down well in the rural expanses of other parts of America. Some would argue that Daley got lucky in not being able to get closer to Carter, but it is an irrelevance to say so because death intervened anyway. While this failure struck him hard, changing times enabled one of Daley's sons to realise his father's dream-but twenty years later and with a very different presidential candidate. The son is Richard M. Daley and the presidential hopeful was Al Gore. Yet something else that brings us full circle, and demonstrates the limitations of any attempt to render Richard Daley Snr as a dinosaur, is the evidence, found in Chicago, that political dynasties are stronger in America than almost anywhere. In 1999, Richard M. Daley was elected to his fourth term as Mayor of Chicago, an office he has held since 1989. Two Daleys, both named Richard, have held the city's top job for ten terms spanning nearly half a century.
Not even the Daley lineage in Chicago is as telling an indication of the power of Irish America as the very existence of the fourth item under review, Michael Glazier's Encyclopaedia of the Irish in America. Advance praise from a variety of notable scholars adorns the back cover: Glazier's is, we cannot doubt, the book Irish America has been waiting for. Given the success of Bayor and Meagher's collection of essays, The New York Irish, it is not surprising to learn that Irish America is confident in its ability to do itself justice. And Glazier's is an impressive effort. It looks beautiful and is good value for money. Moreover, the coverage is catholic, not just Catholic. The range and quality of each of the essays are very good. There is also a remarkable consistency to the text, for which both Glazier and Notre Dame University Press deserve our thanks. The choice of subject matter is largely uncontroversial, although one could quibble with certain inclusions. I am not sure that some of the Irish-born contemporary figures are necessarily 'Irish in America' in quite the way we are asked to believe. It is impossible to imagine that, if this volume had been produced in 1865, John Mitchel would have been excluded: even though he was born and died in Ireland, he spent important years in America developing, among other things, a sympathy with slavery. The thematic entries really are excellent: for example, short histories of the Irish in each major city and every state provide a resource of unparalleled utility for scholars trying to come to terms with the huge variety of Irish communities in America. Readers will not be surprised to learn that New York, Boston and Chicago get large entries. The historical characters, each with a short contextual biography, are very well rendered. Helpful lists of further reading come with each entry. Reading this book reinforces the view that the Irish Diaspora is alive and well-and that most members seem to be writing books! In future, however, one would imagine that projects such as Glazier's will appear on CD-Rom.
These four books demonstrate the old and new traits of Irish studies in America. Glazier's Encyclopaedia captures the balance of styles most dramatically: hero-worship, as a mode of writing, has itself passed into history, yet the sense of communal pride remains. Alongside biographies of the great and the good of Irish America are some wonderful 'new' historical approaches to the key themes-nationalism, Catholicism, urban history, city politics, and so on. Cohen and Taylor's biography of Mayor Daley demonstrates both the utility and the limitations of bossism; it is a warts-and-all study that might serve as an emblem for that period of American history when Irishmen ran the great cities. The nature of urban politics, the fragility of real democracy in a world of such corruption, and the role of grace and favour are all perfectly balanced against the degree to which the big city boss could, and could not, extend himself beyond the home turf. Not many Irish bosses became President of the United States; yet no president had the degree of control over his political turf that Mayor Daley had in Chicago.
But it is Kenny's book that points the way forward for American Irish history. His is an inclusive history. It does no disservice to the grand Catholic narrative of so many other studies. Yet it manages to introduce a much less fleeting image of the Scots-Irish than is portrayed via the usual long list of Irishmen of Scots descent who made America (from Daniel Boone-who was in fact part Devon Quaker-through Andrew Jackson to Neil Armstrong and any other Scots-Irish on the moon we might mention). Irish history in America grew out of sectarian competition in the nineteenth century, something that was taken to America in the cultural baggage of the emigrants. The Scots-Irish myth was developed in the second half of the nineteenth century as some sort of antidote to the hugely important nationalist tradition focusing on St Patrick's Day and the various movements for home rule and independence. It is this sense of connection to the old country's unfinished business, more than anything else, which has made Irish American history so important to people whose Irish roots are in the distant past. But the time has come for us really to learn about the Irish in America and to move beyond a prosopography of Irish success. Kenny has provided an important marker.
1. Kurds, Italians and South Asians are among the groups recently to have received treatment in books bearing the word 'Diaspora' in their titles. See Crispin Bates (ed.), Community, Empire and Migration: South Asians in Diaspora (Palgrave, 2001).
2. William Thomas and Florian Znaniecki, The Polish Peasant in Europe and America (1918-20); Oscar Handlin, The Uprooted: The Epic Story of the Great Migrations that Made the American People (1951).
3. F. Thistlethwaite, 'Migration from Europe overseas in the nineteenth and twentieth centuries' in H. Moller (ed.), Population Movements in Modern European History (New York, 1964) and J. Bodnar, The Transplanted: A History of Immigrants in Urban America (Bloomington, 1985).
4. Donald Akenson, The Irish in Ontario: A Study in Rural History, 2nd edn (2000).
5. William V. Shannon, The American Irish: A Political and Social Portrait (Boston, 1963; 2nd edn 1989).
6. Many of the early studies were sociological in their orientations, with studies such as Henry P. Fairchild, Greek Immigration to the United States (1911), Thomas Burgess, Greeks in America (1913), Kenneth Babcock, The Scandinavian Element in the United States (1914) and Robert E. Foerster, Italian Emigrants of Our Times (1919) offering a variety of perspectives of the way in which people left Europe and established themselves in America.
7. R.A. Burchell, The Irish in San Francisco, 1848-80 (Manchester, 1979); Akenson, Irish in Ontario.
8. We might point to the example of Shannon's doom-laden description of Irish history in his American Irish, ch.1 or that given in D. Clark, The Irish in Philadelphia: Ten Generations of Urban Experience (Philadelphia, 1973).
9. Wittke, Irish in America, p.6.
10. David R. Roediger, The Wages of Whiteness: Race and the Making of the American Working Class (London and New York, 1991) and N. Ignatiev, How the Irish Became White (New York and London, 1995).
11. It should be remembered, too, that the anti-immigrant/anti-Catholic/anti-Irish feelings of the Know Nothings were shaped in large part by a fear that American democracy was under threat from immigrants who, it was argued, had no experience of upholding such cherished political traditions. That, and the sense in which the Catholic Church (and, by default, the Irish) were thought to be supporters of slavery, helps to explain why such inhospitable views developed in Massachusetts, spiritual home of abolition. See Tyler Anbinder, Nativism and Slavery: The Northern Know Nothings and the Politics of the 1850s (New York and Oxford, 1992).
12. G. Hodges, '"Desirable companions and lovers": Irish and African Americans in the Sixth Ward, 1830-70', in R.H. Bayor and T.J. Meagher (eds.), The New York Irish (Baltimore Md, 1996).
I greatly appreciate Donald MacRaild's comprehensive and enthusiastic review of The American Irish: A History.1 Such a detailed and laudatory review needs no direct response, let alone a challenge or refutation, on my part. Instead I would like to say a few words about my general perspective on Irish-American history, about the evolution and structure of the book, and about the concept of "diaspora" as applied to Irish history. My comments on these three related questions will serve as a complement to the various lines of inquiry opened up by MacRaild's reflective and wide-ranging review essay.
The type of history I write and teach is best called "transatlantic." It deals with Irish history in both Ireland and North America simultaneously, examining patterns of migration, of cultural continuity and change, and of economic and political interaction. My first attempt to write this sort of history was a doctoral dissertation in U.S. history called "Making Sense of the Molly Maguires," which eventually became a book of the same name.2 On one level, the approach was quite narrow, telling the story of a group of Irish mine workers in Pennsylvania in the 1860s and 1870s, twenty of whom were hanged for sixteen murders committed, according to the authorities, as part of a conspiracy imported directly from Ireland. On another level, the approach was very broad, for the story contained at its heart the principal themes in both Irish and Irish-American history in the mid-nineteenth century: land, famine, and emigration on the Irish side and, on the American, industrialization, the Civil War, and immigration.
The actions of the "Molly Maguires" in Pennsylvania, it became clear, would make little sense unless they were placed in an Irish as well as an American context. In Ireland, the socio-economic structure of rural society in general, and of specific regions like the north-western and north-central counties, needed close attention, as did the long history of agrarian violence embodied by such shadowy groups as the "Ribbonmen," the "Whiteboys"-and, indeed, the "Molly Maguires," who first emerged in north-central Ireland in the 1840s and 1850s. Making sense of the American phase of the violence, in turn, required a proper understanding of patterns of immigration, labor, and religious devotion, along with the politics of anti-immigrant nativism and the origins and impact of the Civil War. The "Molly Maguires" in Pennsylvania were a rare transatlantic example of a form of violent protest deeply rooted in the Irish countryside. Bringing the Irish and American strands of their story together in a single narrative resulted in the form of history that I later began to call "transatlantic." And that is the approach I have taken ever since.
While my work on the Molly Maguires examined a single dramatic episode, The American Irish applies the transatlantic approach on a much broader scale over a much longer period. The book examines Irish-American history from beginning to end, starting with the Ulster migrations of the eighteenth century and ending with the evolution of the Northern Ireland peace process, in both Ireland and the United States, in the 1990s. And, of course, it deals not just with labor protest but with all aspects of the Irish-American past. As part of the series "Studies in Modern History," edited by David Cannadine and John Morrill, the book offers a synthesis spanning three centuries, it is based mainly on secondary rather than primary sources, and it is intended primarily for students and general readers, while still being of considerable use to the specialist. It is the first synthesis of its field in a generation, and unlike any other previous book it covers not only the classic period from the 1820s to the 1920s but the entire eighteenth and twentieth centuries as well. Thus, while so many traditional accounts of the American Irish, and so much popular understanding of their history, begin with the Famine and end in the 1920s, my own book extends the analysis backward by more than a century and forward by three or four generations, integrating the entire 300-year period into a single history.
Putting a book of this sort together is a challenge. But the beauty of writing this particular book, especially given its intended audience, is that I was able to write it by teaching it. At the University of Texas I offered three undergraduate seminars on the subject, which allowed me to determine the principal themes of the book: emigration, immigration, labor, religion, politics, and nationalism, with the analytical categories of race, class, and gender deployed as appropriate. After three semesters, then, I had my themes; the only problem was that I still had not written a word (or, to be more precise, I had written six, one for the name of each theme). So I converted the class into a course of lectures, first at the University of Texas and then at Boston College, inviting the students to critique each week's material as vociferously as they wished and integrating their concerns and demands into the re-worked versions of the lectures that made their way into the evolving manuscript. There is no passage in The American Irish that was not at some point discussed in a classroom. I can therefore feel quite confident that I have written the right book for my intended audience.
Armed with my six thematic categories, I did at one point deceive myself into thinking that the book would more or less write itself. All I would have to do, I told myself, was to write a chapter on each of my six themes, with every chapter beginning in 1700 and ending in 2000. Of course, I realized soon enough that this analytical framework, while very useful for research and organization, would produce a book that was at best unwieldy and repetitive. Interestingly, the same thing had happened when I wrote my first book: I began with a thematic model for purposes of research and organization but, when it came time to write, I abandoned this model in favor of a chronological approach, interweaving the analysis into a narrative history. In my own work at least, telling the story of change over time has provided the most compelling mode of historical explanation. The American Irish is unabashedly chronological in structure, its six chapters bearing the following titles: "The Eighteenth Century," "Before the Famine," "The Famine Generation," "After the Famine," "Irish America, 1900-1940," and "Irish America Since the Second World War." Moreover, as the titles of the middle chapters are intended to convey, the Great Famine stands at the heart of the narrative.
Within the overall chronological framework, each of the six chapters examines the six basic themes of the book as a whole. Thus, as MacRaild notes, all of the chapters begin with a detailed account of the conditions in Ireland that led to mass emigration, without which the history of the Irish in America can make little sense. The remainder of each chapter examines such themes as immigrant settlement patterns, social and geographical mobility, labor and class, race and gender, and religion, politics, and nationalism, the relative weight of each theme varying by period. And every chapter (except the last) incorporates into the narrative a debate between historians, a critical point on which interpretations of history have differed. From the synthetic historian's point of view, it can be deeply satisfying to review and adjudicate controversies of this kind. But I present these often very lively debates primarily for the reader's pleasure, not my own, and I do so in the conviction (gleaned from teaching and repeatedly endorsed by my students) that excursions into historiography and interpretation are a strong incentive, rather than a distraction or a bore, to undergraduates and general readers studying history.
The debates integrated into the first two chapters concern questions of ethnic and racial identity in the United States. Chapter 1 examines (very briefly) and refutes the "Celtic Thesis," whereby the population of the new United States in 1790 has been divided into so-called "Celts" and "Saxons," the former including the Ulster Irish. According to this rather strange theory, America to the south of Philadelphia was settled largely by Celts and to the north by Saxons, determining in large measure the course of American history, including the Civil War, when southern Celts and Cavaliers were bested by northern Saxons and Roundheads (the innate tension between them providing a causal explanation that conveniently deflects attention away from slavery).3 Chapter 2, as MacRaild mentions, considers the recent, very influential debate over white racial formation ("how the Irish became white"), that had its origins in Irish-American and American labor history. My critique of the historiography in this case suggests that both the degree of Irish racial subjugation and the degree of Irish responsibility in altering the course of American race relations are open to considerable exaggeration. At the same time, much greater attention is needed in this area to the history of women and to the Irish culture from which the migrants came.4
The contentious historiography of the great Irish potato famine is considered in Chapter 3. Between 1846 and 1855 ("the famine decade") an estimated 1.1 million people died in Ireland and another 2.1 million emigrated, amounting to more than one-third of the pre-famine population. The famine and its memory remain the defining moment in Irish-American history and, contrary to the efforts of a recent generation of historians, I place it very much at the heart of Irish domestic history as well. The debate presented in Chapter 3 counterpoises the now familiar "romantic nationalist" and "revisionist" interpretations of the famine, concluding that the latter, in standing the former on its head, unwittingly reproduces a slanted and extreme position of its own (protestations of dispassionate objectivity notwithstanding). Instead of choosing between these two antagonistic and perhaps outmoded interpretations, I endorse an emerging school of historiography that, for want of a better word, I call "post-revisionist." This new interpretation concedes much of the revisionist case, for example that various demographic, socio-economic, and cultural changes (concerning population decline, language use, and patterns of landholding, marriage, and migration) actually have their origins in the early nineteenth century and beyond and not simply in the great upheaval of the 1840s. That upheaval, nonetheless, greatly magnified and accelerated these changes, such that the famine can still be seen as modern Ireland's great watershed event. The "post-revisionist" perspective rejects all talk of deliberate genocide, but points to a pervasive providentialist belief among British officials and opinion-makers that the famine represented an opportunity for re-making Ireland. The British government, moreover, bore direct responsibility for the actions it did and did not take to avert the catastrophe. In endorsing this interpretation, I have opened myself to the charge of one recent reviewer that my approach to Irish history is not only "bleak" but "old-fashioned." So it bears repeating here that "post-revisionism" in this case is not simply a euphemism for unreconstructed romantic nationalism. Far from being old-fashioned, it represents the latest and most sophisticated phase of Irish famine scholarship.5
The historians' debates in the remaining chapters have to do with questions of labor and gender, nationalism, and politics. Chapter 4 yields two debates, one on the social bases of support for Irish-American nationalism and the other on the nature of domestic service, the primary occupation for Irish-American women. In the former debate I reinsert the question of social radicalism into the traditional polarity between physical force republicanism and constitutional nationalism.6 In the latter I challenge previous historians' conception of service as a launching pad to liberation, emphasizing instead the nature of the relationship between mistress and servant.7 Although service could indeed be a training ground in American middle-class morality, it was also by definition a servile form of labor in a republican democracy whose more privileged members frowned upon servility. In Chapter 5, I examine competing theories of the rise, functions, and decline of Irish-American urban machine politics.8 Only Chapter 6, which deals with the contemporary era, lacks a historiographical controversy, reflecting the undeveloped state of the scholarship on that period.
Let me close, as promised, by saying a few words on "diaspora," the critical new concept (new in the Irish case, that is) which MacRaild discusses in opening his review essay. The term "diaspora," as he suggests, is potentially a useful one for historians of Irish migration. If a single theme dominates current historical writing, at least in the United States, it is the need to transcend the boundaries of nation-states and write global histories. The history of American immigration is no exception; indeed, it lends itself perfectly to a transnational approach. So too does the history of Irish migration, which despite the centrality of North America has always been global in scope. Given that Irish migration was a genuinely global phenomenon, moreover, it has become increasingly clear that the story of the Irish in one part of the world can no longer be told without reference to the Irish elsewhere. Historians of the American Irish, for example, clearly have much to learn from the history of the Irish in Britain, Canada, or Australia. In seeking to encompass the global dimension to their subject, historians of the Irish-like historians all over the world in the last ten years-have turned increasingly to the concept of diaspora.9
Yet, as MacRaild points out, it is striking how few historians say precisely what they mean by this term. "Diaspora" has entered academic discourse with a vengeance, but all too often as a synonym for every type of population movement, migration, or displacement, or (even more vaguely) for minority status, postcolonial identity, and the processes of globalization. Unless one makes some effort to give the term a meaning, it actually means very little; and if the term is intended simply as a loose substitute for, say, migration, then it is not clear why one would need to use the term at all. Moreover, the historian who decides to investigate the possible meanings of the term immediately confronts a wide array of theoretical literature fraught with more than its share of disagreements, contradictions, and incompatibilities. The term "diaspora" has no agreed-upon meaning. Some scholars define "diaspora" very narrowly, insisting that it be reserved for the Jews and, possibly, for African slaves and Armenian refugees.10 But where does one draw the line? A second, increasingly popular group of theorists defines the term very broadly to include "imperial," "trade," and "labor" diasporas as well as those based more traditionally on conquest, catastrophe, or exile.11 And a third group, avowedly "postmodern" by persuasion, objects to all typologies, seeing the "diasporic" as a state of mind and a form of discourse (which amount to the same thing).12
The point here is not to endorse one position or another, but simply to echo MacRaild's comment that the meaning of the term "diaspora" is not self-evident and that using it unreflectively may be worse than not using it at all. The future of Irish migration historiography undoubtedly lies in the global arena, beyond the confines of individual nation states. But until the theoretical and practical possibilities of "diaspora" have been clarified, scholars could do worse than to remember that the very nation-states we are today so busily transcending were the essential building blocks of modern history. If our object of inquiry is the past rather than the present, it might be better to compare and contrast the history of migrant groups within nation states, no matter how arbitrarily they were constructed and defined, than to transcend these real national and state differences by ignoring them. Treating the global Irish as if they all belonged to a single diaspora runs the risk of impoverishing a rich and complex history.
1. Kevin Kenny, The American Irish: A History (London and New York: Longman, 2000).
2. Kevin Kenny, Making Sense of the Molly Maguires (New York: Oxford University Press, 1998).
3. See, for example, Ellen Shapiro McDonald and Forrest McDonald, "The Ethnic Origins of the American People, 1790," William and Mary Quarterly, 3rd ser., XXXVII (January 1980): 179-99, with communications by Francis Jennings and Rowland Berthoff and a reply by the McDonalds, 3rd ser., XXXVII (October 1980), 700-3; Forrest McDonald and Grady McWhiney, "Celtic Origins of Southern Herding Practices," The Journal of Southern History, 51 (1985): 165-82; Grady McWhiney, Cracker Culture: Celtic Ways in the Old South (Tuscaloosa, Alabama: University of Alabama Press, 1988); and, for an excellent critique, Rowland Berthoff, "Celtic Mist Over the South," Journal of Southern History, 52 (November 1986): 523-46.
4. See Theodore W. Allen, The Invention of the White Race, Vol. I, Racial Oppression and Social Control (New York: Verso, 1994) and Vol. II, The Origin of Racial Oppression in Anglo-America (New York: Verso, 1997); David Roediger, The Wages of Whiteness: Race and the Making of the American Working Class (New York: Verso, 1992); Noel Ignatiev, How the Irish Became White (New York: Routledge, 1995); Matthew Frye Jacobson, Whiteness of a Different Color: European Immigrants and the Alchemy of Race (Cambridge, MA: Harvard University Press, 1998).
5. As examples of the "revisionist" perspective on the famine I cite R. Dudley Edwards and T. Desmond Williams, The Great Famine: Studies in Irish History, 1845-52 (Dublin: Browne and Nolan, 1956); Mary Daly, The Famine in Ireland (Dublin: Dublin Historical Association, 1986) and "Revisionism and Irish History: The Great Famine," in D. George Boyce and Alan O'Day, eds., The Making of Modern Irish History (London and New York: Routledge, 1996); and R.F. Foster, Modern Ireland, 1600-1972 (London: Allen Lane, 1988), 318-44. My examples of "post-revisionism" are Peter Gray, The Irish Famine (London: Thames and Hudson, 1995) and Famine, Land and Politics: British Government and Irish Society, 1843-1850 (Dublin: Irish Academic Press, 1999), both of which are especially good on providentialism; and, on famine relief among other questions, Cormac O'Gráda, Ireland: A New Economic History, 1780-1939 (Oxford: Clarendon Press, 1994), especially Chapter 8, and Black '47 and Beyond. The Great Irish Famine: History, Economy, and Memory (Princeton: Princeton University Press, 1999), especially Chapter 2. Evidence of a "romantic nationalist" perspective is harder to find, as there was never any such school of professional history. The target of much revisionism was popular rather than academic history, as exemplified by John Mitchel's infamous comment that "The almighty indeed sent the potato blight, but the English created the Famine."
6. See in particular Thomas N. Brown, Irish-American Nationalism, 1870-1890 (Philadelphia: Lippincott, 1966); Eric Foner, "Class, Ethnicity, and Radicalism in the Gilded Age: The Land League in Irish-America," in Eric Foner, Politics and Ideology in the Age of the Civil War (New York: Oxford University Press, 1980); Victor A. Walsh, "'A Fanatic Heart': The Cause of Irish-American Nationalism in Pittsburgh During the Gilded Age," Journal of Social History, 15 (1981): 187-204.
7. See in particular Hasia Diner, Erin's Daughters in America: Irish Immigrant Women in the Nineteenth Century (Baltimore, MD: The Johns Hopkins University Press, 1983); Janet Nolan, Ourselves Alone: Women's Emigration from Ireland, 1885-1920 (Lexington, KY: The University Press of Kentucky, 1989).
8. By far the best account is Steven P. Erie, Rainbow's End: Irish Americans and the Dilemmas of Urban Machine Politics, 1840-1985 (Berkeley, CA: University of California Press, 1988).
9. I have spent much of the last year working on a historiographical review essay critically examining the concept of diaspora and its applicability to the Irish case.
10. This, indeed, was its standard meaning before the 1960s.
11. Perhaps the most influential are William Safran, "Diasporas in Modern Societies: Myths of Homeland and Return," Diaspora, 1 (Spring 1991): 83-99, and Robin Cohen, Global Diasporas: An Introduction (Seattle: University of Washington Press, 1997).
12. See especially James Clifford, "Diasporas," Cultural Anthropology, 9 (1994): 302-38. | 1 | 19 |
Plant-based diets are good for you, for animals and for the environment. But the strictest of them, veganism, can be challenging to follow, especially when you need to meet a higher calorie goal. While it's true that animal-based foods typically contain more calories than plant foods, there are plenty of options for vegans looking to gain weight or build muscle. The key is to find healthy, whole-food sources rather than relying on vegan "junk food."
Focusing on calorie-dense foods such as nuts, oils and avocado can help you get enough calories on a vegan diet.
Vegan Diet Foods
If you're just starting out on your vegan diet, it's helpful to first know what foods you can and can't eat. Then you can zero in on the foods that will give you the most calorie bang for your buck.
Vegan foods fall into six general categories:
- Fruits and vegetables
- Legumes, nuts and seeds, such as chickpeas, lentils, chia seeds and almonds
- Grains, such as bread, pasta, rice, quinoa and bulgur
- Tofu, seitan and tempeh
- Plant-based dairy substitutes, such as nut and coconut milks and yogurt
- Vegan products, including meat substitutes, vegan mayo and vegan ice cream
Vegans do not eat:
- Fish and shellfish
- Dairy products
- Eggs and anything made with eggs
- Honey (bees make honey)
- White sugar (it may be processed with bone char)
- Marshmallows, gummy candies and anything else made with gelatin (derived from animal byproducts)
- Salad dressings, which may contain lecithin (an emulsifier often derived from animal tissues or egg yolks)
- Most beer (may be processed with fish gelatin, egg whites or seashells)
Calorie-Dense Vegan Foods
From that list, you can identify calorie-dense and healthy vegan foods to focus on in your diet. Here are some examples, per the USDA National Nutrient Database, and how they compare to animal foods:
- One-half avocado weighing 3.5 ounces contains 160 calories — about the same amount as a cup of whole milk.
- One ounce of walnuts provides 180 calories, which is slightly more than 1.5 ounces of cheddar cheese.
- Two tablespoons of creamy peanut butter have 190 calories — 25 calories more than a 3.5-ounce chicken breast without skin.
- One tablespoon of olive oil provides 119 calories — about the same as 3 ounces of sockeye salmon.
You can see now that there are plenty of plant foods that provide calories similar to animal foods, and in even smaller portions. Other calorie-dense foods for vegans include:
- Quinoa: 222 calories per cup cooked
- Dried fruit: 247 calories per half-cup
- Black beans: 227 calories per cup cooked
- Sweet potatoes: 180 calories per cup cooked
- Brown rice: 216 calories per cup cooked
- Coconut oil: 232 calories in 2 tablespoons
High-Fat Vegan Foods
The most caloric vegan foods are high in fat, such as nuts, oils and avocado. But unlike animal fats, high-fat vegan foods contain mostly monounsaturated and polyunsaturated fats. These plant fats, consumed in moderation, can actually benefit your heart, in contrast to the saturated fat found in animal foods like meat and dairy.
Both types of unsaturated fats can help lower your low-density lipoprotein, or LDL, cholesterol, according to Harvard Health Publishing. LDL is the unhealthy cholesterol that can clog or block your arteries and contribute to heart disease.
High-Protein Vegan Foods
Another misconception about a vegan diet is that it's difficult to get enough protein, especially if you're interested in building muscle. But just check out vegan competitive bodybuilders Torre Washington and Hin Chun Chui. Grains, legumes, nuts, seeds and even some vegetables are rich in healthy plant-based protein.
According to Healthline, some examples include:
- Seitan: 25 grams per 3.5 ounces
- Lentils: 18 grams per cooked cup
- Chickpeas: 15 grams per cooked cup
- Hempseed: 10 grams per ounce
- Spirulina: 8 grams per 2 tablespoons
- Green peas: 9 grams per cooked cup
- Nutritional yeast: 14 grams per ounce
As long as you plan your meals right, it's easy to eat a high-calorie, high-protein vegan diet.
High-Calorie Vegan Foods
A trap a lot of vegans fall into is relying on processed and prepared foods to get by. Just like non-vegan foods, there are good foods and bad foods. Even vegan foods can be highly processed and full of sugar and refined grains that spike your blood sugar. So it's crucial to make sure you avoid these foods, even if they're high in calories.
Meat substitutes and frozen vegan meals are go-to protein sources that taste good and are easy to prepare. But they aren't always healthy. Some brands are high in sodium and contain added sugar, additives and preservatives. They aren't terrible for you, but they are processed foods, so they don't provide a lot of nutrition.
Vegan Junk Foods
Lots of sugary, salty snacks you can find in bags and boxes in the aisles of any supermarket are vegan and high in calories. For example, many of these foods are vegan "by accident":
- Fruit snacks
- Sugary granola bars
- French fries
- Chocolate peanut butter cups
- Frozen pies
Not all brands of these foods are vegan, but many of them are. Don't attempt to add calories to your diet with these foods. You will get little nutrition, and eating too much junk food can negatively affect your energy levels, cause you to gain weight and damage your overall health.
Vegan Tips for More Calories
When you're planning your meals, make a list of all the healthy high-calorie ingredients you now know about. Then try to include some in each meal. There are lots of creative vegan recipes to try, so you shouldn't ever run out of delicious high-calorie meals. However, you may have to eat more meals during the day.
Trainer and author Karina Inkster, MA, PTS, fits in almost 4,000 vegan calories in a day by eating two breakfasts, two lunches and two dinners for six meals a day. Do the math and you'll see that each of these six meals adds up to about 650 calories. Eating 650 calories every few hours is a lot easier than eating 1,000 calories at each meal.
Inkster recommends logging your food so you can see where you are at and where you need to make changes if you're not getting enough calories or enough of certain nutrients. It also keeps you accountable for making healthy choices and not falling prey to the lure of empty calories.
- Vegan Heaven: What Do Vegans Eat?
- Dr. Karen S. Lee: What Vegans Don't Eat
- Healthline: 11 High-Calorie Vegan Foods for Healthy Weight Gain
- Harvard Health Publishing: The Truth About Fats: The Good, the Bad, and the In-Between
- Forks Over Knives: How I Fuel Myself With a Plant-Based Diet as a Competitive Bodybuilder
- South China Morning Post: Vegan Hong Kong Bodybuilder Hin Chun Chui Wrestles Protein Myths and Shows You Don’t Need Meat or Dairy to Be a Winner
- Healthline: The 17 Best Protein Sources for Vegans and Vegetarians
- Sweet Earth Foods: Cauliflower Mac
- PETA: Top 20 Accidentally Vegan Foods
- Karina Inkster: Food Logging Part 4: My 3000+ Calorie per Day Vegan Diet and What, Exactly, I Eat
- Karina Inkster: Food Logging Part 1: Why You Should Log Your Food, Especially If You're Vegan
- USDA: Basic Report: 09037, Avocados, Raw, All Commercial Varieties
- USDA: Full Report (All Nutrients): 45282262, Whole Milk Vitamin D, Upc: 070784006024
- USDA: Basic Report: 12157, Nuts, Walnuts, Dry Roasted, with Salt Added
- USDA: Basic Report: 01270, Cheese, Cheddar, Sharp, Sliced
- USDA: Basic Report: 16398, Peanut Butter, Smooth Style, Without Salt
- USDA: Basic Report: 05064, Chicken, Broilers or Fryers, Breast, Meat Only, Cooked, Roasted
- USDA: Basic Report: 04053, Oil, Olive, Salad or Cooking
- USDA: Basic Report: 15085, Fish, Salmon, Sockeye, Raw | 1 | 7 |
Overview of Smart Grid Technology and Its Operation and Application (for the Existing Power System)

Nowadays, the electric power system is undergoing a radical transformation worldwide: electricity supply is being decarbonised, aging assets are being replaced, and natural resources are being managed with new information and communication technologies (ICT). Smart grid technology is essential for providing easy integration and reliable service to consumers. A smart grid is a self-sufficient electricity network based on digital automation technology for monitoring, control, and analysis within the supply chain. Such a system can locate problems in the existing network very quickly, reduce the required workforce, and target sustainable, reliable, safe, and high-quality electricity for all consumers.

Overview of Smart Grid Technology

The smart grid can be defined as a smart electrical network that combines the electrical network with smart digital communication technology. A smart grid is capable of providing electrical power from multiple, widely distributed sources, such as wind turbines, solar power systems, and perhaps even plug-in hybrid electric vehicles.

Smart Grid Components

To achieve a modernized smart grid, a wide range of technologies must be developed and implemented. These technologies are generally grouped into the following key areas, discussed below.

Intelligent Appliances: Intelligent appliances are capable of deciding when to consume energy based on customer pre-set preferences. This can go a long way toward reducing peak loads, which have an impact on electricity generation costs. For example, smart sensors such as temperature sensors are used in thermal stations to control boiler temperature based on predefined temperature levels.

Smart Power Meters: Smart meters provide two-way communication between power providers and end-user consumers to automate billing data collection, detect device failures, and dispatch repair crews to the exact location much faster.

Smart Substations: Substations monitor and control both non-critical and critical operational data such as power status, power factor performance, breaker status, security, and transformer status. Substations are used to transform voltage several times and in many locations, providing safe and reliable delivery of energy. Smart substations are also necessary for splitting the flow of electricity into many directions. Substations require large and very expensive equipment to operate, including transformers, switches, capacitor banks, circuit breakers, network protection relays, and several others.

Superconducting Cables: These are used for long-distance power transmission, together with automated monitoring and analysis tools capable of detecting faults, or even predicting cable failures, based on real-time weather data and the outage history.

Integrated Communications: The key to smart grid technology is integrated communications, which must be fast enough to meet the real-time needs of the system. Depending upon the need, many different technologies are used for smart grid communication, such as Programmable Logic Controller (PLC), wireless, cellular, SCADA (Supervisory Control and Data Acquisition), and BPL.

Key Considerations for Integrated Communication
- Ease of deployment
- Latency
- Standards
- Data carrying capacity
- Security
- Network coverage capability

Phasor Measurement Units (PMU): These are used to measure the electrical waves on an electricity grid using a common time source for synchronization. The common time source allows synchronized real-time measurements from multiple remote measurement points on the grid.

Benefits of Smart Grid

- Integrates isolated technologies: the smart grid enables better energy management
- Protective management of the electrical network during emergency situations
- Better supply/demand response
- Better power quality
- Reduced carbon emissions
- Increased demand for energy: requires more complex and critical solutions with better energy management
- Renewables integration

Disadvantages of Smart Grid

Privacy problems: The biggest concern in a smart grid system is security. The grid uses smart meters, which are automated and provide communication between the power provider and the customer. Some types of smart meters can be hacked, and an attacker may then control the power supply of a single building or an entire neighborhood.

Grid volatility: The smart grid network has much intelligence at its edges, that is, at the entry point and at the end user's meter, but it has insufficient intelligence in the middle, governing the switching functions. This lack of integrated development makes the grid a volatile network. Engineering resources have been poured into power generation and consumer energy consumption, which are the edges of the network. However, if too many nodes are added to the network before the software intelligence to control it has been developed, the conditions will lead to a volatile smart grid.

Applications of Smart Grid

The smart grid plays an important role in modern smart technologies. The following are the most common applications of smart grid technology.

- Future applications and services: real-time market; business and customer care; application data flow to/from end-user energy management systems; smart charging of PHEVs and V2G; application data flow for PHEVs; distributed generation and storage; monitoring of distributed assets
- Grid optimization: self-healing grid (fault protection, outage management, dynamic control of voltage, weather data integration, centralized capacitor bank control, distribution and substation automation, advanced sensing, automated feeder reconfiguration)
- Demand response: advanced demand maintenance and demand response, load forecasting and shifting
- AMI (advanced metering infrastructure): remote meter reading, theft detection, customer prepay, mobile workforce management

IoT-Based Electricity Energy Meter Reading Through the Internet

Software requirements: Keil compiler; language: Embedded C or Assembly.
Hardware requirements: pre-programmed microcontroller (AT89C51/S52), energy meter, MAX232, resistors, GSM module, LCD (16×2), LED, crystal oscillator, capacitors, diodes, transformer, regulator, and load.

The main objective of this project is to develop an IoT (Internet of Things) based system that displays the energy meter reading, that is, the units consumed and the cost of consumption, over the internet in chart and gauge format. In this project, a digital energy meter is used whose blinking LED signal is interfaced to a microcontroller of the 8051 family through an LDR. The meter LED flashes 3,200 times per unit (kWh) of energy consumed.
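To make the pulse-counting idea concrete, the following is a minimal, hypothetical firmware sketch in Keil-style Embedded C for an 8051 (AT89C51/S52-class) microcontroller. It is only an illustration of the counting logic, not the original project's code: it assumes the LDR output produces one falling edge on the external interrupt pin INT0 (P3.2) per meter LED flash, and the LCD and GSM driver routines are left as stubs to be implemented separately.

/* Minimal pulse-counting sketch for the energy-meter project (Keil C51
 * style, 8051/AT89C51-class MCU). Illustrative only, not the original
 * project's code. Assumptions: the LDR circuit produces one falling edge
 * on INT0 (P3.2) per meter LED flash, and the LCD/GSM driver routines
 * are implemented elsewhere (stubbed below). */
#include <reg51.h>

#define PULSES_PER_KWH 3200UL        /* meter LED flashes per unit (kWh) */

static volatile unsigned long pulse_count = 0;

/* External interrupt 0: one interrupt per LED flash sensed by the LDR */
void meter_pulse_isr(void) interrupt 0
{
    pulse_count++;
}

/* Hypothetical stubs: replace with real 16x2 LCD and GSM/RS232 drivers */
static void lcd_show_units(unsigned int units) { units = units; }
static void gsm_send_units(unsigned int units) { units = units; }

void main(void)
{
    unsigned long snapshot;
    unsigned int units;

    IT0 = 1;                         /* trigger INT0 on falling edge */
    EX0 = 1;                         /* enable external interrupt 0  */
    EA  = 1;                         /* global interrupt enable      */

    while (1) {
        EA = 0;                      /* copy the 32-bit counter atomically */
        snapshot = pulse_count;
        EA = 1;

        units = (unsigned int)(snapshot / PULSES_PER_KWH);
        lcd_show_units(units);       /* local display on the 16x2 LCD */
        gsm_send_units(units);       /* push the reading to the modem */
    }
}

Dividing the accumulated pulse count by 3,200 gives the whole units (kWh) consumed; multiplying the units by the applicable tariff gives the cost figure shown in the chart and gauge display.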
Each time the meter LED flashes, the LDR sensor gives an interrupt to the programmed microcontroller.

[Figure: block diagram of the IoT-based smart energy meter]

The microcontroller takes this reading and displays it on an LCD interfaced to it. The reading of the energy meter is also sent to a GSM modem, fed by the microcontroller via a level-shifter IC over an RS232 link. An internet-enabled SIM in the modem transmits the data directly to a dedicated web page for display, or to the customer's mobile phone, anywhere in the world, in multi-level graphical format.

Thus, this is a brief overview of smart grid technology and one of its applications.
[Figure: Acute angle closure glaucoma of a person's right eye. Note the mid-sized pupil, which is non-reactive to light, and redness of the white part of the eye.]
Usual onset: Gradual, or sudden
Risk factors: Increased pressure in the eye, family history, high blood pressure
Diagnostic method: Dilated eye examination
Differential diagnosis: Uveitis, trauma, keratitis, conjunctivitis
Treatment: Medication, laser, surgery
Glaucoma is a group of eye diseases that result in damage to the optic nerve (or retina) and cause vision loss. The most common type is open-angle (wide angle, chronic simple) glaucoma, in which the drainage angle for fluid within the eye remains open, with less common types including closed-angle (narrow angle, acute congestive) glaucoma and normal-tension glaucoma. Open-angle glaucoma develops slowly over time and there is no pain. Peripheral vision may begin to decrease, followed by central vision, resulting in blindness if not treated. Closed-angle glaucoma can present gradually or suddenly. The sudden presentation may involve severe eye pain, blurred vision, mid-dilated pupil, redness of the eye, and nausea. Vision loss from glaucoma, once it has occurred, is permanent. Eyes affected by glaucoma are referred to as being glaucomatous.
Risk factors for glaucoma include increasing age, high pressure in the eye, a family history of glaucoma, and use of steroid medication. For eye pressures, a value of 21 mmHg or 2.8 kPa above atmospheric pressure (760 mmHg) is often used, with higher pressures leading to a greater risk. However, some may have high eye pressure for years and never develop damage. Conversely, optic nerve damage may occur with normal pressure, known as normal-tension glaucoma. The mechanism of open-angle glaucoma is believed to be the slow exit of aqueous humor through the trabecular meshwork, while in closed-angle glaucoma the iris blocks the trabecular meshwork. Diagnosis is achieved by performing a dilated eye examination. Often, the optic nerve shows an abnormal amount of cupping.
If treated early, it is possible to slow or stop the progression of disease with medication, laser treatment, or surgery. The goal of these treatments is to decrease eye pressure. A number of different classes of glaucoma medication are available. Laser treatments may be effective in both open-angle and closed-angle glaucoma. A number of types of glaucoma surgeries may be used in people who do not respond sufficiently to other measures. Treatment of closed-angle glaucoma is a medical emergency.
About 70 million people have glaucoma globally, with about two million patients in the United States. It is the leading cause of blindness in African Americans. It occurs more commonly among older people, and closed-angle glaucoma is more common in women. Glaucoma has been called the "silent thief of sight", because the loss of vision usually occurs slowly over a long period of time. Worldwide, glaucoma is the second-leading cause of blindness after cataracts. Cataracts caused 51% of blindness in 2010, while glaucoma caused 8%. The word "glaucoma" is from the Ancient Greek glaukos, which means "shimmering." In English, the word was used as early as 1587 but did not become commonly used until after 1850, when the development of the ophthalmoscope allowed doctors to see the optic nerve damage.
Signs and symptoms
As open-angle glaucoma is usually painless with no symptoms early in the disease process, screening through regular eye exams is important. The only signs are gradually progressive visual field loss and optic nerve changes (increased cup-to-disc ratio on fundoscopic examination).
About 10% of people with closed angles present with acute angle closure characterized by sudden ocular pain, seeing halos around lights, red eye, very high intraocular pressure (>30 mmHg (4.0 kPa)), nausea and vomiting, suddenly decreased vision, and a fixed, mid-dilated pupil. It is also associated with an oval pupil in some cases. Acute angle closure is an emergency.
Opaque specks may occur in the lens in glaucoma, known as glaukomflecken.
Ocular hypertension (increased pressure within the eye) is the most important risk factor for glaucoma, but only about 50% of people with primary open-angle glaucoma actually have elevated ocular pressure. Ocular hypertension—an intraocular pressure above the traditional threshold of 21 mmHg (2.8 kPa) or even above 24 mmHg (3.2 kPa)—is not necessarily a pathological condition, but it increases the risk of developing glaucoma. One study found a conversion rate of 18% within five years, meaning fewer than one in five people with elevated intraocular pressure will develop glaucomatous visual field loss over that period of time. It is a matter of debate whether every person with an elevated intraocular pressure should receive glaucoma therapy; currently, most ophthalmologists favor treatment of those with additional risk factors.
Open-angle glaucoma accounts for 90% of glaucoma cases in the United States. Closed-angle glaucoma accounts for fewer than 10% of glaucoma cases in the United States, but as many as half of glaucoma cases in other nations (particularly East Asian countries).
No clear evidence indicates that vitamin deficiencies cause glaucoma in humans. As such, oral vitamin supplementation is not a recommended treatment. Caffeine increases intraocular pressure in those with glaucoma, but does not appear to affect normal individuals.
Many people of East Asian descent are prone to developing angle closure glaucoma because of shallower anterior chamber depths, with the majority of cases of glaucoma in this population consisting of some form of angle closure. Higher rates of glaucoma have also been reported for Inuit populations, compared to White populations, in Canada and Greenland.
Positive family history is a risk factor for glaucoma. The relative risk of having primary open-angle glaucoma (POAG) is increased about two- to four-fold for people who have a sibling with glaucoma. Glaucoma, particularly primary open-angle glaucoma, is associated with mutations in several genes, including MYOC, ASB10, WDR36, NTF4, TBK1, and RPGRIP1, although most cases of glaucoma do not involve these genetic mutations. Normal-tension glaucoma, which comprises one-third of POAG, is also associated with genetic mutations (including OPA1 and OPTN genes).
Various rare congenital/genetic eye malformations are associated with glaucoma. Occasionally, failure of the normal third-trimester gestational atrophy of the hyaloid canal and the tunica vasculosa lentis is associated with other anomalies. Angle closure-induced ocular hypertension and glaucomatous optic neuropathy may also occur with these anomalies and has been modelled in mice.
Other factors can cause glaucoma, known as "secondary glaucoma", including prolonged use of steroids (steroid-induced glaucoma); conditions that severely restrict blood flow to the eye, such as severe diabetic retinopathy and central retinal vein occlusion (neovascular glaucoma); ocular trauma (angle-recession glaucoma); and inflammation of the middle layer of the pigmented vascular eye structure (uveitis), known as uveitic glaucoma.
The underlying cause of open-angle glaucoma remains unclear. Several theories exist on its exact etiology. However, the major risk factor for most glaucomas and the focus of treatment is increased intraocular pressure. Intraocular pressure is a function of production of liquid aqueous humor by the ciliary processes of the eye, and its drainage through the trabecular meshwork. Aqueous humor flows from the ciliary processes into the posterior chamber, bounded posteriorly by the lens and the zonules of Zinn, and anteriorly by the iris. It then flows through the pupil of the iris into the anterior chamber, bounded posteriorly by the iris and anteriorly by the cornea. From here, the trabecular meshwork drains aqueous humor via the scleral venous sinus (Schlemm's canal) into scleral plexuses and general blood circulation.
In open/wide-angle glaucoma, flow is reduced through the trabecular meshwork, due to the degeneration and obstruction of the trabecular meshwork, whose original function is to absorb the aqueous humor. Loss of aqueous humor absorption leads to increased resistance and thus a chronic, painless buildup of pressure in the eye.
In closed/narrow-angle glaucoma, the iridocorneal angle is completely closed because of forward displacement of the final roll and root of the iris against the cornea, resulting in the inability of the aqueous fluid to flow from the posterior to the anterior chamber and then out through the trabecular meshwork. This accumulation of aqueous humor causes an acute increase in pressure and pain.
Degeneration of axons of the retinal ganglion cells (the optic nerve) is a hallmark of glaucoma. The inconsistent relationship of glaucomatous optic neuropathy with increased intraocular pressure has provoked hypotheses and studies on anatomic structure, eye development, nerve compression trauma, optic nerve blood flow, excitatory neurotransmitter, trophic factor, retinal ganglion cell/axon degeneration, glial support cell, immune system, aging mechanisms of neuron loss, and severing of the nerve fibers at the scleral edge.
Screening for glaucoma is usually performed as part of a standard eye examination performed by optometrists and ophthalmologists. Testing for glaucoma includes measurements of the intraocular pressure using tonometry, anterior chamber angle examination or gonioscopy as well as examination of the optic nerve to discern visible damage, changes in the cup-to-disc ratio, rim appearance and vascular change. A formal visual field test is performed. The retinal nerve fiber layer can be assessed with imaging techniques such as optical coherence tomography, scanning laser polarimetry or scanning laser ophthalmoscopy (Heidelberg retinal tomogram). Visual field loss is the most specific sign of the condition, though it occurs later in the course of the disease.
As all methods of tonometry are sensitive to corneal thickness, methods such as Goldmann tonometry may be augmented with pachymetry to measure the central corneal thickness (CCT). A thicker cornea can result in a pressure reading higher than the true pressure but a thinner cornea can produce a pressure reading lower than the true pressure.
Because pressure-measurement error can be caused by more than just CCT (such as by corneal hydration or elastic properties), it is impossible to adjust pressure measurements based only on CCT measurements. The frequency-doubling illusion can also be used to detect glaucoma with the use of a frequency-doubling technology perimeter.
Examination for glaucoma also gives attention to gender, race, history of drug use, refraction, inheritance, and family history.
|Test||What the test examines||Eye drops used||Physical contact with the eye||Procedure|
|Tonometry||Inner eye pressure||Maybe||Maybe||Eye drops may be used to numb the eye. The examiner then uses a tonometer to measure the inner pressure of the eye through pressure applied by a puff of warm air or a tiny tool.|
|Ophthalmoscopy (dilated eye examination)||Shape and color of the optic nerve||Yes||No||Eye drops are used to dilate the pupil. Using a small magnification device with a light on the end, the examiner can examine the magnified optic nerve.|
|Perimetry (visual field test)||Complete field of vision||No||No||The patient looks straight ahead and is asked to indicate when light passes the patient's peripheral field of vision. This allows the examiner to map the patient's field of vision.|
|Gonioscopy||Angle in the eye where the iris meets the cornea||Yes||Yes||Eye drops are used to numb the eye. A hand-held contact lens with a mirror is placed gently on the eye to allow the examiner to see the angle between the cornea and the iris.|
|Pachymetry||Thickness of the cornea||No||Yes||The examiner places a pachymeter gently on the front of the eye to measure its thickness.|
|Nerve fiber analysis||Thickness of the nerve fiber layer||Maybe||Maybe||Using one of several techniques, the nerve fibers are examined.|
Glaucoma has been classified into specific types:
Primary glaucoma and its variants
Primary glaucoma (H40.1-H40.2)
- Primary open-angle glaucoma, also known as chronic open-angle glaucoma, chronic simple glaucoma, glaucoma simplex
- High-tension glaucoma
- Low-tension glaucoma
- Primary angle closure glaucoma, also known as primary closed-angle glaucoma, narrow-angle glaucoma, pupil-block glaucoma, acute congestive glaucoma
- Acute angle closure glaucoma (aka AACG)
- Chronic angle closure glaucoma
- Intermittent angle closure glaucoma
- Superimposed on chronic open-angle closure glaucoma ("combined mechanism" – uncommon)
Variants of primary glaucoma
- Pigmentary glaucoma
- Exfoliation glaucoma, also known as pseudoexfoliative glaucoma or glaucoma capsulare
- Primary juvenile glaucoma
Primary angle closure glaucoma is caused by contact between the iris and trabecular meshwork, which in turn obstructs outflow of the aqueous humor from the eye. This contact between iris and trabecular meshwork (TM) may gradually damage the function of the meshwork until it fails to keep pace with aqueous production, and the pressure rises. In over half of all cases, prolonged contact between iris and TM causes the formation of synechiae (effectively "scars").
These cause permanent obstruction of aqueous outflow. In some cases, pressure may rapidly build up in the eye, causing pain and redness (symptomatic, or so-called "acute" angle closure). In this situation, the vision may become blurred, and halos may be seen around bright lights. Accompanying symptoms may include a headache and vomiting.
Diagnosis is made from physical signs and symptoms: pupils mid-dilated and unresponsive to light, cornea edematous (cloudy), reduced vision, redness, and pain. However, the majority of cases are asymptomatic. Prior to the very severe loss of vision, these cases can only be identified by examination, generally by an eye care professional.
Once any symptoms have been controlled, the first line (and often definitive) treatment is laser iridotomy. This may be performed using either Nd:YAG or argon lasers, or in some cases by conventional incisional surgery. The goal of treatment is to reverse and prevent contact between the iris and trabecular meshwork. In early to moderately advanced cases, iridotomy is successful in opening the angle in around 75% of cases. In the other 25%, laser iridoplasty, medication (pilocarpine) or incisional surgery may be required.
Primary open-angle glaucoma is when optic nerve damage results in a progressive loss of the visual field. This is associated with increased pressure in the eye. Not all people with primary open-angle glaucoma have eye pressure that is elevated beyond normal, but decreasing the eye pressure further has been shown to stop progression even in these cases.
The increased pressure is caused by trabecular meshwork blockage. Because the microscopic passageways are blocked, the pressure builds up in the eye and causes imperceptible very gradual vision loss. Peripheral vision is affected first, but eventually the entire vision will be lost if not treated.
Diagnosis is made by looking for cupping of the optic nerve. Prostaglandin agonists work by opening uveoscleral passageways. Beta-blockers, such as timolol, work by decreasing aqueous formation. Carbonic anhydrase inhibitors decrease bicarbonate formation from ciliary processes in the eye, thus decreasing the formation of aqueous humor. Parasympathetic analogs are drugs that work on the trabecular outflow by opening up the passageway and constricting the pupil. Alpha 2 agonists (brimonidine, apraclonidine) both decrease fluid production (via inhibition of AC) and increase drainage.
Developmental glaucoma (Q15.0)
- Primary congenital glaucoma
- Infantile glaucoma
- Glaucoma associated with hereditary or familial diseases
Secondary glaucoma (H40.3-H40.6)
- Inflammatory glaucoma
- Uveitis of all types
- Fuchs heterochromic iridocyclitis
- Phacogenic glaucoma
- Angle-closure glaucoma with mature cataract
- Phacoanaphylactic glaucoma secondary to rupture of lens capsule
- Phacolytic glaucoma due to phacotoxic meshwork blockage
- Subluxation of lens
- Glaucoma secondary to intraocular hemorrhage
- Hemolytic glaucoma, also known as erythroclastic glaucoma
- Traumatic glaucoma
- Angle recession glaucoma: Traumatic recession on anterior chamber angle
- Postsurgical glaucoma
- Aphakic pupillary block
- Ciliary block glaucoma
- Neovascular glaucoma (see below for more details)
- Drug-induced glaucoma
- Corticosteroid induced glaucoma
- Alpha-chymotrypsin glaucoma. Postoperative ocular hypertension from use of alpha chymotrypsin.
- Glaucoma of miscellaneous origin
- Associated with intraocular tumors
- Associated with retinal detachments
- Secondary to severe chemical burns of the eye
- Associated with essential iris atrophy
- Toxic glaucoma
Neovascular glaucoma, an uncommon type of glaucoma, is difficult or nearly impossible to treat, and is often caused by proliferative diabetic retinopathy (PDR) or central retinal vein occlusion (CRVO). It may also be triggered by other conditions that result in ischemia of the retina or ciliary body. Individuals with poor blood flow to the eye are highly at risk for this condition.
Neovascular glaucoma results when new, abnormal vessels begin developing in the angle of the eye that begin blocking the drainage. People with such condition begin to rapidly lose their eyesight. Sometimes, the disease appears very rapidly, especially after cataract surgery procedures. A new treatment for this disease, as first reported by Kahook and colleagues, involves the use of a novel group of medications known as anti-VEGF agents. These injectable medications can lead to a dramatic decrease in new vessel formation and, if injected early enough in the disease process, may lead to normalization of intraocular pressure. Currently, there are no high-quality controlled trials demonstrating a beneficial effect of anti-VEGF treatments in lowering IOP in people with neovascular glaucoma.
Toxic glaucoma is open-angle glaucoma with an unexplained, significant rise of intraocular pressure of unknown pathogenesis. Intraocular pressure can sometimes reach 80 mmHg (11 kPa). It characteristically manifests as ciliary body inflammation and massive trabecular oedema that sometimes extends to Schlemm's canal. This condition is differentiated from malignant glaucoma by the presence of a deep and clear anterior chamber and a lack of aqueous misdirection. Also, the corneal appearance is not as hazy. A reduction in visual acuity can occur, followed by neuroretinal breakdown.
Associated factors include inflammation, drugs, trauma and intraocular surgery, including cataract surgery and vitrectomy procedures. Gede Pardianto (2005) reported on four patients who had toxic glaucoma. One of them underwent phacoemulsification with small particle nucleus drops. Some cases can be resolved with some medication, vitrectomy procedures or trabeculectomy. Valving procedures can give some relief, but further research is required.
Absolute glaucoma (H44.5) is the end stage of all types of glaucoma. The eye has no vision, absence of pupillary light reflex and pupillary response, and has a stony appearance. Severe pain is present in the eye. The treatment of absolute glaucoma is a destructive procedure like cyclocryoapplication, cyclophotocoagulation, or injection of 99% alcohol.
Glaucoma is an umbrella term for eye conditions that damage the optic nerve and that can lead to a loss of vision. The main cause of damage to the optic nerve is intraocular pressure (IOP), excessive fluid pressure within the eye, which can be caused by factors such as blockage of drainage ducts and narrowing or closure of the angle between the iris and cornea.
Glaucoma is primarily categorized as either open-angle or closed-angle (or angle-closure). In open-angle glaucoma, the iris meets the cornea normally, allowing the fluid from inside the eye to drain, thus relieving the internal pressure. When this angle is narrowed or closed, pressure increases over time, causing damage to the optic nerve and leading to blindness.
Primary open-angle glaucoma (also called primary or chronic glaucoma) involves the slow clogging of drainage canals resulting in increased eye pressure, which causes progressive optic nerve damage. This manifests as a gradual loss of the visual field, starting with a loss of peripheral vision, but eventually all vision will be lost if the condition is not treated. This is the most common type of glaucoma, accounting for 90% of cases in the United States, but is less prevalent in Asian countries. Onset is slow and painless, and loss of vision is gradual and irreversible.
With narrow-angle glaucoma (also called closed-angle glaucoma), the iris bows forward, narrowing the angle that drains the eye, increasing pressure within the eye. If untreated, it can lead to the medical emergency of angle-closure glaucoma.
With angle-closure glaucoma (also called closed-angle, primary angle-closure or acute glaucoma), the iris bows forward and causes physical contact between the iris and trabecular meshwork, which blocks the outflow of aqueous humor from within the eye. This contact may gradually damage the draining function of the meshwork until it fails to keep pace with aqueous production, and the intraocular pressure rises. The onset of symptoms is sudden and causes pain and other noticeable symptoms, and the condition is treated as a medical emergency. Unlike open-angle glaucoma, angle-closure glaucoma is a result of the closing of the angle between the iris and cornea. This tends to occur in the farsighted, who have smaller anterior chambers, making physical contact between the iris and trabecular meshwork more likely. A variety of tests may be performed to detect those at risk of angle-closure glaucoma.
Normal-tension glaucoma (NTG, also called low-tension or normal-pressure glaucoma) is a condition in which the optic nerve is damaged although intraocular pressure (IOP) is in the normal range (12 to 22 mmHg (1.6 to 2.9 kPa)). Individuals with a family history of NTG, those of Japanese ancestry, those with a history of systemic heart disease and those with Flammer syndrome are at an elevated risk of developing NTG. The cause of NTG is unknown.
Secondary glaucoma refers to any case in which another disease, trauma, drug or procedure causes increased eye pressure, resulting in optic nerve damage and vision loss, which may be mild or severe. This may be the result of an eye injury, inflammation, a tumor or advanced cases of cataracts or diabetes. It can also be caused by certain drugs such as steroids. Treatment depends on whether the condition is identified as open-angle or angle-closure glaucoma.
With pseudoexfoliation glaucoma (also known as PEX or exfoliation glaucoma) the pressure results from the accumulation of microscopic granular protein fibers, which can block normal drainage of the aqueous humor. PEX is prevalent in Scandinavia, primarily in those over 70, and more commonly in women.
Pigmentary glaucoma (also known as pigmentary dispersion syndrome) is caused by pigment cells sloughing off from the back of the iris and floating around in the aqueous humor. Over time, these pigment cells can accumulate in the anterior chamber and begin to clog the trabecular meshwork. It is a rare condition that occurs mostly among white males in their mid-20s to 40s, most of whom are nearsighted.
Primary juvenile glaucoma is a neonate or juvenile abnormality in which ocular hypertension is evident at birth or shortly thereafter and is caused by abnormalities in the anterior chamber angle development that blocks the outflow of the aqueous humor.
Uveitic glaucoma is caused by uveitis, the swelling and inflammation of the uvea, the middle layer of the eye. The uvea provides most of the blood supply to the retina. Increased eye pressure can result from the inflammation itself or from the steroids used to treat it.
Visual field defects in glaucoma
In glaucoma, visual field defects result from damage to the retinal nerve fiber layer (RNFL). Field defects are seen mainly in primary open angle glaucoma. Because of the unique anatomy of the RNFL, many noticeable patterns are seen in the visual field. Most of the early glaucomatous changes are seen within the central visual field, mainly in Bjerrum's area, 10-20° from fixation.
Following are the common glaucomatous field defects:
- Generalized depression: Generalized depression is seen in early stages of glaucoma and many other conditions. Mild constriction of central and peripheral visual field due to isopter contraction comes under generalized depression. If all the isopters show similar depression to the same point, it is then called a contraction of visual field. Relative paracentral scotomas are the areas where smaller and dimmer targets are not visualized by the patient. Larger and brighter targets can be seen. Small paracentral depressions, mainly superonasal are seen in normal tension glaucoma (NTG). The generalized depression of the entire field may be seen in cataract also.
- Baring of blind spot: "Baring of blind spot" means exclusion of blind spot from the central field due to inward curve of the outer boundary of 30° central field. It is only an early non-specific visual field change, without much diagnostic value in glaucoma.
- Small wing-shaped Paracentral scotoma: Small wing-shaped Paracentral scotoma within Bjerrum's area is the earliest clinically significant field defect seen in glaucoma. It may also be associated with nasal steps. Scotoma may be seen above or below the blind spot.
- Seidel's sickle-shaped scotoma: Paracentral scotoma joins with the blind spot to form the Seidel sign.
- Arcuate or Bjerrum's scotoma: An extension of the Seidel scotoma that arches from the blind spot around fixation within Bjerrum's area, ending near the horizontal raphe on the nasal side.
- Ring or Double arcuate scotoma: Two arcuate scotomas join to form a Ring or Double arcuate scotoma. This defect is seen in advanced stages of glaucoma.
- Roenne's central nasal step: It is created when two arcuate scotomas run in different arcs to form a right angled defect. This is also seen in advanced stages of glaucoma.
- Peripheral field defects: Peripheral field defects may occur in early or late stages of glaucoma. Roenne's peripheral nasal steps occur due to contraction of peripheral isopter.
- Tubular vision: Tunnel vision is the loss of peripheral vision with retention of central vision, resulting in a constricted circular tunnel-like field of vision. It is seen in the end stages of glaucoma. Retinitis pigmentosa is another disease that causes tubular vision.
- Temporal island of vision: It is also seen in end stages of glaucoma. The temporal islands lie outside of the central 24 to 30° visual field, so it may not be visible with standard central field measurements done in glaucoma.
The United States Preventive Services Task Force stated, as of 2013, that there was insufficient evidence to recommend for or against screening for glaucoma. Therefore, there is no national screening program in the US. Screening, however, is recommended starting at age 40 by the American Academy of Ophthalmology.
There is a glaucoma screening program in the UK. Those at risk are advised to have a dilated eye examination at least once a year.
The modern goals of glaucoma management are to avoid glaucomatous damage and nerve damage, and preserve visual field and total quality of life for patients, with minimal side-effects. This requires appropriate diagnostic techniques and follow-up examinations, and judicious selection of treatments for the individual patient. Although intraocular pressure (IOP) is only one of the major risk factors for glaucoma, lowering it via various pharmaceuticals and/or surgical techniques is currently the mainstay of glaucoma treatment. A review of people with primary open-angle glaucoma and ocular hypertension concluded that medical IOP-lowering treatment slowed down the progression of visual field loss.
Vascular flow and neurodegenerative theories of glaucomatous optic neuropathy have prompted studies on various neuroprotective therapeutic strategies, including nutritional compounds, some of which may be regarded by clinicians as safe for use now, while others are on trial. Mental stress is also considered both a consequence and a cause of vision loss, which means that stress management training, autogenic training, and other techniques for coping with stress can be helpful.
Intraocular pressure can be lowered with medication, usually eye drops. Several classes of medications are used to treat glaucoma, with several medications in each class. By reducing pressure, eye drops also help to lower the risk of sight loss. Latanoprost, for example, seems to halve this risk.
Each of these medicines may have local and systemic side effects. Wiping the eye with an absorbent pad after the administration of eye drops may result in fewer adverse effects, like the growth of eyelashes and hyperpigmentation in the eyelid. Initially, glaucoma drops may reasonably be started in either one or in both eyes.
The possible neuroprotective effects of various topical and systemic medications are also being investigated.
- Prostaglandin analogs, such as latanoprost, bimatoprost and travoprost, increase uveoscleral outflow of aqueous humor. Bimatoprost also increases trabecular outflow.
- Topical beta-adrenergic receptor antagonists, such as timolol, levobunolol, and betaxolol, decrease aqueous humor production by the epithelium of the ciliary body.
- Alpha2-adrenergic agonists, such as brimonidine and apraclonidine, work by a dual mechanism, decreasing aqueous humor production and increasing uveoscleral outflow.
- Less-selective alpha agonists, such as epinephrine, decrease aqueous humor production through vasoconstriction of ciliary body blood vessels, useful only in open-angle glaucoma. Epinephrine's mydriatic effect, however, renders it unsuitable for closed-angle glaucoma due to further narrowing of the uveoscleral outflow (i.e. further closure of trabecular meshwork, which is responsible for absorption of aqueous humor).
- Miotic agents (parasympathomimetics), such as pilocarpine, work by contraction of the ciliary muscle, opening the trabecular meshwork and allowing increased outflow of the aqueous humour. Echothiophate, an acetylcholinesterase inhibitor, is used in chronic glaucoma.
- Carbonic anhydrase inhibitors, such as dorzolamide, brinzolamide, and acetazolamide, lower secretion of aqueous humor by inhibiting carbonic anhydrase in the ciliary body.
Poor compliance with medications and follow-up visits is a major reason for vision loss in glaucoma patients. A 2003 study of patients in an HMO found half failed to fill their prescriptions the first time, and one-quarter failed to refill their prescriptions a second time. Patient education and communication must be ongoing to sustain successful treatment plans for this lifelong disease with no early symptoms.
Adherence to medication protocol can be confusing and expensive; if side effects occur, the patient must be willing either to tolerate them or to communicate with the treating physician to improve the drug regimen.
Argon laser trabeculoplasty (ALT) may be used to treat open-angle glaucoma, but this is a temporary solution, not a cure. A 50-μm argon laser spot is aimed at the trabecular meshwork to stimulate the opening of the mesh to allow more outflow of aqueous fluid. Usually, half of the angle is treated at a time. Traditional laser trabeculoplasty uses a thermal argon laser in an argon laser trabeculoplasty procedure.
Nd:YAG laser peripheral iridotomy (LPI) may be used in patients susceptible to or affected by angle closure glaucoma or pigment dispersion syndrome. During laser iridotomy, laser energy is used to make a small, full-thickness opening in the iris to equalize the pressure between the front and back of the iris, thus correcting any abnormal bulging of the iris. In people with narrow angles, this can uncover the trabecular meshwork. In some cases of intermittent or short-term angle closure, this may lower the eye pressure. Laser iridotomy reduces the risk of developing an attack of acute angle closure. In most cases, it also reduces the risk of developing chronic angle closure or of adhesions of the iris to the trabecular meshwork.
Diode laser cycloablation lowers IOP by reducing aqueous secretion by destroying secretory ciliary epithelium.
Both laser and conventional surgeries are performed to treat glaucoma. Surgery is the primary therapy for those with congenital glaucoma. Generally, these operations are a temporary solution, as there is not yet a cure for glaucoma.
Canaloplasty is a nonpenetrating procedure using microcatheter technology. To perform a canaloplasty, an incision is made into the eye to gain access to the Schlemm's canal in a similar fashion to a viscocanalostomy. A microcatheter will circumnavigate the canal around the iris, enlarging the main drainage channel and its smaller collector channels through the injection of a sterile, gel-like material called viscoelastic. The catheter is then removed and a suture is placed within the canal and tightened.
By opening the canal, the pressure inside the eye may be relieved, although the reason is unclear, since the canal (of Schlemm) does not have any significant fluid resistance in glaucoma or healthy eyes. Long-term results are not available.
The most common conventional surgery performed for glaucoma is the trabeculectomy. Here, a partial thickness flap is made in the scleral wall of the eye, and a window opening is made under the flap to remove a portion of the trabecular meshwork. The scleral flap is then sutured loosely back in place to allow fluid to flow out of the eye through this opening, resulting in lowered intraocular pressure and the formation of a bleb or fluid bubble on the surface of the eye.
Scarring can occur around or over the flap opening, causing it to become less effective or lose effectiveness altogether. Traditionally, chemotherapeutic adjuvants, such as mitomycin C (MMC) or 5-fluorouracil (5-FU), are applied with soaked sponges on the wound bed to prevent filtering blebs from scarring by inhibiting fibroblast proliferation. Contemporary alternatives to prevent the scarring of the meshwork opening include the sole or combinative implementation of nonchemotherapeutic adjuvants such as the Ologen collagen matrix, which has been clinically shown to increase the success rates of surgical treatment.
Collagen matrix prevents scarring by randomizing and modulating fibroblast proliferation in addition to mechanically preventing wound contraction and adhesion.
Glaucoma drainage implants
The first glaucoma drainage implant was developed in 1966. Since then, several types of implants have followed on from the original: the Baerveldt tube shunt, or the valved implants, such as the Ahmed glaucoma valve implant or the ExPress Mini Shunt and the later generation pressure ridge Molteno implants. These are indicated for glaucoma patients not responding to maximal medical therapy, with previous failed guarded filtering surgery (trabeculectomy). The flow tube is inserted into the anterior chamber of the eye, and the plate is implanted underneath the conjunctiva to allow a flow of aqueous fluid out of the eye into a chamber called a bleb.
- The first-generation Molteno and other nonvalved implants sometimes require the ligation of the tube until the bleb formed is mildly fibrosed and water-tight. This is done to reduce postoperative hypotony—sudden drops in postoperative intraocular pressure.
- Valved implants, such as the Ahmed glaucoma valve, attempt to control postoperative hypotony by using a mechanical valve.
- Ab interno implants, such as the Xen Gel Stent, are transscleral implants by an ab interno procedure to channel aqueous humor into the non-dissected Tenon's space, creating a subconjunctival drainage area similar to a bleb. The implants are transscleral and different from other ab interno implants that do not create a transscleral drainage, such as iStent, CyPass, or Hydrus.
The ongoing scarring over the conjunctival dissipation segment of the shunt may become too thick for the aqueous humor to filter through. This may require preventive measures using antifibrotic medications such as 5-fluorouracil or mitomycin-C (during the procedure), or other non-antifibrotic methods such as a collagen matrix implant or a biodegradable spacer, and it may later necessitate revision surgery using donor patch grafts, a collagen matrix implant, or both. For a painful blind glaucomatous eye, and in some other cases of glaucoma, cyclocryotherapy for ciliary body ablation may be considered.
Laser-assisted nonpenetrating deep sclerectomy
The most common surgical approach currently used for the treatment of glaucoma is trabeculectomy, in which the sclera is punctured to alleviate intraocular pressure.
Nonpenetrating deep sclerectomy (NPDS) is a similar but modified procedure. Instead of puncturing the scleral bed and trabecular meshwork under a scleral flap, a second deep scleral flap is created and excised, and Schlemm's canal is deroofed, allowing liquid from the inner eye to percolate out and thus lowering intraocular pressure without penetrating the eye. NPDS has been shown to have significantly fewer side effects than trabeculectomy. However, NPDS is performed manually, requires a higher level of skill, and may be assisted with instruments. In order to prevent wound adhesion after deep scleral excision and to maintain good filtering results, NPDS, like other non-penetrating procedures, is sometimes performed with a variety of biocompatible spacers or devices, such as the Aquaflow collagen wick, the Ologen collagen matrix, or the Xenoplast glaucoma implant.
Laser-assisted NPDS is performed with the use of a CO2 laser system. The laser-based system is self-terminating once the required scleral thickness and adequate drainage of the intraocular fluid have been achieved. This self-regulation effect is achieved as the CO2 laser essentially stops ablating as soon as it comes in contact with the intraocular percolated liquid, which occurs as soon as the laser reaches the optimal residual intact layer thickness.
For people with chronic closed-angle glaucoma, lens extraction can relieve the block created by the pupil and help regulate the intraocular pressure.
In open-angle glaucoma, the typical progression from normal vision to complete blindness takes about 25 years to 70 years without treatment, depending on the method of estimation used. The intraocular pressure can also have an effect, with higher pressures reducing the time until blindness.
As of 2010, there were 44.7 million people in the world with open angle glaucoma. The same year, there were 2.8 million people in the United States with open angle glaucoma. By 2020, the prevalence is projected to increase to 58.6 million worldwide and 3.4 million in the United States.
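As a rough illustration of the projected growth implied by those figures, the short Python sketch below computes the percentage increases from 2010 to the 2020 projections. The numbers are simply those cited above, and the function name is illustrative rather than taken from any source.

```python
# Percentage growth implied by the cited prevalence figures
# (2010 -> 2020 projection): 44.7M -> 58.6M worldwide, 2.8M -> 3.4M in the US.

def percent_increase(initial: float, projected: float) -> float:
    """Percentage increase from an initial value to a projected value."""
    return (projected - initial) / initial * 100

print(f"Worldwide: {percent_increase(44.7, 58.6):.1f}% increase")    # ~31.1%
print(f"United States: {percent_increase(2.8, 3.4):.1f}% increase")  # ~21.4%
```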
Both internationally and in the United States, glaucoma is the second-leading cause of blindness. Globally, cataracts are a more common cause. Glaucoma is also the leading cause of blindness in African Americans, who have higher rates of primary open-angle glaucoma. Bilateral vision loss can negatively affect mobility and interfere with driving.
A meta-analysis published in 2009 found that people with primary open angle glaucoma do not have increased mortality rates, or increased risk of cardiovascular death.
The association of elevated intraocular pressure (IOP) and glaucoma was first described by Englishman Richard Banister in 1622: "...that the Eye be grown more solid and hard, then naturally it should be...". Angle-closure glaucoma was treated with cataract extraction by John Collins Warren in Boston as early as 1806. The invention of the ophthalmoscope by Hermann Helmholtz in 1851 enabled ophthalmologists for the first time to identify the pathological hallmark of glaucoma, the excavation of the optic nerve head due to retinal ganglion cell loss. The first reliable instrument to measure intraocular pressure was invented by Norwegian ophthalmologist Hjalmar August Schiøtz in 1905. About half a century later, Hans Goldmann in Berne, Switzerland, developed his applanation tonometer, which, despite numerous newer diagnostic innovations, is still considered the gold standard for determining this crucial pathogenic factor. In the late 20th century, further pathomechanisms beyond elevated IOP were discovered and became subjects of research, such as insufficient blood supply (often associated with low or irregular blood pressure) to the retina and optic nerve head. The first drug to reduce IOP, pilocarpine, was introduced in the 1870s; other major innovations in pharmacological glaucoma therapy were the introduction of beta blocker eye drops in the 1970s and of prostaglandin analogues and topical (locally administered) carbonic anhydrase inhibitors in the mid-1990s. Early surgical techniques like iridectomy and fistulating methods have recently been supplemented by less invasive procedures like small implants, a range of options now widely called MIGS (micro-invasive glaucoma surgery).
The word "glaucoma" comes from the Ancient Greek γλαύκωμα, a derivative of γλαυκóς, which commonly described the color of eyes which were not dark (i.e. blue, green, light gray). Eyes described as γλαυκóς due to disease might have had a gray cataract in the Hippocratic era, or, in the early Common Era, the greenish pupillary hue sometimes seen in angle-closure glaucoma. This colour is reflected in the Chinese word for glaucoma, 青光眼 (qīngguāngyǎn), literally “cyan-light eye”.
Eye drops vs. other treatments
The TAGS randomised controlled trial investigated whether eye drops or trabeculectomy is more effective in treating advanced primary open-angle glaucoma. After two years, researchers found that vision and quality of life were similar with both treatments. At the same time, eye pressure was lower in people who underwent surgery, and in the long run surgery is more cost-effective.
The LiGHT trial compared the effectiveness of eye drops and selective laser trabeculoplasty for open-angle glaucoma. Both contributed to a similar quality of life, but most people undergoing laser treatment were able to stop using eye drops. Laser trabeculoplasty was also shown to be more cost-effective.
Rho kinase inhibitors
Rho kinase inhibitors, such as ripasudil, work by inhibiting the actin cytoskeleton, resulting in morphological changes in the trabecular meshwork and increased aqueous outflow. More compounds in this class are being investigated in phase 2 and phase 3 trials.
A 2013 Cochrane systematic review compared the effect of brimonidine and timolol in slowing the progression of open-angle glaucoma in adult participants. The results showed that participants assigned to brimonidine had less visual field progression than those assigned to timolol, though the results were not significant given the heavy loss to follow-up and limited evidence. The mean intraocular pressures for both groups were similar. Participants in the brimonidine group had a higher occurrence of medication-related side effects than participants in the timolol group.
Studies in the 1970s reported that the use of cannabis may lower intraocular pressure. In an effort to determine whether marijuana, or drugs derived from it, might be effective as a glaucoma treatment, the US National Eye Institute supported research studies from 1978 to 1984. These studies demonstrated some derivatives of marijuana lowered intraocular pressure when administered orally, intravenously, or by smoking, but not when topically applied to the eye.
In 2003, the American Academy of Ophthalmology released a position statement stating that cannabis was not more effective than prescription medications. Furthermore, no scientific evidence has been found that demonstrates increased benefits and/or diminished risks of cannabis use to treat glaucoma compared with the wide variety of pharmaceutical agents now available.
In 2010 the American Glaucoma Society published a position paper discrediting the use of cannabis as a legitimate treatment for elevated intraocular pressure, for reasons including short duration of action and side effects that limit many activities of daily living.
Health disparities in glaucoma
A study conducted in the UK showed that people living in areas of high deprivation were likely to be diagnosed at a later stage of the disease. It also showed that there was a lack of professional ophthalmic services in areas of high deprivation.
A 2017 study showed a large difference in the volume of glaucoma testing in the US depending on the type of insurance. Researchers reviewed 21,766 persons aged 40 years or older with newly diagnosed open-angle glaucoma (OAG) and found that Medicaid recipients had a substantially lower volume of glaucoma testing performed compared with patients who had commercial health insurance.
In research and clinical trials
Results from a meta-analysis of 33,428 primary open-angle glaucoma (POAG) participants published in 2021 suggest that there are substantial ethnic and racial disparities in clinical trials in the US. Although ethnic and racial minorities have a higher disease burden, 70.7% of the study participants were White, as opposed to 16.8% Black and 3.4% Hispanic/Latino.
- Architectural style: Neoclassical, Palladian
- Address: 1600 Pennsylvania Avenue NW, Washington, D.C. 20500
- Coordinates: 38°53′52″N 77°02′11″W (38.8977°N, 77.0365°W)
- Current tenants: Joe Biden, President of the United States, and the First Family
- Construction started: October 13, 1792
- Completed: November 1, 1800
- Floor area: 55,000 sq ft (5,100 m2)
- NRHP reference No.: 19600001
- Designated NHL: December 19, 1960
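The coordinates above are given both in degrees/minutes/seconds and in decimal form. The small Python sketch below shows the standard conversion between the two; the function name is illustrative, and any difference in the last decimal place reflects rounding in the published figures.

```python
# Convert the listed coordinates (38°53'52" N, 77°02'11" W) to decimal degrees.

def dms_to_decimal(degrees: int, minutes: int, seconds: float) -> float:
    """Convert a degrees/minutes/seconds value to decimal degrees."""
    return degrees + minutes / 60 + seconds / 3600

latitude = dms_to_decimal(38, 53, 52)    # north latitude is positive
longitude = -dms_to_decimal(77, 2, 11)   # west longitude is negative
print(f"{latitude:.4f}, {longitude:.4f}")  # ~38.8978, -77.0364
```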
The White House is the official residence and workplace of the president of the United States. It is located at 1600 Pennsylvania Avenue NW in Washington, D.C., and has been the residence of every U.S. president since John Adams in 1800, when the national capital was moved from Philadelphia to Washington, D.C. The term "White House" is often used as a metonym for the president and his advisers.
The residence was designed by Irish-born architect James Hoban in the neoclassical style. Hoban modelled the building on Leinster House in Dublin, a building which today houses the Oireachtas, the Irish legislature. Construction took place between 1792 and 1800, using Aquia Creek sandstone painted white. When Thomas Jefferson moved into the house in 1801, he and architect Benjamin Henry Latrobe added low colonnades on each wing to conceal what then were stables and storage. In 1814, during the War of 1812, the mansion was set ablaze by British forces in the Burning of Washington, destroying the interior and charring much of the exterior. Reconstruction began almost immediately, and President James Monroe moved into the partially reconstructed Executive Residence in October 1817. Exterior construction continued with the addition of the semi-circular South portico in 1824 and the North portico in 1829.
Because of crowding within the executive mansion itself, President Theodore Roosevelt had all work offices relocated to the newly constructed West Wing in 1901. Eight years later, in 1909, President William Howard Taft expanded the West Wing and created the first Oval Office, which was eventually moved as the section was expanded. In the main mansion (Executive Residence), the third floor attic was converted to living quarters in 1927 by augmenting the existing hip roof with long shed dormers. A newly constructed East Wing was used as a reception area for social events; Jefferson's colonnades connected the new wings. The East Wing alterations were completed in 1946, creating additional office space. By 1948, the residence's load-bearing walls and wood beams were found to be close to failure. Under Harry S. Truman, the interior rooms were completely dismantled and a new internal load-bearing steel frame was constructed inside the walls. On the exterior, the Truman Balcony was added. Once the structural work was completed, the interior rooms were rebuilt.
The present-day White House complex includes the Executive Residence, the West Wing, the East Wing, the Eisenhower Executive Office Building (the former State Department, which now houses offices for the president's staff and the vice president), and Blair House, a guest residence. The Executive Residence is made up of six stories: the Ground Floor, State Floor, Second Floor, and Third Floor, and a two-story basement. The property is a National Heritage Site owned by the National Park Service and is part of the President's Park. In 2007, it was ranked second on the American Institute of Architects list of "America's Favorite Architecture".
Following his April 1789 inauguration, President George Washington occupied two private houses in New York City, which served as the executive mansion. He lived at the first, the Franklin House, owned by Treasury Commissioner Samuel Osgood, at 3 Cherry Street, through late February 1790. The executive mansion then moved to larger quarters at the Alexander Macomb House at 39–41 Broadway, where he stayed with his wife Martha and a small staff until August 1790. In May 1790, construction began on a new official residence in Manhattan called Government House.
Washington never lived at the Government House, since the national capital was moved to Philadelphia and then to Washington, D.C. before its completion. The July 1790 Residence Act designated that the capital be permanently located in the new Federal District, and temporarily in Philadelphia for ten years while the permanent capital was built. Philadelphia rented the mansion of Robert Morris, a merchant, at 190 High Street, now 524–30 Market Street, as the President's House, which Washington occupied from November 1790 to March 1797. Since the house was too small to accommodate the 30 people who then made up the presidential family, staff, and servants, Washington had it enlarged.
President John Adams also occupied the High Street mansion in Philadelphia from March 1797 to May 1800. On Saturday, November 1, 1800, Adams became the first president to occupy the White House. The President's House in Philadelphia was converted into Union Hotel and later used for stores before being demolished in 1832.
Philadelphia began construction of a much grander presidential mansion several blocks away in 1792. It was nearly completed by the time of Adams' 1797 inauguration. However, Adams chose not to occupy it, saying he did not have Congressional authorization to lease the building. It remained vacant until 1800 when it was sold to the University of Pennsylvania.
The President's House was a major feature of Pierre (Peter) Charles L'Enfant's 1791 plan for the newly established federal city of Washington, D.C. Washington and his Secretary of State, Thomas Jefferson, who both had personal interests in architecture, agreed that the design of the White House and the Capitol would be chosen in a design competition.
Nine proposals were submitted for the new presidential residence with the award going to Irish-American architect James Hoban. Hoban supervised the construction of both the U.S. Capitol and the White House. Hoban was born in Ireland and trained at the Dublin Society of Arts. He emigrated to the U.S. after the American Revolution, first seeking work in Philadelphia and later finding success in South Carolina, where he designed the state capitol in Columbia.
President Washington visited Charleston, South Carolina, in May 1791 on his Southern Tour, and saw the Charleston County Courthouse then under construction, which had been designed by Hoban. Washington is reputed to have met with Hoban during the visit. The following year, Washington summoned the architect to Philadelphia and met with him in June 1792.
On July 16, 1792, the president met with the commissioners of the federal city to make his judgment in the architectural competition. His review is recorded as being brief, and he quickly selected Hoban's submission.
The Neoclassical design of the White House is based primarily on architectural concepts inherited from the Roman architect Vitruvius and the Venetian architect Andrea Palladio. The design of the upper floors also includes elements based on Dublin's Leinster House, which later became the seat of the Irish parliament (Oireachtas). The upper windows with alternate triangular and segmented pediments are inspired by the Irish building. Additionally, several Georgian-era Irish country houses have been suggested as sources of inspiration for the overall floor plan, including the bow-fronted south front and the former niches in the present-day Blue Room.
The first official White House guide, published in 1962, suggested a link between Hoban's design for the South Portico and Château de Rastignac, a neoclassical country house in La Bachellerie in the Dordogne region of France. Construction on the French house was initially started before 1789, interrupted by the French Revolution for 20 years, and then finally built between 1812 and 1817 based on Salat's pre-1789 design.
The conceptual link between the two houses has been criticized because Hoban did not visit France. Supporters of the connection contend that Thomas Jefferson, during his tour of Bordeaux in 1789, viewed Salat's architectural drawings, which were on file at École Spéciale d'Architecture. On his return to the U.S., Jefferson then shared the influence with Washington, Hoban, Monroe, and Benjamin Henry Latrobe.
Construction of the White House began at noon on October 13, 1792, with the laying of the cornerstone. The main residence and foundations of the house were built largely by enslaved and free African-American laborers, as well as by employed Europeans. Much of the other work on the house was done by immigrants, many of whom had not yet obtained citizenship; this included the sandstone walls, erected by Scottish immigrants, the high-relief rose and garland decorations above the north entrance, and the fish-scale pattern beneath the pediments of the window hoods.
There are conflicting claims as to where the sandstone used in the construction of the White House originated. Some reports suggest sandstone from the Croatian island of Brač, specifically the Pučišća quarry whose stone was used to build the ancient Diocletian's Palace in Split, was used in the building's original construction. However, researchers believe limestone from the island was used in the 1902 renovations and not the original construction. Others suggest the original sandstone simply came from Aquia Creek in Stafford County, Virginia, since importation of the stone at the time would have proved too costly. The initial construction took place over a period of eight years at a reported cost of $232,371.83 (equivalent to $3,710,000 in 2021). Although not yet completed, the White House was ready for occupancy circa November 1, 1800.
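Since the construction cost is quoted both in original dollars and in 2021 dollars, the following minimal Python sketch computes the price-level multiplier implied by those two figures; it is only a back-of-the-envelope check using the numbers cited above, not an official inflation calculation.

```python
# Multiplier implied by the cited construction cost figures.
original_cost = 232_371.83   # reported cost of the initial construction
cost_in_2021 = 3_710_000     # equivalent value in 2021 dollars, as cited

multiplier = cost_in_2021 / original_cost
print(f"Implied price-level multiplier: {multiplier:.1f}x")  # roughly 16x
```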
Due in part to material and labor shortages, the house as built was far more modest than Pierre Charles L'Enfant's plan for a grand palace, which was five times larger than the structure that was eventually completed. The finished building contained only two main floors instead of the planned three, and a less costly brick served as a lining for the stone façades. When construction was finished, the porous sandstone walls were whitewashed with a mixture of lime, rice glue, casein, and lead, giving the house its familiar color and name.
The main entrance is located on the north façade under a porte cochere with Ionic columns. The ground floor is hidden by a raised carriage ramp and parapet. The central three bays are situated behind a prostyle portico that was added circa 1830. The windows of the four bays flanking the portico, at first-floor level, have alternating pointed and segmented pediments, while the second-floor pediments are flat. A lunette fanlight and a sculpted floral festoon surmount the entrance. The roofline is hidden by a balustraded parapet.
The three-level southern façade combines Palladian and neoclassical architectural styles. The ground floor is rusticated in the Palladian fashion. The south portico was completed in 1824. At the center of the southern façade is a neoclassical projected bow of three bays. The bow is flanked by five bays, the windows of which, as on the north façade, have alternating segmented and pointed pediments at first-floor level. The bow has a ground-floor double staircase leading to an Ionic colonnaded loggia and the Truman Balcony, built in 1946. The more modern third floor is hidden by a balustraded parapet and plays no part in the composition of the façade.
The building was originally variously referred to as the President's Palace, Presidential Mansion, or President's House. The earliest evidence of the public calling it the "White House" was recorded in 1811. A myth emerged that during the rebuilding of the structure after the Burning of Washington, white paint was applied to mask the burn damage it had suffered, giving the building its namesake hue. The name "Executive Mansion" was used in official contexts until President Theodore Roosevelt established "The White House" as its formal name in 1901 via Executive Order. The current letterhead wording and arrangement of "The White House" with the word "Washington" centered beneath it dates to the administration of Franklin D. Roosevelt.
Although the structure was not completed until some years after the presidency of George Washington, there is speculation that the name of the traditional residence of the president of the United States may have been derived from Martha Washington's home, White House Plantation, in Virginia, where the nation's first president courted the first lady in the mid-18th century.
On Saturday, November 1, 1800, John Adams became the first president to take residence in the building. The next day he wrote his wife Abigail: "I pray Heaven to bestow the best of blessings on this House, and all that shall hereafter inhabit it. May none but honest and wise men ever rule under this roof." President Franklin D. Roosevelt had Adams's blessing carved into the mantel in the State Dining Room.
Adams lived in the house only briefly before Thomas Jefferson moved into the "pleasant country residence" in 1801. Despite his complaints that the house was too big ("big enough for two emperors, one pope, and the grand lama in the bargain"), Jefferson considered how the White House might be added to. With Benjamin Henry Latrobe, he helped lay out the design for the East and West Colonnades, small wings that help conceal the domestic operations of laundry, a stable and storage. Today, Jefferson's colonnades link the residence with the East and West Wings.
In 1814, during the War of 1812, the White House was set ablaze by British forces during the Burning of Washington, in retaliation for the American destruction of York, Port Dover and other towns in Upper Canada; much of Washington was affected by these fires as well. Only the exterior walls remained, and they had to be torn down and mostly reconstructed because of weakening from the fire and subsequent exposure to the elements, except for portions of the south wall. Of the numerous objects taken from the White House when it was ransacked by British forces, only three have been recovered.
White House employees and slaves rescued a copy of the Lansdowne portrait, and in 1939 a Canadian man returned a jewelry box to President Franklin Roosevelt, claiming that his grandfather had taken it from Washington; in the same year, a medicine chest that had belonged to President Madison was returned by the descendants of a Royal Navy officer. Some observers allege that most of the spoils of war taken during the sack were lost when a convoy of British ships led by HMS Fantome sank en route to Halifax off Prospect during a storm on the night of November 24, 1814, even though Fantome had no involvement in that action.
After the fire, President James Madison resided in the Octagon House from 1814 to 1815, and then in the Seven Buildings from 1815 to the end of his term. Meanwhile, both Hoban and Latrobe contributed to the design and oversight of the reconstruction, which lasted from 1815 until 1817. The south portico was constructed in 1824 during the James Monroe administration. The north portico was built in 1830. Though Latrobe proposed similar porticos before the fire in 1814, both porticos were built as designed by Hoban. An elliptical portico at Château de Rastignac in La Bachellerie, France, with nearly identical curved stairs, is speculated as the source of inspiration due to its similarity with the South Portico, although this matter is one of great debate.
Italian artisans, brought to Washington to help in constructing the U.S. Capitol, carved the decorative stonework on both porticos. Contrary to speculation, the North Portico was not modeled on a similar portico on another Dublin building, the Viceregal Lodge (now Áras an Uachtaráin, residence of the president of Ireland), for its portico postdates the White House porticos' design. For the North Portico, a variation on the Ionic Order was devised, incorporating a swag of roses between the volutes. This was done to link the new portico with the earlier carved roses above the entrance.
By the time of the American Civil War, the White House had become overcrowded. Its location, just north of a canal and swampy lands that provided a breeding ground for malaria and other diseases, was also questioned. Brigadier General Nathaniel Michler was tasked with proposing solutions to address these concerns. He proposed abandoning the use of the White House as a residence, and he designed a new estate for the first family at Meridian Hill in Washington, D.C. Congress, however, rejected the plan. Another option was Metropolis View, which is now the campus of The Catholic University of America.
When Chester A. Arthur took office in 1881, he ordered renovations to the White House to take place as soon as the recently widowed Lucretia Garfield moved out. Arthur inspected the work almost nightly and made several suggestions. Louis Comfort Tiffany was asked to send selected designers to assist. Over twenty wagonloads of furniture and household items were removed from the building and sold at a public auction. All that was saved were bust portraits of John Adams and Martin Van Buren. A proposal was made to build a new residence south of the White House, but it failed to gain support.
In the fall of 1882, work was done on the main corridor, including tinting the walls pale olive and adding squares of gold leaf, and decorating the ceiling in gold and silver, with colorful traceries woven to spell "USA." The Red Room was painted a dull Pomeranian red, and its ceiling was decorated with gold, silver, and copper stars and stripes of red, white, and blue. A fifty-foot jeweled Tiffany glass screen, supported by imitation marble columns, replaced the glass doors that separated the main corridor from the north vestibule.
In 1891, First Lady Caroline Harrison proposed major extensions to the White House, including a National Wing on the east for a historical art gallery, and a wing on the west for official functions. A plan was devised by Colonel Theodore A. Bingham that reflected the Harrison proposal. These plans were ultimately rejected.
However, in 1902, Theodore Roosevelt hired McKim, Mead & White to carry out expansions and renovations in a neoclassical style suited to the building's architecture, removing the Tiffany screen and all Victorian additions. Charles McKim himself designed and managed the project, which gave more living space to the president's large family by removing a staircase in the West Hall and moving executive office staff from the second floor of the residence into the new West Wing.
President William Howard Taft enlisted the help of architect Nathan C. Wyeth to add additional space to the West Wing, which included the addition of the Oval Office. In 1925, Congress enacted legislation allowing the White House to accept gifts of furniture and art for the first time. The West Wing was damaged by fire on Christmas Eve 1929; Herbert Hoover and his aides moved back into it on April 14, 1930. In the 1930s, a second story was added, as well as a larger basement for White House staff, and President Franklin Roosevelt had the Oval Office moved to its present location: adjacent to the Rose Garden.
Decades of poor maintenance, the construction of a fourth-story attic during the Coolidge administration, and the addition of a second-floor balcony over the south portico for Harry S. Truman took a great toll on the brick and sandstone structure built around a timber frame. By 1948, the house was declared to be in imminent danger of collapse, forcing President Truman to commission a reconstruction and to live across the street at Blair House from 1949 to 1951.
The work, done by the firm of Philadelphia contractor John McShain, required the complete dismantling of the interior spaces, construction of a new load-bearing internal steel frame, and the reconstruction of the original rooms within the new structure. The total cost of the renovations was about $5.7 million ($60 million in 2021). Some modifications to the floor plan were made, the largest being the repositioning of the grand staircase to open into the Entrance Hall, rather than the Cross Hall. Central air conditioning was added, as well as two additional sub-basements providing space for workrooms, storage, and a bomb shelter. The Trumans moved back into the White House on March 27, 1952.
While the Truman reconstruction preserved the house's structure, many of the new interior finishes were generic and of little historic significance. Much of the original plasterwork, some dating back to the 1814–1816 rebuilding, was too damaged to reinstall, as was the original robust Beaux Arts paneling in the East Room. President Truman had the original timber frame sawed into paneling; the walls of the Vermeil Room, Library, China Room, and Map Room on the ground floor of the main residence were paneled in wood from the timbers.
Jacqueline Kennedy, wife of President John F. Kennedy (1961–63), directed a very extensive and historic redecoration of the house. She enlisted the help of Henry Francis du Pont of the Winterthur Museum to assist in collecting artifacts for the mansion, many of which had once been housed there. Other antiques, fine paintings, and improvements from the Kennedy period were donated to the White House by wealthy philanthropists, including the Crowninshield family, Jane Engelhard, Jayne Wrightsman, and the Oppenheimer family.
Stéphane Boudin of the House of Jansen, a Paris interior-design firm that had been recognized worldwide, was employed by Jacqueline Kennedy to assist with the decoration. Different periods of the early republic and world history were selected as a theme for each room: the Federal style for the Green Room, French Empire for the Blue Room, American Empire for the Red Room, Louis XVI for the Yellow Oval Room, and Victorian for the president's study, renamed the Treaty Room. Antique furniture was acquired, and decorative fabric and trim based on period documents was produced and installed.
The Kennedy restoration resulted in a more authentic White House of grander stature, which recalled the French taste of Madison and Monroe. In the Diplomatic Reception Room, Mrs. Kennedy installed an antique "Vue de l'Amérique Nord" wallpaper which Zuber & Cie had designed in 1834. The wallpaper had hung previously on the walls of another mansion until 1961 when that house was demolished for a grocery store. Just before the demolition, the wallpaper was salvaged and sold to the White House.
The first White House guidebook was produced under the direction of curator Lorraine Waxman Pearce with direct supervision from Mrs. Kennedy. Sales of the guidebook helped finance the restoration.
In a televised tour of the house on Valentine's Day in 1962, Kennedy showed her restoration of the White House to the public.
Congress enacted legislation in September 1961 declaring the White House a museum. Furniture, fixtures, and decorative arts could now be declared either historic or of artistic interest by the president. This prevented them from being sold (as many objects in the executive mansion had been in the past 150 years). When not in use or on display at the White House, these items were to be turned over to the Smithsonian Institution for preservation, study, storage, or exhibition. The White House retains the right to have these items returned.
Out of respect for the historic character of the White House, no substantive architectural changes have been made to the house since the Truman renovation. Since the Kennedy restoration, every presidential family has made some changes to the private quarters of the White House, but the Committee for the Preservation of the White House must approve any modifications to the State Rooms. Charged with maintaining the historical integrity of the White House, the congressionally-authorized committee works with each First Family – usually represented by the first lady, the White House curator, and the chief usher – to implement the family's proposals for altering the house.
During the Nixon Administration (1969–1974), First Lady Pat Nixon refurbished the Green Room, Blue Room, and Red Room, working with Clement Conger, the curator appointed by President Richard Nixon. Mrs. Nixon's efforts brought more than 600 artifacts to the house, the largest acquisition by any administration. Her husband created the modern press briefing room over Franklin Roosevelt's old swimming pool. Nixon also added a single-lane bowling alley to the White House basement.
Computers and the first laser printer were added during the Carter administration, and the use of computer technology was expanded during the Reagan administration. A Carter-era innovation, a set of solar water heating panels that were mounted on the roof of the White House, was removed during Reagan's presidency. Redecorations were made to the private family quarters and maintenance was made to public areas during the Reagan years. The house was accredited as a museum in 1988.
In the 1990s, Bill and Hillary Clinton refurbished some rooms with the assistance of Arkansas decorator Kaki Hockersmith, including the Oval Office, the East Room, Blue Room, State Dining Room, Lincoln Bedroom, and Lincoln Sitting Room. During the administration of George W. Bush, First Lady Laura Bush refurbished the Lincoln Bedroom in a style contemporary with the Lincoln era; the Green Room, Cabinet Room, and theater were also refurbished.
The White House became one of the first wheelchair-accessible government buildings in Washington when modifications were made during the presidency of Franklin D. Roosevelt, who used a wheelchair because of his paralytic illness. In the 1990s, Hillary Clinton, at the suggestion of Visitors Office Director Melinda N. Bates, approved the addition of a ramp in the East Wing corridor. It allowed easy wheelchair access for the public tours and special events that enter through the secure entrance building on the east side.
In 2003, the Bush administration reinstalled solar thermal heaters. These units are used to heat water for landscape maintenance personnel and for the presidential pool and spa. One hundred sixty-seven solar photovoltaic grid-tied panels were installed at the same time on the roof of the maintenance facility. The changes were not publicized as a White House spokeswoman said the changes were an internal matter. The story was picked up by industry trade journals. In 2013, President Barack Obama had a set of solar panels installed on the roof of the White House, making it the first time solar power would be used for the president's living quarters.
Today the group of buildings housing the presidency is known as the White House Complex. It includes the central Executive Residence flanked by the East Wing and West Wing. The Chief Usher coordinates day to day household operations. The White House includes six stories and 55,000 square feet (5,100 m2) of floor space, 132 rooms and 35 bathrooms, 412 doors, 147 windows, twenty-eight fireplaces, eight staircases, three elevators, five full-time chefs, a tennis court, a (single-lane) bowling alley, a movie theater (officially called the White House Family Theater), a jogging track, a swimming pool, and a putting green. It receives up to 30,000 visitors each week.
Main article: Executive Residence
The original residence is in the center. Two colonnades – one on the east and one on the west – designed by Jefferson, now serve to connect the East and West Wings added later. The Executive Residence houses the president's dwelling, as well as rooms for ceremonies and official entertaining. The State Floor of the residence building includes the East Room, Green Room, Blue Room, Red Room, State Dining Room, Family Dining Room, Cross Hall, Entrance Hall, and Grand Staircase. The Ground Floor is made up of the Diplomatic Reception Room, Map Room, China Room, Vermeil Room, Library, the main kitchen, and other offices.
The second floor family residence includes the Yellow Oval Room, East and West Sitting Halls, the White House Master Bedroom, President's Dining Room, the Treaty Room, Lincoln Bedroom and Queens' Bedroom, as well as two additional bedrooms, a smaller kitchen, and a private dressing room. The third floor consists of the White House Solarium, Game Room, Linen Room, a Diet Kitchen, and another sitting room (previously used as President George W. Bush's workout room).
Main article: West Wing
The West Wing houses the president's office (the Oval Office) and offices of his senior staff, with room for about 50 employees. It includes the Cabinet Room, where the president conducts business meetings and where the Cabinet meets, as well as the White House Situation Room, James S. Brady Press Briefing Room, and Roosevelt Room. In 2007, work was completed on renovations of the press briefing room, adding fiber optic cables and LCD screens for the display of charts and graphs. The makeover took 11 months and cost $8 million, of which news outlets paid $2 million. In September 2010, a two-year project began on the West Wing, creating a multistory underground structure.
Some members of the president's staff are located in the adjacent Eisenhower Executive Office Building, which was, until 1999, called the Old Executive Office Building and was historically the State War and Navy building.
The Oval Office, Roosevelt Room, and other portions of the West Wing were partially replicated on a sound stage and used as the setting for The West Wing television show.
Main article: East Wing
The East Wing, which contains additional office space, was added to the White House in 1942. Among its uses, the East Wing has intermittently housed the offices and staff of the first lady and the White House Social Office. Rosalynn Carter, in 1977, was the first to place her personal office in the East Wing and to formally call it the "Office of the First Lady". The East Wing was built during World War II in order to hide the construction of an underground bunker to be used in emergencies. The bunker has come to be known as the Presidential Emergency Operations Center.
The White House and grounds cover just over 18 acres (about 7.3 hectares). Before the construction of the North Portico, most public events were entered from the South Lawn, the grading and planting of which was ordered by Thomas Jefferson. Jefferson also drafted a planting plan for the North Lawn that included large trees that would have mostly obscured the house from Pennsylvania Avenue. During the mid-to-late 19th century a series of ever larger greenhouses were built on the west side of the house, where the current West Wing is located. During this period, the North Lawn was planted with ornate carpet-style flowerbeds.
The general layout of the White House grounds today is based on the 1935 design by Frederick Law Olmsted Jr. of the Olmsted Brothers firm, commissioned by President Franklin D. Roosevelt. During the Kennedy administration, the White House Rose Garden was redesigned by Rachel Lambert Mellon. The Rose Garden borders the West Colonnade. Bordering the East Colonnade is the Jacqueline Kennedy Garden, which was begun by Jacqueline Kennedy but completed after her husband's assassination.
On the weekend of June 23, 2006, a century-old American Elm (Ulmus americana L.) tree on the north side of the building came down during one of the many storms amid intense flooding. Among the oldest trees on the grounds are several magnolias (Magnolia grandiflora) planted by Andrew Jackson, including the Jackson Magnolia, reportedly grown from a sprout taken from the favorite tree of Jackson's recently deceased wife, the sprout planted after Jackson moved into the White House. The tree stood for nearly 200 years. In 2017, having become too weak to stand on its own, the tree was removed and replaced with one of its offspring.
Michelle Obama planted the White House's first organic garden and installed beehives on the South Lawn of the White House, which will supply organic produce and honey to the First Family and for state dinners and other official gatherings. In 2020, First Lady Melania Trump redesigned the Rose Garden.
See also: White House Visitors Office and List of White House security breaches
Like the English and Irish country houses it was modeled on, the White House was, from the start, open to the public until the early part of the 20th century. President Thomas Jefferson held an open house for his second inaugural in 1805, and many of the people at his swearing-in ceremony at the Capitol followed him home, where he greeted them in the Blue Room. Those open houses sometimes became rowdy: in 1829, President Andrew Jackson had to leave for a hotel when roughly 20,000 citizens celebrated his inauguration inside the White House. His aides ultimately had to lure the mob outside with washtubs filled with a potent cocktail of orange juice and whiskey.
The practice continued until 1885, when newly elected Grover Cleveland arranged for a presidential review of the troops from a grandstand in front of the White House instead of the traditional open house. Inspired by Washington's open houses in New York and Philadelphia, John Adams began the tradition of the White House New Year's Reception. Jefferson permitted public tours of his house, which have continued ever since, except during wartime, and began the tradition of an annual reception on the Fourth of July. Those receptions ended in the early 1930s. President Bill Clinton briefly revived the New Year's Day open house in his first term.
In February 1974, a stolen U.S. Army helicopter landed without authorization on the White House's grounds. Twenty years later, in 1994, a light plane flown by Frank Eugene Corder crashed on the White House grounds, and he died instantly.
As a result of increased security regarding air traffic in the capital, the White House was evacuated in May 2005 before an unauthorized aircraft could approach the grounds.
On May 20, 1995, primarily as a response to the Oklahoma City bombing of April 19, 1995, the United States Secret Service closed off Pennsylvania Avenue to vehicular traffic in front of the White House, from the eastern edge of Lafayette Park to 17th Street. Later, the closure was extended an additional block to the east to 15th Street, and East Executive Avenue, a small street between the White House and the Treasury Building.
After September 11, 2001, this change was made permanent, in addition to closing E Street between the South Portico of the White House and the Ellipse. In response to the Boston Marathon bombing, the road was closed to the public in its entirety for a period of two days.
The Pennsylvania Avenue closure has been opposed by organized civic groups in Washington, D.C. They argue that the closing impedes traffic flow unnecessarily and is inconsistent with the well-conceived historic plan for the city. As for security considerations, they note that the White House is set much farther back from the street than numerous other sensitive federal buildings are.
Prior to its inclusion within the fenced compound that now includes the Old Executive Office Building to the west and the Treasury Building to the east, this sidewalk served as a queuing area for the daily public tours of the White House. These tours were suspended in the wake of the September 11 attacks. In September 2003, they resumed on a limited basis for groups making prior arrangements through their Congressional representatives, or through their embassies in Washington for foreign nationals, and submitting to background checks, but the White House remained closed to the public. White House tours were suspended for most of 2013 because of budget sequestration. The White House reopened to the public in November 2013.
The White House Complex is protected by the United States Secret Service and the United States Park Police.
During the 2005 presidential inauguration, NASAMS (Norwegian Advanced Surface-to-Air Missile System) units were used to patrol the airspace over Washington, D.C. The same units have since been used to protect the president and all airspace around the White House, which is strictly prohibited to aircraft.
....we have in the archive, a letter from Franklin Roosevelt, the American president, and it's thanking a descendant of one of Victory's crews, who are returning a medicine chest to the White House....this image of, of Roosevelt sitting down and writing a wonderful, and patient thank you letter, when he knows that the Germans have just invaded Czechoslovakia.....
For the White House itself, and thus for the American people, Pat Nixon also decided to accelerate the collection process of fine antiques as well as historically associative pieces, adding some 600 paintings and antiques to the White House Collection. It was the single greatest collecting during any Administration.
There are digital elements in everything we do, from the infrastructure that helps us navigate our world to the things that allow us to communicate with one another. The usage of digital tools in architecture, such as those provided by planner5d.com, is growing rapidly and becoming more widespread. The growing interest in the influence these technologies are having and will have on our daily lives has increased the usage of these tools in architecture schools, small independent businesses, and multinational corporations.
This article will examine the important breakthroughs in digital thinking in architecture from the late nineteenth century to the present. Following the timeline from its origins to today also helps us anticipate what might happen in the future.
Origins: Morphological Thought
The future of architecture in the digital era is strongly intertwined with our understanding of our relationship with nature. Architectural design in the nineteenth and early twentieth centuries was influenced by the findings of Charles Darwin and the structuralist ideas of D'Arcy Thompson.
This line of thinking was popularized in architecture by Louis Sullivan, who developed the idea of functionalism. The concept holds that a building's design should follow its intended use. Frank Lloyd Wright and others who had worked for Sullivan early in their careers developed the concept of 'organic architecture,' emphasizing the significance of taking into account natural behavior while designing buildings.
From the way nature’s patterns and motifs were used to decorate buildings to their unifying geometric relationships, this idea permeated a great deal of work during this period.
In the early to mid-20th century, architects were profoundly impacted by their understanding of nature’s principles and the mathematics that support them. A new meaning for the phrase, “form follows function,” began to emerge during this time.
According to the Italian architect Luigi Moretti, if a building's function can be defined by its parameters, architects can use mathematical formulae to design its form. The sets of correlations that arose, according to Moretti, generated the concept of 'architettura parametrica,' in which the performance of architectural components is assigned specific parameters.
This was not the first time that architects, at a moment of technological progress, had adopted an algorithmic approach to thinking about architectural processes and forms.
Antoni Gaudi, the Catalan architect who designed Barcelona's Sagrada Familia (1882-1926), combined analog methods of computation, such as hanging chain models, with hands-on experimentation to create his models. When it came to designing, Gaudi rarely employed drawings, preferring instead to work directly with the properties of tangible and material objects.
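To make this idea of computing form from physical behavior concrete, the short sketch below approximates what a hanging chain model embodies: a chain suspended from two points settles into a catenary curve, and inverting that curve yields an efficient arch. This is an illustrative sketch only; the function name and parameters are ours and are not drawn from any historical source.

```python
import math

def catenary_arch(span, a, steps=20):
    """Approximate an inverted catenary (hanging-chain) arch.

    span  -- horizontal distance between the two supports
    a     -- shape parameter; smaller values give a taller, more peaked arch
    steps -- number of sample points across the span
    """
    points = []
    for i in range(steps + 1):
        x = -span / 2 + span * i / steps
        # A hanging chain follows y = a*cosh(x/a); measuring height relative
        # to the supports and flipping the sag turns it into an arch.
        height = a * math.cosh((span / 2) / a) - a * math.cosh(x / a)
        points.append((x, height))
    return points

# Example: a 10-unit span. Changing 'a' sweeps through a family of arch forms.
for x, y in catenary_arch(10, 3, steps=10):
    print(f"x = {x:6.2f}  height = {y:5.2f}")
```

Varying the shape parameter generates a whole family of related arches, which is the essence of parametric thinking: the designer manipulates the rule, and the forms follow.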
The American architect, systems theorist, and futurist Buckminster 'Bucky' Fuller was also a proto-parametricist. Fuller was a strong believer in the power of technological innovation to help people get more done with less and make better use of the resources they had. His work may be considered a precursor of today's digital design and fabrication in many ways.
Cybernetics has profoundly influenced architecture and design since the second half of the twentieth century. This theory holds that all human and machine behavior may be viewed as a series of feedback loops, with inputs and outputs flowing back and forth. Cedric Price, one of the most innovative British architects of the twentieth century, is perhaps the best-known architect to have adapted this theory.
During the 1980s and 1990s at the Architectural Association School of Architecture, Julia and John Frazer used generative and evolutionary algorithms as a new model for the design process.
The First Digital Explorations
Over the late 1980s and early 1990s, computational tools became increasingly important in the design process and in the production of drawings across the architecture and design professions.
American architect Frank Gehry employed digital technology to develop new design techniques and design software, and his influence on the use of computational tools has been far-reaching. This iterative design method was evident in Gehry's first computer-aided project, the Lewis Residence (1989-1995), which he used for years of experimenting.
The architect created what he dubbed a “morphological diagram” by using computers to explore various design alternatives.
Gehry and his colleagues developed a CATIA interface to make it easier to realize their ideas and keep them true to Gehry’s vision. Any unique machine tolerances were not considered when creating the data supplied by this software. Gehry Technologies created a separate BIM program called Digital Project based on this technology.
The Journey from Virtual to Physical
In the late 1990s and early 2000s, architectural notions studied in previous decades were realized. During the financial boom, vast sums of money were poured into architecture.
Architects who had previously worked solely on drawings, animations, installations, or small buildings could now compete for large-scale projects. The search for more expressive forms of architecture resulted in some of the most recognizable structures in cities worldwide.
FOA’s 1995 design for the Yokohama International Port Terminal was considered futuristic at the time. The top of the terminal simulates a dynamic landscape so that visitors can travel fluidly from the exterior to the terminal’s interior. The notion was made possible via computer-aided design.
Rise of Internet
Later, the World Wide Web was born. With the advent of new communication technologies, collaboration – essential to any architectural practice – could now happen much faster.
There was no more waiting for drawings to be sent back and forth, which had made the design process painfully tedious; people in different regions could now work on them almost in real time. It is now common to see numerous offices working together on huge international competitions.
Academic and industrial partners collaborate to find new shapes and forms using generative design processes. An extensive body of collaborative design research spanned across disciplines and industries, tying together diverse characteristics into intricate networks that took shape as the network’s dynamic relationships evolved.
AA EmTech focuses on material behavior and biomimetics to explore the potential of emergence and natural systems in architectural design.
Later, the Institute for Computational Design (ICD) at Stuttgart University integrated this method with research into innovative fabrication technologies. ICD continues to work with industrial and mobile robotics, often at the scale of a pavilion.
Technological advancement in architecture allowed morphological thinking to evolve in the 21st century. It gave new life to notions of non-linearity and agent-based modelling through digitalization.
Why Treatment Is Important
While many medications, such as antibiotics, cure the illnesses they are designed to treat, antidepressants do not cure depression. Their effect is only temporary. This is because antidepressants work by changing the brain’s chemistry, but only for as long as the person is taking them. They do not address the underlying causes of depression.
The National Institute of Mental Health shares that depression has a number of potential, and oftentimes complex, causes. Some may be genetic or biological and others may be environmental or psychological.
No matter the cause, untreated depression can be extremely debilitating to an individual, interfering with every part of life. In addition, severe depression can potentially lead to suicide if it does not receive immediate attention.
Depression has also been linked to a variety of physical health issues, including heart disease, obesity, diabetes, Alzheimer’s disease, and other chronic disorders. In the case of heart disease, hypertension, and diabetes, depression may accelerate the progression of the disease.
Having depression can even make it more difficult to treat other medical illnesses because the lack of motivation and energy associated with depression makes it more difficult for patients to comply with their treatment regimens.
What Is Your HCP Noting During Check-Ups?
Achieving remission can be a long, hard road for people who suffer from clinical depression. The APA guide reports that most treatments take approximately four to eight weeks before a healthcare provider can determine whether the treatment is effective.
Your doctors goal is to see a reduction in the severity of your depression symptoms. Specifically, your healthcare provider may use the following terms to discuss depression treatment goals:
- Symptom Improvement: Any change in your HAM-D17 score
- Response: A decrease of 50% or more in the HAM-D17 score.
- Remission: A HAM-D17 score that has decreased to 7 or below. This is the ultimate goal for your healthcare provider and you (the short sketch below restates these thresholds).
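For readers who want to see the arithmetic, here is a minimal sketch that restates the two thresholds above in code. It is purely illustrative, not a clinical tool, and the function name is ours.

```python
def classify_outcome(baseline_score, current_score):
    """Label a HAM-D17 result using the definitions above (illustrative only)."""
    if current_score <= 7:
        return "remission"            # score of 7 or below
    if current_score <= baseline_score * 0.5:
        return "response"             # a decrease of 50% or more from baseline
    if current_score < baseline_score:
        return "symptom improvement"  # any decrease in the score
    return "no improvement"

print(classify_outcome(24, 6))    # remission
print(classify_outcome(24, 11))   # response
print(classify_outcome(24, 20))   # symptom improvement
```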
Sometimes, though, the decided-upon treatments do not provide the relief that you need. The STAR*D trial, which evaluated depression treatment, found that a trial-and-error approach to treating depression can be slow. Fewer than 40% of the participants achieved depression remission after several weeks of the first medication protocol.
The APA treatment guide suggests that treatment-resistant depression may require different and/or more intense options. Some of the treatment options could include Electroconvulsive therapy , transcranial magnetic stimulation, or vagus nerve stimulation.
Alcohol Tobacco And Other Drugs
Misusing alcohol, tobacco, and other drugs can have both immediate and long-term health effects.
The misuse and abuse of alcohol, tobacco, illicit drugs, and prescription medications affect the health and well-being of millions of Americans. SAMHSAs 2020 National Survey on Drug Use and Health reports that approximately 19.3 million people aged 18 or older had a substance use disorder in the past year.
Key Points About Depression
Depression is a serious mood disorder that affects your whole body including your mood and thoughts.
Its caused by a chemical imbalance in the brain. Some types of depression seem to run in families.
Depression causes ongoing, extreme feelings of sadness, helplessness, hopeless, and irritability. These feelings are usually a noticeable change from whats normal for you, and they last for more than two weeks.
Depression may be diagnosed after a careful psychiatric exam and medical history done by a mental health professional.
Depression is most often treated with medicine or therapy, or a combination of both.
How To Manage Depression
Even with medication and therapy, you might still experience some symptoms of recurring depression. This is especially true if you have a major depressive disorder, which is a condition that may return throughout your life. Below are ways to cope with depression long-term. Your Mercy therapist can also help you find ways to manage depression in a healthy way.
- Join a support group
- Schedule fun activities into your day, even if you dont feel like it
- Try to get 8 hours of sleep, which is not too much and not too little
- Try to get exercise and sunlight daily
- Know your triggers and develop a list of actions or activities that can quickly boost your mood
Are There Physical Signs Of Depression
Yes. In fact, a great many people with depression come to their doctor first with only physical issues. You might notice:
- Back pain
- Gut problems
- Constant tiredness
- Sleep problems
- Slowing of physical movement and thinking
You might notice these symptoms and signs even before you notice the mental health symptoms of depression, or you might notice them at the same time. Your doctor can help you figure out the source of your symptoms.
What If My Symptoms Dont Improve
If youre not responding to treatment, you may live with treatment resistant depression. This is when your symptoms have not improved after at least 2 standard treatments. This can also be known as treatment-refractory depression.
There is currently no official criteria used to diagnose treatment resistant depression.
What treatment is available for treatment-resistant depression?There are treatment options for treatment resistant depression. Even if antidepressants have not worked already for you, your doctor may suggest a different antidepressant from a different class.
The new antidepressant you are offered will depend on the first antidepressant you were given.
Sometimes your doctor can prescribe a second type of medication to go with your antidepressant. This can sometimes help the antidepressant work better than it does by itself.
Where antidepressants have not worked, your doctor may suggest talking therapies, ECT or brain stimulation treatments. See the previous section for more information on these.
What is an implanted vagus nerve stimulator, and how is it used in treatment resistant depression?If you live with treatment resistant depression, and youve not responded to other treatments, you may be able ask for an implanted vagus nerve stimulator.
Please speak to your doctor if youre interested in this treatment and for more information. You may be able to get this treatment funded through an Individual Funding Request.
For more information about your rights, see the NHS website.
Can You Inherit Depression
Genetic factors do play a role in depression, but so do biological, environmental, and psychological factors.2 Unipolar depression is less likely to be inherited than Bipolar disorder , says Steven Hollon, PhD, of Brentwood, Tennessee, a professor of psychology at Vanderbilt University.
While depression does tend to run in families, just because a family member has depression does not mean you are going to get it, says Rudy Nydegger, PhD, Professor Emeritus of psychology and management at Union College and chief in the Division of Psychology at Ellis Hospital, both in Schenectady, New York. It is not a simple gene thing, he says. And the important thing is not so much why a person has depression but what are we going to do to help them.
Conditions That May Be Misdiagnosed As Depression
Depression has a complex relationship to other chronic illnesses. Sometimes, when a patient goes to their doctor with undiagnosed symptoms, if the doctor cant figure out exactly whats causing them or if their symptoms overlap with the symptoms of depression they may diagnose their patient with depression even though there is actually another medical issue.
At the same time, having a chronic illness can, understandably, lead to depression, or a person may have depression independent of other health conditions. If you do have depression, its important for you to be diagnosed and treated for it. But if thats not the cause of your other symptoms, yet youre consistently told depression is to blame, you may not get the proper diagnosis or care you need.
To raise awareness of a few conditions that may be misdiagnosed, we asked our Mighty community to share a medical condition they have that was misdiagnosed as depression. Whether you have depression, a separate medical condition, or a condition that causes depression, know that you deserve to have the right diagnosis and treatment, and to have a medical team fighting to get to the bottom of your health challenges.
What Can I Do If I Have Depression
If you have symptoms of depression, see your healthcare provider. They can give you an accurate diagnosis, refer you to a specialist or suggest treatment options.
If you or someone you know is thinking of hurting themselves or taking their own life:
- Go to the emergency department of your hospital.
- Contact a healthcare provider.
- Speak to a trusted friend, family member or spiritual leader.
A note from Cleveland Clinic
Depression is a common condition that affects millions of Americans every year. Anyone can experience depression even if there doesnt seem to be a reason for it. Causes of depression include difficulties in life, brain chemistry abnormalities, some medications and physical conditions. The good news is that depression is treatable. If you have symptoms of depression, talk to your healthcare provider. The sooner you get help, the sooner you can feel better
Last reviewed by a Cleveland Clinic medical professional on 12/31/2020.
How Do I Know When To Seek Help
The biggest hurdle to diagnosing and treating depression is recognizing that someone has it. Unfortunately, about half of the people who have depression never get diagnosed or treated. And not getting treatment can be life threatening: More than 10% of people who have depression take their own lives.
- When depression is hurting your life, such as causing trouble with relationships, work issues, or family disputes, and there isn’t a clear solution to these problems, you should seek help to keep things from getting worse, especially if these feelings last for any length of time.
- If you or someone you know is having suicidal thoughts or feelings, seek help right away.
Working With A Mental Health Professional Or Treatment Program
Once the diagnosis is made, the professional can then advise the individual on treatment for the condition. Depending on the type of condition and severity, there are several general treatment types that may be prescribed, either alone or in combination, as described by the American Association of Community Psychiatrists:
- One-on-one work with a counselor or psychiatrist
- Outpatient treatment through a treatment program
- Inpatient treatment through a rehab or treatment center
- Hospitalization or other emergency measures
The care plan will be determined by the treatment professional based on the severity of the disorder, how well the individual is able to function, the expected potential for recovery or relapse risk, the persons living environment and its ability to support recovery, and the individuals safety risk, among other factors.
Depression And Suicide: Getting Help In A Crisis
Some people who are depressed may think about hurting themselves or committing suicide . If you or someone you know is having thoughts about hurting themselves or committing suicide please seek immediate help. The following resources can help:
- Call the National Suicide Prevention Lifeline to reach a 24-hour crisis center, or dial 911. The Lifeline provides free, confidential help to people in crisis and is run by the Substance Abuse and Mental Health Services Administration.
- Get help from your primary doctor or other health care provider.
- Reach out to a close friend or loved one.
- Contact a minister, spiritual leader, or someone else in your faith community.
Pregnant And Postpartum Women
The prevalence of depression in the postpartum period has been estimated at 10%.26,27 The onset of postpartum depression occurs during the prenatal and antepartum periods in approximately 50% of pregnancies.28 For cases that begin after delivery, roughly 90% occur in the first four months.29
Postpartum depression has significant effects on the entire family. It is associated with abnormal development, cognitive impairment, and psychopathology in children.30 It can interfere with breastfeeding, maternal-infant bonding, and the mother’s relationship with her partner.26 It is often overlooked and may be mistaken for normal behavioral changes that occur during this period, known as the postpartum blues. Postpartum depression is not listed as its own diagnosis in the Diagnostic and Statistical Manual of Mental Disorders, 5th ed. , but rather as a qualifier to the diagnosis of major depressive disorder.31
The USPSTF, American Academy of Family Physicians, and American College of Obstetricians and Gynecologists recommend screening all postpartum women for depression.16,17,23,32,33 Patients should be screened for depression at least once during the perinatal period. Evidence supports the use of the PHQ-2, PHQ-9, or Edinburgh Postnatal Depression Scale .33 The Postpartum Depression Screening Scale is a more in-depth tool however, it requires additional time to administer with more than 20 questions, limiting its use during routine outpatient office visits.
What Are The Symptoms Of Depression And How Is It Diagnosed
The NHS recommends that you should see your GP if you experience symptoms of depression for most of the day, every day, for more than 2 weeks.
Doctors make decisions about diagnosis based on manuals. The manual used by NHS doctors is the International Classification of Diseases .
When you see a doctor they will look for the symptoms that are set out in the ICD-10 guidance. You do not have to have all of these to be diagnosed with depression. You might have just experience some of them.
Some symptoms of depression are:
- low mood, feeling sad, irritable or angry,
- having less energy to do certain things,
- losing interest or enjoyment in activities you used to enjoy,
- reduced concentration,
You may also find that with low mood you:
- feel less pleasure from things,
- feel more agitated,
- find your thoughts and movements slow down, and
- have thoughts of self-harm or suicide.
Your doctor should also ask about any possible causes of depression. For example, they may want to find out if youve experienced anything traumatic recently which could be making you feel this way.
There are no physical tests for depression. But the doctors may do some tests to check if you have any physical problems. For example, an underactive thyroid can cause depression.
On the NHS website, they have a self-assessment test which can help you to assess whether you are living with depression: www.nhs.uk/mental-health/conditions/clinical-depression/overview/
What Is A Person Who Cries A Lot Called
Definitions of crybaby. a person given to excessive complaints and crying and whining. synonyms: bellyacher, complainer, grumbler, moaner, sniveller, squawker, whiner. types: kvetch. a constant complainer.
A clinically depressed person needs support and diversion, but too many demands can increase feelings of failure and exhaustion. For example, some physicians have reported success in adding bupropion to SSRIs to improve sexual function, an option that may be appropriate if the person is on a high dose of an SSRI.
How Can My Healthcare Provider Tell Whether I Am Sad Or Depressed
Throughout life, people face many situations that result in feelings of sadness or grief: death of a loved one, loss of a job, or the ending of a relationship. Your healthcare provider, during your appointment, will likely have an unstructured conversation with you to figure out whether you might be clinically depressed or whether you are struggling with a temporary sadness that is not depression.
While depression shares some characteristics with grief and sadness, they are not the same. Typically, people experiencing grief will feel overwhelming sad feelings in waves, according to the American Psychiatric Association. In the case of grief, self-esteem is usually maintained.
With Major Depressive Disorder , the painful emotions tend to persist without much relief and often are paired with feelings of worthlessness and self-loathing. The National Institutes of Health writes that Major Depressive Disorder causes severe symptoms that affect how you feel, think, and handle daily activities, such as sleeping, eating, or working. These symptoms must be present for at least two weeks in order to be diagnosed with depression.
What Gender Is More Depressed
About twice as many women as men experience depression. Several factors may increase a womans risk of depression. Women are nearly twice as likely as men to be diagnosed with depression. Depression can occur at any age.
Vagus nerve stimulation involves implanting a pacemaker-like device that generates pulses of electrical energy to stimulate the vagus nerve. The vagus nerve is one of the 12 cranial nerves, the paired nerves that connect to the undersurface of the brain and relay information to and from it.
What Does The Doctor Look For To Make A Depression Diagnosis
A doctor can rule out other conditions that may cause depression with a physical examination, a personal interview, and lab tests. The doctor will also do a complete diagnostic evaluation, discussing any family history of depression or other mental illness.
Your doctor will evaluate your symptoms, including how long you’ve had them, when they started, and how they were treated. Theyâll ask about the way you feel, including whether you have any symptoms of depression such as:
- Sadness or depressed mood most of the day or almost every day
- Loss of enjoyment in things that were once pleasurable
- Major change in weight or appetite
- Insomnia or excessive sleep almost every day
- Physical restlessness or sense of being run-down that others can notice
- Fatigue or loss of energy almost every day
- Feelings of hopelessness or worthlessness or excessive guilt almost every day
- Problems with concentration or making decisions almost every day
- Recurring thoughts of death or suicide, suicide plan, or suicide attempt
How to See Coordinate Points on Maps – You are probably already familiar with the Google Maps application, since it is installed on most smartphones.
Google Maps is a digital map application that serves the same purpose as a conventional map, but in a more compact form and with features a paper map cannot offer.
One feature that stands out is the ability to share the location of a place in the form of coordinates.
This makes it easy for users to point to an exact spot anywhere in the world.
However, it turns out that there are still people who don’t know how to see the coordinates on Google Maps, so many are looking for information related to this.
But now you don’t need to bother looking for that matter anymore, because you are in the right article.
Later in this article we will explain to readers about how to see the coordinates on Google Maps.
Those who really want to know about how to see coordinates on Maps, please refer to the review below until it’s finished so you can get detailed information.
A Glance About Google Maps Coordinate Points
Before we explain about our theme, namely how to see coordinates on Maps, we will first describe an explanation of Google Maps coordinates so you can better understand how to use coordinates on Google Maps.
A coordinate point describes the position of a place in terms of the vertical and horizontal reference lines that we usually see on a map.
There are two common types of coordinates: geographic coordinates and UTM coordinates. Geographic coordinates are usually written in degrees (°), minutes ('), and seconds (").
These reference lines form the basis of geographic coordinates, while UTM coordinates are written as X and Y values, with the X axis running from west to east and the Y axis from south to north.
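If you ever need to convert between the two systems, a projection library such as pyproj can do the math. The sketch below is only an illustration; it assumes WGS84 geographic coordinates (EPSG:4326) and UTM zone 31N (EPSG:32631), the zone that contains the Barcelona coordinates used as format examples later in this article.

```python
from pyproj import Transformer

# Geographic latitude/longitude (EPSG:4326) to UTM zone 31N (EPSG:32631).
# always_xy=True means coordinates are passed and returned as (x, y) = (lon, lat).
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32631", always_xy=True)

latitude, longitude = 41.40338, 2.17403   # a point in Barcelona
easting, northing = to_utm.transform(longitude, latitude)

print(f"UTM 31N easting:  {easting:.1f} m")
print(f"UTM 31N northing: {northing:.1f} m")
```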
Google Maps as a navigation service in general uses a geographic coordinate system in its services.
This helps users navigate to a location: a geographic coordinate system can indicate the position of any place on Earth using a short string of letters, numbers, and symbols.
Positions can be described in several ways within this system; in general, as in Google Maps, they are shown as latitude, longitude, and sometimes the elevation of a place.
Although you can share a location on Google Maps using a link, you can also share it as raw coordinates, which is often more precise.
Google Maps supports three coordinate formats: DMS (Degrees, Minutes, and Seconds), for example 41°24'12.2"N 2°10'26.5"E; DMM (Degrees and Decimal Minutes), such as 41 24.2028, 2 10.4418; and DD (Decimal Degrees), such as 41.40338, 2.17403.
And in order to be successful in finding a location using the coordinate method, you must follow several formats for writing coordinates according to Google Maps standards, including the following:
- First, use the degree symbol (°) rather than the letter "d".
- Second, use a period (.) as the decimal separator, rather than a comma.
- Third, list the latitude coordinate first, followed by the longitude coordinate (the short sketch after this list shows these rules applied in code).
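To see how the formats relate, the small sketch below converts a DMS reading into decimal degrees and prints it latitude-first with a period as the decimal separator, following the rules above. It is a simplified illustration (real DMS strings need more careful parsing), and the function name is ours, not part of any Google service.

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds to decimal degrees (DD)."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal-degree notation.
    return -value if hemisphere in ("S", "W") else value

# The DMS example used above: 41°24'12.2"N 2°10'26.5"E
latitude = dms_to_decimal(41, 24, 12.2, "N")
longitude = dms_to_decimal(2, 10, 26.5, "E")

# Latitude first, period as the decimal separator.
print(f"{latitude:.5f}, {longitude:.5f}")
# Prints roughly 41.40339, 2.17403, matching the DD example above up to rounding.
```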
3 Ways to See Coordinate Points on Maps
Now that you have an idea of how coordinates work on Google Maps, we can move on to the main topic of this article.
Below, we explain how to view coordinates on Maps using an Android phone, an iPhone or iPad, and a PC.
1. How to See Coordinate Points on Maps Using Android
First, we will explain how to see coordinates on Google Maps using an Android phone.
As we know that usually Android-based cellphones are equipped with the Google Maps application in it.
If you do use an Android cellphone, then you can use it to see coordinates on Google Maps.
On Android, you only need to turn on GPS and have a stable internet connection so that the location you want to share can be pinpointed accurately.
And below are the steps for how to see coordinates on Maps using an Android cellphone, namely as follows:
- First, please open the Google Maps application on your Android phone (Download the Google Maps Application for Android).
- Then please find the place you want to share.
- Then tap and hold that point for a moment until a red location pin appears, along with details about the spot, commonly called a dropped pin, usually shown at the bottom of the screen.
- Once the dropped pin appears, you will see the coordinates of the location in the search field at the top of the Google Maps page.
- If you want to save the coordinates, please copy them, then share the points directly.
2. How to See Coordinate Points on Maps Using iPhone
Next, we will explain how to see coordinates on Google Maps using an iPhone.
Immediately, here are the steps that must be implemented for how to see coordinates on Maps using an iPhone, as follows:
- The first step please open the Google Maps application, then you run the application (Download the Google Maps App for iPhone).
- Then please put the location pin at the point you want on the map.
- Next, please touch and hold for a few moments at a certain position on the map to save the location pin, then you share the marked location via Messages.
- After that, please tap the "Dropped pin" tab at the bottom of the screen, then select "Share".
- Then you will see many options for sharing location points, and sharing them through the Messages menu is the fastest way to get coordinates.
- The next step, please select “Recipient of the message”, then you press “Send”, in this case you can share it with yourself or with friends to see the coordinates of the latitude and longitude.
- Then you will get a message containing the location point that you shared earlier, please open the message.
- Then tap the Google Maps link; it appears in the message after the location address and starts with "goo.gl/maps".
- Please look for latitude and longitude coordinates, because the link will run Google Maps and display the coordinates of these points at the top and bottom of the screen.
- Usually the latitude coordinates in coordinate pairs are displayed first.
Please note that there are two ways to create a location pin: the first is by typing the name of the location or place of interest into the search field.
The second is by using a finger to navigate the map and find the desired location.
3. How to See Coordinate Points on Maps Using a Laptop
Finally, we will explain how to see coordinates on Google Maps using a laptop.
The steps for how to see coordinates on Maps using a laptop device are as follows:
- First of all, please look for the address or location point you want using Google Maps (Download Google Maps for PC).
- If you have found the location you are looking for, please zoom in or out on the map to find the geographic location.
- Then place a location pin by clicking exactly on the point whose coordinates you want to know; once the pin is placed, the latitude and longitude become part of the URL in the address bar.
- Next, right-click on the location pin and select "What's here?" (on a Mac, hold down the CTRL key while clicking). To save time, you can also right-click directly on any location on the map without placing a pin first (a short sketch after this list shows how to turn coordinates back into a shareable link).
- Then you will get latitude and longitude coordinates, and these coordinates will be listed in a box that appears at the bottom of the computer screen, the latitude coordinates will appear first.
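Going the other direction, from a pair of decimal coordinates to something you can open or send, is just string formatting. The snippet below builds a link of the form https://www.google.com/maps?q=latitude,longitude; this is a commonly used URL pattern rather than a documented guarantee, so treat it as an illustration.

```python
def maps_link(latitude, longitude):
    """Build a shareable Google Maps search link from decimal-degree coordinates."""
    # Latitude first, then longitude, separated by a comma with no spaces.
    return f"https://www.google.com/maps?q={latitude:.6f},{longitude:.6f}"

print(maps_link(41.40338, 2.17403))
# https://www.google.com/maps?q=41.403380,2.174030
```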
The final word
That’s what we can explain in this article about how to see coordinates on maps through various devices, starting from Android cellphones, iPhone cellphones, and laptops or computers.
We hope that the information we provide in this article will help readers who previously had difficulty finding coordinates on Google Maps, so now they can do it easily with this article.
That’s all and thank you for visiting our article, and if what we discuss in this article is important for other people to know too, then you can help share this information. | 1 | 2 |
The ways in which instructors use technology in the classroom impact instructional methods, student learning, and/or the development of curricular goals via replacement, amplification, or transformation of existing lessons and activities (Hughes, et al., 2006). The Replacement, Amplification, Transformation (RAT) model (Hughes, 1998) identifies the primary purposes for technology integration. Figure 1 defines these technology-use purposes.
Figure 1. Artwork depicting the RAT framework.
Originally developed as a research tool to study the "nature of technology-supported practices teachers developed and implemented in their teaching" (Hughes, 2022), the tool was later developed for use as an instructor self-assessment framework for critically determining how an instructor's use of technology best served themselves and their students "as a means to some pedagogical and curricular end" (Hughes, Thomas, & Scharber (2006). The RAT framework is organized around three themes and dimensions outlined in the RAT Question Guide (Hughes, 2022). In this guide, Hughes also proposes ways to consider these themes and dimensions at the school/district level. Themes include instructional methods, student learning, and curriculum goals (Hughes, et al., 2006). Each of the themes is further broken down into dimensions (see Figure 2).
Technology Use Impact Themes and Dimensions of the RAT Framework
Table 1 and the following discussion focus on an example of a grammar lesson that might be taught in an elementary classroom by purpose. The more impact the technology use has on the three dimensions (instructional methods, student learning, and/or curriculum goal development), the more likely the use is transformative for that particular instructor and their learners.
Table 1. RAT Framework: Examples by Purpose
Original Activity: Students use highlighters in different colors to mark parts of speech on a worksheet printed from the teacher’s computer files. Students might exchange papers for grading purposes.
| Purpose | Example |
| --- | --- |
| Replacement | This is replaced by having students use the built-in highlighter tool in Google Docs, Microsoft Word, or some other related app to identify different parts of speech (Hughes, et al., 2006). |
| Amplification | Allow students to use built-in tools in Google Docs to help define unknown words, or new vocabulary, and identify the parts of speech in use. Further, have them create their own sentences and use the tools to make sure they are writing complete sentences using all the desired parts of speech. Using commenting, or Track Changes (MS Word) or Suggestions (Google Docs), students can engage in peer review asynchronously or synchronously. |
| Transformative | After learning about the parts of speech, have students demonstrate their knowledge by creating a game in PowerPoint or a printable worksheet in Google Docs or some other game development tool. For example, they could create a sentence builder activity using images or a Jeopardy round, using PowerPoint templates. They must include an answer key. Students play each other's game and evaluate the game for accuracy. Imagine how exciting this might be if they were exchanging their games with other students from other schools around the nation, or even the world. |
Using the example from Table 1, as a replacement, technology moves the non-digital instructional methods, objectives, and ungraded or graded activities to an internet-based format. The use of a digital document as replacement for a printed document, which still asks students to highlight the parts of speech in different colors, does not change how the educator teaches, how/what the students learn, or the previously established curriculum goals.
As amplification, technology enhances or makes more efficient the instructional methods, the student learning processes, and/or the curriculum goals. For example, Hughes et al. (2006) describe a teacher who created tests, handouts, and other documents in a word processing application in the early days of technology use in the classroom as opposed to using handwritten or typewritten documents. This act served as amplification, according to the teacher’s self-assessment, because it created an archive that she could later modify without having to recreate the whole document. In the early days of migration from workbooks and mimeo copies to digital files stored on computers, this would have been revolutionary. Although it did not enhance student learning or curriculum goals, this act significantly enhanced instructional preparation, making this use of technology an example of amplification.
In the Table 1 example of amplification, students are still identifying parts of speech, but they are using technology to help identify words that may not already be familiar with, such as vocabulary terms from new content being learned, and the tools allow the students to create their own sentences containing all the proper parts of speech. The technology use also changes curriculum goals by moving beyond parts of speech identification into application and evaluation of that knowledge and enhances the learning process for students by making the experience more student-centered and relevant. Moreover, the technology use amplifies the student learning process by expanding student interaction and knowledge exchange with each other within a space designed for back-and-forth dialogue around the application of learned skills. Finally, the use of commenting and editing tools in either synchronous or asynchronous modes allows for increased efficiency in peer review.
For transformative technology integration, the technology must significantly change any of the identified dimensions within the educator’s instructional methods, the students’ learning processes, and/or curriculum goals. In Table 1, the example of transformative technology involved having students use technology to create a game and learn from each other or from students in other classrooms as they played and evaluated the accuracy of each other's creations. This changes all three themes of teaching and learning and various dimensions within those themes in the following ways:
- Instructional methods: Primarily, the instructor’s method of assessing the students’ knowledge has changed. Rather than a multiple choice test or grading highlighted parts of speech, the instructor is now assessing a product students have created to apply their knowledge of parts of speech.
- Student learning: The learning process for students has been transformed and made more rigorous. They have moved beyond identifying parts of speech and are now creating artifacts that rely on their knowledge for success. Students are more motivated, and their cognitive load is increased. If working with others, they are also increasing their collaboration skills.
- Curriculum goals: Creating a game or activity that relies on knowledge of the parts of speech requires students to use higher-level cognitive skills rather than simply being able to identify parts of speech. This means that students can identify the parts of speech, define their purpose, apply them appropriately, and evaluate their use and application by others.
The RAT framework was created to help educators develop technology-integrated lessons and to assess the worthwhile use of the chosen technology (Hughes et al., 2006). Originally developed for K12 preservice and in-service teachers and later applied to K12 school administrators at a programmatic level, the RAT model has also been implemented in higher education. For example, Billingsley, Smith, Smith, and Meritt (2019) used the RAT framework as a lens to conduct a systematic literature review of immersive virtual reality (VR) used in teacher preparation programs to help address today's field placement limitations. Specifically, the authors looked at studies that explored the potential revolutionary use of immersive VR in teacher education as a training tool to learn about specific concepts, develop classroom and behavioral management skills, engage in role-playing scenarios or simulations, etc. They explained their rationale for using the RAT framework as follows:
By knowing the extent to which VR has been previously utilized, whether the technology replaced, enhanced, or transformed learning, teacher educators can decide whether these virtual experiences, indeed, broaden teacher candidates’ learning experiences and justify the resource commitment (Billingsley et al., 2019).
In another example, Dang, Smidt, Schumann, Funke, and Magassouba (2012) used the RAT framework as a lens to identify technology-use purposes related to the affordances found in a CMS/LMS system. Although nine professors and their courses were studied, the paper focused on one professor and his use of technology tools within the LMS for his online graduate-level education course. Overall, increased efficiency made using the LMS an amplification of typical physical classroom practices. When broken down by specific tools used within the LMS, some were identified as replacement, while others were marked as amplification.
- Replacement: The Survey tool used to gather student feedback about the course simply replaced a traditional printed survey.
- Replacement: The News and E-mail tools were also identified as replacement as they provided general feedback and course study guides and encouraged participation.
- Amplification: The Quizzes tool was used to create random selections of 30 questions from a 100-200 item question bank, which could be timed and retaken. Furthermore, students had access to their notes, which allowed for use of the quizzes for both assessment and as a guided learning tool.
- Amplification: The inclusion of 10-minute, instructor-made videos within the LMS that chunked the professor's typical classroom lectures into manageable segments for the students, which they could stop and review or rewatch later, was, as noted by the authors, "...the essence of amplification" (Dang et al., 2012).
- Amplification: The use of Dropbox and Gradebook tools made turning in assignments and grading more efficient.
While no transformative purposes were identified, the potential existed for the professor's use of several tools. The authors identified three tools with current amplification purposes and the potential for transformative use: (a) the professor's minimalist use of the Content tool that helped students in the flow of learning new material; (b) the structure, both small group and whole class, within the Discussion tool that helped to create "content-rich and student-driven discussions" (Dang et al., 2012); and (c) the purposefully-designed use of small-group chat.
Although transformative technology use often elevates the learning experiences of students and helps to engage higher-level cognitive thinking skills, the RAT framework does not suggest that all classroom technology use must be transformative, nor that it is a level of technological use to be achieved as part of a sequential technological improvement plan. In fact, there are times when instructors may purposefully decide not to make their lessons transformative due to time constraints, technology access barriers, or misalignment with school/district scope and sequence plans. Furthermore, transformative technology use as defined in the RAT model is subjective and, in the case of teacher self-assessment, a personally-determined attribute—meaning that what might be transformative use for one instructor, their students, and/or their curriculum goals may not be considered transformative for others. In RAT, transformative use of technology is not synonymous with the use of revolutionary technologies or the use of the latest technology tools and trends, unless it happens to support transformative teaching and learning and/or development and achievement of transformative curricular goals. Rather, technology integration should be a purposeful, planned event with the benefits and drawbacks of its use fully realized and understood.
Billingsley, G., Smith, S., Smith, S., & Meritt, J. (2019). A systematic literature review of using immersive virtual reality technology in teacher education. Journal of Interactive Learning Research, 30(1), 65-90. Retrieved from https://edtechbooks.org/-BWTV.
Dang, T. N., Smidt, E., Schumann, J., Funke, L., & Magassouba, Y. (2012). Analyzing CMS affordances through the RAT framework for online language teacher education: A case study. Global Science and Technology Forum. Retrieved from https://edtechbooks.org/-kRSG.
Hughes, J. E. (2000). Teaching English with technology: Exploring teacher learning and practice. Ph.D. thesis, Michigan State University. Retrieved from https://edtechbooks.org/-MzF.
Hughes, J. E. (n.d.). Replacement, amplification, and transformation: The R.A.T. model. Techedges. https://edtechbooks.org/-tkk.
Hughes, J., Thomas, R., & Scharber, C. (2006, March). Assessing technology integration: The RAT–replacement, amplification, and transformation-framework. In Society for Information Technology & Teacher Education International Conference (pp. 1616-1620). Association for the Advancement of Computing in Education (AACE). Retrieved from https://edtechbooks.org/-JXrh.
Hughes, J. E. (2022). R.A.T. Question Guide. Techedges.org. Retrieved 6/6/22 from: https://edtechbooks.org/-MJei.
The RAT Technology Integration Model
Suggested Citation: (2022). RAT: The RAT Technology Integration Model. EdTechnica: The Open Encyclopedia of Educational Technology. https://edtechbooks.org/encyclopedia/rat
CC BY: This work is released under a CC BY license, which means that you are free to do with it as you please as long as you properly attribute it.
COVID-19 and Children
Is there any published data about pediatric patients with COVID-19?
A very important study of 2,143 pediatric patients who tested positive for COVID-19 in China was recently published in the journal Pediatrics.
Can children get COVID-19?
Children of all ages can get COVID-19. Boys and girls are equally likely to get it. The symptoms of COVID-19 are similar in children and adults, but children tend to have milder cases or no symptoms at all. Children with symptoms tend to have fever, runny nose, cough and sometimes vomiting and diarrhea (CDC). Because children with COVID-19 often have no symptoms, they may play a major role in spreading the virus.
What symptoms do children experience?
In a study of over 2000 pediatric patients that tested positive for COVID-19 in China, more than 90% of the children either had no symptoms or mild/moderate cases.
Are children in general at risk of becoming very sick or dying from COVID-19?
Based on what we know so far, children are much less likely than adults to get severely ill from COVID-19. In a study of over 2000 pediatric patients that tested positive for COVID-19 in China, only one child died. Most cases were mild, with far fewer severe and critical cases in children (5.9%) compared to adults (18.5%). Very few had difficulty breathing or low blood oxygen levels (0.5%). Very few experienced ARDS or multiorgan system dysfunction (0.6%).
Why does it seem that children do better with the virus?
We do not fully understand why children do not get as sick as adults. The data that is emerging from China, Italy, Europe and Seattle are reassuring. Children are not being admitted to the hospital or developing severe disease nearly as often as adults.
Are children with transplants at higher risk?
There is still much more to be learned about how the disease impacts children, especially those with underlying medical conditions. For children with transplants, we do not have specific information on whether COVID-19 infection will be more severe. However, other viruses often cause more severe disease in people whose immune system is low, such as transplant recipients. The limited existing reports in the pediatric transplant population provide some cautious optimism that transplant children have a similar disease course as non-transplant children.
Are there any pediatric transplant cases that we know of?
There are very few reported cases in pediatric transplant recipients. But it is still too early to know for sure if children with transplants are at increased risk for severe disease. There are 3 reported cases of children with transplants in Italy who tested positive for COVID-19. None of those children developed lung disease. At least one very young child in the United States required ICU care but this is not yet published. It is unclear whether the small number of cases reported is because they are not severely affected, or because the families have been more rigorous about self-isolation.
A study of 87 mostly adult heart transplant recipients in China found that prevention and quarantine efforts led to a low rate of COVID-19. Even though most resided in Hubei and many had recently traveled to Wuhan, the center of the COVID-19 outbreak, all cases were mild, thanks especially to prevention and quarantine efforts such as:
- Self-quarantining at home for more than 1 week
- Wearing a mask
- Washing hands
- Monitoring body temperature and symptoms daily
Until we know more, we strongly recommend good hand hygiene and social distancing for the families of children with transplants. Teach your children to become health heroes by reading our “Wanna be a Health Hero” handout!
Are there any current cases of pediatric transplant recipients with COVID-19 that show clinical outcomes thus far?
There are very few reported cases of COVID-19 in pediatric transplant recipients, and it is still too early to know for sure if children with transplants are at increased risk for severe disease. There are isolated case reports from North America about transplanted children with other comorbidities who have been infected. Outcome reports are not yet available from those cases. A widely circulated study of 700 pediatric liver transplant patients in Bergamo, a city located in the “red zone” of the Italian outbreak of COVID-19, was recently published in the journal Liver Transplant. The 700 children were in various stages after liver transplant, including three patients who received their transplant within the last two months, ten currently in-patients, 100 with autoimmune liver disease, and three under chemotherapy for hepatoblastoma (inpatients). Although the city of Bergamo was experiencing an extremely high incidence of COVID-19, only three of the 700 children tested positive for COVID-19, and none developed severe symptoms or pneumonia. The researchers thus concluded that “available data on Coronavirus past and present outbreaks suggest that immunosuppressed patients are not at increased risk of severe pulmonary disease compared to the general population.”
Do we know how COVID-19 affects the kidneys?
For patients who get extremely sick from COVID-19 and need a ventilator in the ICU, serious kidney problems can occur, often because of the extreme severity of the effects on the lungs/heart, which then affects the kidneys. Some published research suggests that COVID-19 can directly harm the kidney(s), which may explain the increased frequency of kidney dysfunction in COVID-19 infection compared to infections by other respiratory viruses. There is also evidence that muscle damage (rhabdomyolysis) in kidney transplant recipients infected with COVID-19 may lead to kidney damage. Again, these impacts are understood within adult COVID-19 populations, and the limited existing research on pediatric transplant recipients suggests that severe cases are rare.
Can COVID-19 cause long-term damage to children’s lungs?
Because COVID-19 is such a new illness, nobody knows much about the long-term effects of the disease. However, we know that kids tend to be less seriously affected than adults, and that the long-term effects would probably be the greatest among those who get the sickest.
How do children spread COVID-19?
Since most children have no symptoms or mild cases of COVID-19, children may play a major role in spreading COVID-19. Sneezes, coughs, and poop can spread COVID-19. It’s important to know that children who are not toilet-trained could possibly spread the disease to people who change their diapers.
How to Protect Yourself and Your Family
What should I do to keep my child and family safe?
As the parent or caregiver of a child with a transplant, your family should already be careful about protecting yourselves from germs and infections. It’s important to continue healthy habits and take extra precautions.
Encourage your family to be health heroes!
- Ensure your child takes all of their normal medicines. Because pharmacies might get crowded during this time, make sure you have 4 weeks of medication on hand.
- Please use mail-order pharmacies, if possible. You can also contact your local pharmacist to discuss and understand the local supply for medications.
- Stay at home to slow down the spread of the coronavirus. If you need to leave your home, use “social distancing.” That means keeping more distance—at least 6 feet—between yourself and other people. When you get back home, change your clothes and wash your hands.
- Don’t touch your face or rub your eyes.
- Wash your hands for 20 seconds while singing “Happy Birthday!”
- Encourage your family to be more careful than normal to prevent unnecessary injuries and doctor visits.
Why social distancing?
Social distancing is a very important way to slow down the spread of the coronavirus and “flatten the curve.” We recommend staying home. If you must leave your home, use “social distancing.” That means keeping more distance—at least 6 feet—between yourself and others.
What is “flattening the curve”?
If too many people get sick with COVID-19 too quickly, the healthcare system will not have enough medical staff, hospital beds, or equipment to properly care for them or others with different medical issues. Flattening the curve means reducing the number of people who get sick at the same time. This is the way we make sure that the healthcare system has enough capacity to care for sick people. We can’t stop the coronavirus, but we can slow it from spreading too fast. Flattening the curve will help our healthcare system deal with the strain of the outbreak.
If schools open in 2 weeks, should we send our children back?
Until there is a better understanding of how immunocompromised children are affected by COVID-19, we recommend that all children with transplants avoid large gatherings, including school, if there is an ongoing outbreak in your community. We recommend that other children living in your home also stay away from school and other group settings. Since children overall are more likely to have no symptoms or mild symptoms, they could transmit it to others unknowingly and potentially place immunocompromised children at greater risk of getting COVID-19.
How much medication/prescriptions should we keep on hand?
If possible have at least 4 weeks of your medications (if insurance allows) remaining at all times. Check to see if your insurance will allow for a 90-day supply. Try having medications mailed to your home, delivered or picked up by a caregiver so your family can avoid crowded places.
Should parents wear masks and gloves if still going to work?
We would encourage you to stay at home in self-isolation as much as possible. If you are going to be outside and able to maintain a 6-foot distance from other people, whether to use a mask depends on where you live and the recommendation of your team. With the shortage of masks it may not be possible. Masks are also not as protective as they may appear. We strongly recommend that you avoid crowded places, like grocery stores. Try to have your groceries delivered or have a friend or family member drop them off for you. Some stores have also started having special hours for the elderly and the immunocompromised. Check with your local grocery store. Try to spray deliveries with disinfectant and wipe them down before bringing them into your home. Don’t forget to wipe down the cart when you arrive at the store.
Other than self-quarantining and wiping down deliveries, what else can we be doing?
To help keep your child from getting infected, we recommend avoiding crowds, avoiding people who are sick, staying around the house as much as possible, and washing hands vigilantly. We also recommend social distancing as much as possible and avoiding touching things that other people have touched.
When shopping should we be using gloves? Could items at store be contaminated?
Gloves are not recommended at this time when shopping. If possible, avoid taking your child with you to the store to limit potential exposures. Wash or disinfect your hands as soon as possible after being in a public place like the grocery store. Remember to try to avoid touching your face, mouth and eyes while you are shopping.
If we must report to work for an essential service (like healthcare or grocery store operations), what should we do when arriving at home?
If it is possible for working family members to work from home, this is preferred. If this is not possible, you should adhere to the following practices for trying to avoid infection:
- Frequently wash or sanitize your hands.
- Don’t touch your face or rub your eyes.
- Maintain 6 feet between yourself and others.
- When you get home, it might also be smart to put your clothes in the laundry and change into fresh clothes. You could also spray disinfectant like Lysol spray on your shoes and leave them outside your door or in a separate plastic container.
Should my children and family members wear masks if we need to go out in public?
We would encourage you to stay at home in self-isolation as much as possible. If you are going to be outside and able to maintain a 6-foot distance from other people, it’s not necessary to wear a mask. We strongly recommend that you avoid crowded places, like grocery stores. Try to have your groceries delivered by friends and family, and try to spray deliveries with disinfectant and wipe them down before bringing them into your home.
If I am a single parent and must take my children out with me to the grocery store or another public place, how can I protect my family?
Stay home as much as possible. We strongly recommend avoiding locations that are likely to be crowded, such as grocery stores. Try to have your groceries delivered or ask for help with errands from other family members or community members.
Am I eligible for FMLA benefits to help my child during the COVID-19 outbreak?
If you are concerned about protecting your child from COVID-19, you might be wondering about the FMLA (Family Medical Leave Act). The FMLA provides certain employees with protection to take unpaid leave for specific family and medical reasons. It is often useful for families when a family member requires care at home. More recent laws specific to the COVID-19 pandemic—specifically the Families First Coronavirus Protection Act (FFCPA)—have expanded FMLA and paid sick leave options for many people and may apply to your particular situation. However, in general, these laws do not allow employees to take leave from work due to concern about acquiring COVID-19 at the workplace. If you wish to explore your options for taking leave from work under these laws, please consult with your healthcare team.
What type of mask should we wear if we want to wear a mask?
The CDC recommends that all people wear face masks when in public. Before putting on a mask, always clean your hands with hand sanitizer with 60% or more alcohol or wash your hands with soap and water. Try not to touch the mask while using it. Replace the mask with a new one if it becomes damp. Don’t reuse single-use masks. If you sew or create your own mask, always wash it after you wear it.
The CDC currently recommends that NIOSH-approved N95 respirator masks be reserved for healthcare professionals working on the front-lines of the coronavirus pandemic. There is a shortage of N95 respirators in the U.S., and it’s important that hospital workers can get access to these more protective masks, which filter out 95% of very small bacteria and virus particles.
Remember unlike N95s, face masks are loose-fitting and provide only barrier protection against bacteria and virus droplets. Facemasks don’t require fit testing or seal checking. Most facemasks can filter out large respiratory particles, but don’t effectively filter small particles from the air and do not prevent leakage around the edge of the mask. The CDC recommends wearing face masks in public settings to help slow the spread of COVID-19.
What if they run out of masks? Is it ok to go out without mask?
If you run out of masks, you should do your best to limit contacts and use careful social distancing.
How can I help my children cope mentally?
ACTION and PHTS have created a printable handout for children. It shows them how to be “health heroes” during the coronavirus pandemic. Be sure to download the handout and go over it with your children.
Share truthful information with your child based on their age and try to avoid overwhelming them. UNICEF suggests: “Children have a right to truthful information about what’s going on in the world, but adults also have a responsibility to keep them safe from distress. Use age-appropriate language, watch their reactions, and be sensitive to their level of anxiety. If you can’t answer their questions, don’t guess. Use it as an opportunity to explore the answers together. Websites of international organizations like UNICEF and the World Health Organization are great sources of information. Explain that some information online isn’t accurate, and that it’s best to trust the experts.”
Adjust the information you share based on the age of your child:
- Basic information about what germs are and how to stay healthy
- Helpful things to remember about health and lots of simple examples.
Upper elementary/early middle school
- Real facts of the sickness to help them separate truth from other false information they may see on the internet
- Talk about what their schools and other groups are doing to help
- More detailed information on what they can do
Upper middle/High school
- Can discuss it more in depth
- Share more resources for them to review and which sites can be trusted/helpful (help them feel like they have some control).
- Understand the real importance of healthy habits and social distancing
How can I help my children cope emotionally?
Remind your children that it is okay to feel sad, worried, angry, or even happy with all the changes happening. All of us need to practice patience and understanding during these changes. Here are some great ways you can support your child during this time:
- Validate or give “names” to feelings, but do not dwell on things too much.
- Engage in healthy routines. Develop daily schedules together.
- Help your child to find positive, distracting activities when negative feelings take over. Limit news/media exposure. Together as a family, think of something good that happened each day.
- Encourage relaxation strategies, like deep breathing, mindfulness exercises, and yoga.
- Challenge those negative thoughts! Ask: Is this thought true? Is it helpful? Is there a more helpful thought I can focus on?
- Model healthy coping and take time to address your own feelings.
How can I cope?
Before talking with your child, talk with a friend, family member, coworker, or healthcare provider over the phone about your own anxieties so that you avoid increasing fear in your child by sharing all of your worries with them. Try to maintain routines, even if you’re at home. If you have any concerns, reach out to your primary care or mental health provider.
Can I still hang out with a few of my friends at my house?
No. The best way to protect yourself and your family is to limit your contacts. We strongly recommend that you do NOT have play dates, gatherings, or parties. Instead, use FaceTime, MessengerKids, and other technologies to stay in touch.
Can COVID-19 be transmitted from a mother to her baby?
The American Academy of Pediatrics (AAP) recently published initial guidance on the management of infants born to mothers with COVID-19. The AAP notes that there is limited data for pregnant women and newborns with COVID-19, but a few small case reports suggest that COVID-19 can, although infrequently, be transmitted from the pregnant mother to the newborn before or after birth. Children of all ages are susceptible to COVID-19, and infants under 1 year old are at risk for severe disease, though this is still a relatively rare outcome.
In Case of an Outbreak
If someone in our home contracts COVID-19, is there any hope of preventing the spread? How do we do that?
We understand that having someone in the home diagnosed with COVID-19 will be stressful. Start by reading the CDC’s recommendations for cleaning your home if someone gets COVID-19. There are several ways to help prevent spread to other family members within the household.
The infected household member must:
- Stay home except to get medical care. Do not use public transportation. Do not go to work or school.
- Distance yourself, or self-quarantine, as much as possible. Sleep in a separate room and try to stay in that room and eat your meals there.
- Designate a bathroom for the infected person, if possible. Do not let any other family members use that bathroom.
- Keep your child in a separate room and bathroom from the sick family member.
- Avoid sharing personal household items (cups, forks, towels, blankets etc.) with other people in the home.
- Clean and disinfect objects and surfaces that you touch during the day. Use household cleaner or wipes. Then use disinfectant spray. Be sure to follow instructions on the disinfectant and make sure you allow the spray to sit on counters and objects as long as the package recommends to ensure you are fully disinfecting.
- If you do have a mask available the infected family member should wear a mask if they need to interact with others in the household. Masks can help contain droplets from the infected individual.
Everyone in the home can:
- Frequently wash your hands throughout the day and before you eat.
- Wash for 20 seconds with soap and water.
- If hands are not visibly dirty, you can use hand sanitizer.
- Avoid touching your face.
- Clean and disinfect objects and surfaces that are frequently touched during the day using household cleaners or wipes.
How long should transplant patients remain in isolation?
For general transplant patients who do not show symptoms of COVID-19, we recommend staying at home and avoiding contact with any sick persons.
For anyone who has tested positive for COVID-19, they must remain in isolation until they have negative tests and are clinically better. If no repeat testing is available, they should remain in isolation until they are clinically better. “Clinically better” means no fever without medications and resolution of all symptoms for at least 3 days.
If my husband got exposed on Monday to one of his workmates who showed signs of COVID-19 but did not yet test positive, should he proactively isolate himself in another room at home?
The plan should depend on the exposure. How long was he around the person? How close was he to the infected person? There are many ways to approach this situation. The most conservative approach, if possible, would be to have the person who was exposed self-isolate while awaiting the test results. This is not always possible. There may be no tests available in your city or you may not be able to isolate yourself from your family. Each family is going to handle it differently. If you need guidance your local healthcare team may be able to help assess the risk.
Will there be potential supply issues for patients who are on peritoneal dialysis?
At this time, we haven’t heard about supply issues for patients on peritoneal dialysis.
Symptoms and Testing
When should we call PCP and when should we call transplant team?
If you or your child experiences a cough, sore throat, runny nose, shortness of breath, or fever above 99.6° F, call your primary care provider (PCP) and your transplant team before going into the Emergency Department (ED).
Primary care providers can play different roles in a transplant patient’s healthcare after surgery. Their role varies by institution, the location of your child from the transplant center, and even the PCP’s familiarity with transplant-related issues. If you have any doubt about who to call, we always encourage you to talk to your transplant team for guidance. We know this is a time of great uncertainty. We are always here for you.
Should we still call the transplant team just like we would before coronavirus?
You should still call your transplant team for symptoms like fever, elevated blood pressure, and vomiting— just like you would before coronavirus. These guidelines will differ between institutions. We suggest you touch base with your transplant team about their specific guidelines and recommendations.
For those who live 1 to 3 hours away from our transplant center, should we triage at a local hospital or PCP or head straight for the transplant center if our child experiences symptoms of COVID-19 or if they have troubling signs for transplant?
Always call before heading into an Emergency Department (ED), primary care office, or urgent care center. Recipients with a positive COVID-19 test should contact their transplant center to discuss where to go for evaluation, management or even whether to stay home. Based on early adult data, not all transplant recipients will require hospitalization. It’s important to know that if your child is very sick, getting to the nearest medical center may be more important than traveling to your transplant center.
Are children’s hospitals being allotted enough testing kits for COVID-19?
This is variable at this time. It depends on where you live in the country. Everyone is working to have tests available for the most at-risk patients.
What is the probability of a false positive or false negative result with a coronavirus test?
10% of the results from current COVID-19 tests are false negatives, meaning that the test comes back negative but the patient does, in fact, have COVID-19. False positives are very rare. False positive and false negative results can occur due to many factors. To decrease the risk of a false negative test, it is important that the sample is collected and stored optimally. A nasopharyngeal swab is generally uncomfortable and is more than just a simple swab of the front of the nose. Ask questions to ensure that an appropriate sample is collected. Most people without disease will have a negative test. Testing is continuously being improved to reduce both the number of false positives and false negatives. We don’t know if/how this is altered in transplant patients, but it should be similar based on our use of other routine tests to evaluate for viruses in transplant patients.
Are symptoms different in immune-suppressed people? For example, some immune- suppressed people don’t tend to generate a fever?
We don’t have enough information yet to know the answer to this question.
Is a simple antibody test for COVID-19 real or too good to be true?
A blood test to look for immune response (antibodies) to the coronavirus will likely be available eventually. Antibody tests may actually be quite important as this epidemic drags on, but is of limited use right now. The more useful test for now is the ability to detect active infection with a nasal swab. Additionally, we don’t yet know if having positive serology from a potential future antibody test will indicate protection from reinfection.
When the media says that people who are immunocompromised are at a higher risk, does that definitely include liver transplant children by virtue of being on immunosuppressants or no since doctors can technically control amount of immunosuppression?
The blanket term “immunocompromised” does include those with liver transplants. However, this statement is based on observations in adult patients. To date, there simply is not enough data to know whether immunocompromised children are at similarly increased risk. Because we do not know, out of caution, we recommend strict adherence to self-isolation.
Should we change our child’s immunosuppressant dosage?
While it is still early, COVID-19 does not appear to behave intuitively. The interaction between COVID-19 and immunosuppressants is still unknown at this point. Do not change your immunosuppressant medication without your transplant doctor’s instruction. Decreasing your child’s medication could lead to rejection, which itself could lead to hospitalization and potential exposure to COVID-19. You can be assured that if your child with a transplant develops COVID-19, your transplant doctors will consider whether to decrease the dose of immunosuppressive medications on a case-by-case basis.
Does having multiple comorbidities (i.e. post-transplant, hypertension, ESRD) put them at exponentially higher risk?
Older adult patients with existing comorbidities are at the highest risk of becoming very sick from COVID-19. There is not enough data to know whether children with comorbidities are at increased risk. Out of caution, we recommend that you strictly adhere to self-isolation until more is understood.
Are there cases of other types of immune-suppressed kids being affected yet?
There have been very limited numbers of immunosuppressed children being infected with COVID-19.
My daughter had a liver transplant and has asthma. We are concerned how she would handle this virus.
It is very natural to be anxious about this global pandemic. However, many people, including some transplant recipients, have handled this infection similar to people without transplants. At this point, it is not clear if children with transplants will have more trouble than other children without immunosuppression or without asthma.
To help keep your child from getting infected, we recommend avoiding crowds, avoiding people who are sick, staying healthy at home as much as possible, and washing your hands frequently. We strongly recommend social distancing and avoiding touching things that other people have touched.
In a pre-published study on a dialysis center in China, most infected patients had a mild case and they surmised that the immune suppression in Chronic Kidney Disease (CKD) might actually have helped these patients. Do we have any information to support or refute this?
The study on hemodialysis patients with COVID-19 notes some indications that immunosuppressed patients are not at higher risk than the general population. We have not seen additional reports about dialysis patients, but this report from China would fit with the other emerging data on immunosuppressed patients.
Treatment and Vaccination
If our transplant kids test positive, is there a different treatment for them compared to other kids?
Currently, we would not treat transplant children with mild COVID-19 differently from how we would treat other children. Most children with COVID-19 have mild symptoms that can be treated at home. If hospitalization is required, the current approach is to provide supportive treatments such as fluids to reduce dehydration, medication to reduce fever, and supplemental oxygen in more severe cases. Many centers are considering treatment based on ongoing studies and/or off-label or compassionate use.
Are there any proposed COVID-19 treatment medications (such as Chloroquine or Remdesivir) that are viable for transplant patients? Would any of the above interfere with Prograf or Cellcept? And are there other risks in taking those meds for transplant patients?
Currently, we would not treat transplant children with COVID-19 differently from how we would treat other children.
Some medications that may be used to treat COVID-19 can interact with immunosuppressive medications. Many of these can be anticipated, and modifications can be made if needed. You can ask your healthcare team how they are working together to ensure that drug-drug interactions and other potential side effects are minimized.
I have heard that COVID-19 could cause complications if you are on ACE Inhibitors?
There has been much speculation on whether ACE inhibitors (ACE-I), angiotensin receptor blockers (ARBs) or ibuprofen might have an effect on COVID-19 infections. At this time, there is no basis for a recommendation on this issue because some experts have speculated a protective effect from ACE inhibitors, while others think the opposite. We therefore recommend continuing your current treatment for now and discussing this issue with your provider at regular intervals as additional information becomes available. If you become ill with COVID-19 symptoms, your provider will assess this decision on a case-by-case basis since ACE-I/ARBs are often held in the setting of dehydration/acute kidney injury.
I have heard that COVID-19 could cause complications if you are getting Ibuprofen?
Several physicians in the news have recommended avoiding ibuprofen (and other NSAIDS like naproxen) based upon the observation that some patients who were taking ibuprofen experienced severe forms of COVID-19 pneumonia. However, this is not supported by any reliable information and the observations may or may not indicate a real risk.
At the same time, since acetaminophen (Tylenol) is felt to be safe for COVID-19 infections, it is recommended that people use acetaminophen to relieve symptoms if they are able to do so, and only use ibuprofen if they do not obtain relief from acetaminophen. Children with transplants or VADs do not take NSAIDS anyway due to the effects on the kidneys and the coagulation system.
Is it possible that a vaccine will be available sooner than Feb 2021? Will it be a “dead” vaccine that our transplant children can receive?
There is currently no vaccine for COVID-19. It is too early to tell which vaccine-candidates will make it through the clinical trials process and be effective. However, vaccines currently in development are “dead” vaccines. Public health officials have indicated that a vaccine will not be publicly available until 12-18 months from now.
Are treatment recommendations for transplant patients any different from the general population?
For pediatric kidney transplant patients with any kind of illness, the risk of dehydration is higher, so drinking lots of fluids would be more important than for the general population. A kidney transplant patient might need to have labs checked sooner than an otherwise healthy child to make sure levels are OK. Otherwise there would probably not be significant differences in the treatment and care for COVID-19 in liver transplant patients.
For children with liver transplants who get COVID-19, it will be important to pay careful attention to fluid balance by checking their weights regularly and monitoring their urine output. While adequate hydration is helpful, COVID-19 may well come with a significant amount of acute kidney injury (AKI) and associated fluid overload. Laboratory tests for children with liver transplants and COVID-19 may therefore look acutely worse than their baseline, and overhydration that results from fluid retention in AKI could make respiratory compromise from COVID-19 worse. For children with kidney transplants, it’s important to work in close collaboration with your transplant team to manage suspected or actual COVID-19.
Changes in Healthcare
Why did my clinic appointment get canceled?
Many clinic visits and elective procedures are now cancelled or postponed to protect you, your family, our communities, and the healthcare system. We all need to work together to slow down the spread of the coronavirus. Your appointments may change or happen in new ways, like telehealth.
What is telehealth?
Just like FaceTime calls or phone calls to your family and friends, now you get to virtually meet with your doctor online too. Talking with your doctor and healthcare team on the phone or computer is called “telehealth.” It’s a great way for us to stay connected and have our usual appointments and conversations without risking the spread of the coronavirus. There are times where telehealth will not be satisfactory and you will need a physical exam. In addition, your child will need to get scheduled lab work to monitor how they are doing.
How possible is at-home testing (like the ones offered at Seattle Children’s) for our centers to check basic lab work?
While the at-home blood spot testing offered by Seattle Children’s can be useful in certain situations, it can only check creatinine and tacrolimus levels at this time. This means that while this type of testing is possible, it’s also quite limited and may only be appropriate when other labs are not needed. These tests may or may not be available at your local lab. If these tests are available at your lab, your medical team will determine which situations may be appropriate to use this test instead of the regular laboratory testing that you are used to.
How might things be different in the hospital at this time?
If you need to visit the hospital during the coronavirus outbreak, there are some differences you should expect.
- When you enter the hospital, you might be screened for illness.
- The Emergency Department (ED) may be more crowded. Always call your healthcare team before going into your ED.
- You may notice different masks and gowns worn by healthcare team members.
- You might be in a different area or unit of the hospital than you are normally.
- Food offerings may be different and cafeterias may be closed.
- There may be different visitor policies.
If transplant patients require a hospital visit, do we do anything different?
This depends on the reason for your visit:
- If your child is ill, call your Transplant Team first before going to the Emergency Department.
- If your child has a routine visit, check with coordinators first to determine if it can be delayed or if telehealth or getting labs locally will be enough.
Are transplants deemed essential surgeries and will they still be done during this pandemic despite the change in the hospital systems?
As of March 24, 2020, CMS has deemed transplant surgeries as essential care. According to this guidance, transplant surgeries should be considered high priority and should not be postponed during the COVID-19 pandemic. The situation is being continually assessed both at the program level and also at the national level. Each case will be reviewed closely with consideration of urgency and the risk to the patient.
With elective surgeries being held off, what does this mean for catheterizations and biopsies?
The definition of elective procedures will vary by institution. In most centers, the decision to cancel patients is reviewed with key team members to determine if there is an immediate urgency for the procedure or if it can be delayed safely. Delaying procedures is helpful for supporting social distancing and reducing the risk of coming into contact with the virus. It is important to realize the catheterization/biopsy techniques vary greatly between institutions and that many routine procedures can be delayed safely without impacting the health of your child.
Are we exposing our child when we go to get labs?How can we protect them? Is delaying ok?
Laboratory blood draws do represent a potential risk of exposure to coronavirus. You can contact your transplant team to determine whether it’s possible to decrease or delay your lab visits at this time. However, this decision has to be balanced against maintaining good medical management of your child’s transplant care or heart care.
If you do need to go to the lab, we recommend calling your lab ahead of time to find out which times are the quietest. Try to go during the quietest times of day. When you go to the lab, bring only the people who must be there and not any extra family members.
Here’s what you can do to protect yourself and your child while at the lab:
- Wear face masks if available.
- Stay at least 6 feet away from other individuals waiting.
- Wipe the armrests of chairs you or your child are using.
- Avoid touching doorknobs or other objects if possible.
- Wash your hands frequently.
Are clinical trials like TEAMMATE going to be affected by this outbreak?
In many centers, patient-related research has been placed on hold both for the safety of the patients and the research staff. New patients will not be enrolled. The impact of this will be study-dependent and will vary by institution.
Is the hospital isolating specific rooms for transplant patients or other immune suppressed patients?
This will vary hospital to hospital depending on the amount of virus circulating in the community and the number of patients in the hospital.
Should my healthcare providers be wearing gowns, masks and gloves?
The CDC is only recommending Personal Protective Equipment (PPE) be worn when seeing a patient that has symptoms consistent with COVID-19. This is hospital-dependent and in some hospitals, masks are being worn for all patient encounters. At this point, it is variable and may change as more equipment is made available.
How is the shortage of masks affecting care in the pandemic?
The shortage of Personal Protective Equipment (PPE) is putting healthcare workers and patients at risk. This is one of the reasons that all elective visits and surgeries have been canceled. We are hopeful that more PPE is on the way to your hospitals soon. Many individuals are donating PPE and private companies are ramping up production.
Are they currently trying to get more ventilators/respirators in areas with shortages like NYC?
This is an area of significant concern nationwide, and every hospital is working to ensure they will have enough medical supplies to support their local areas through this pandemic. Efforts are being made to find creative ways to increase equipment and medical supplies, including companies shifting production to hand sanitizers, private clinics donating respirators, and other companies donating extra supplies. However, all these efforts will only be successful if our communities continue social distancing to slow down the ongoing spread of the virus.
How is the pediatric heart failure and transplant community coming together during this pandemic?
Our medical societies are actively working together to come up with answers to all of your questions. We are using weekly calls and message boards to share our learnings. We are also collectively accumulating data to learn in real time, since the coronavirus pandemic changes rapidly day-to-day. We have contacts in the states and in the countries that have been the epicenters of the pandemic. We are learning from many of the providers and families that have been through the worst of this weeks to months before us. The societies working together to bring these Questions & Answers together are ACTION, PHTS, and the Starzl Network.
What data is available on COVID-19 in children?
- NPC-QIC Research Explained Special Report
- CDC MMWR Report – released April 6, 2020
Have a question?
Is there something you are concerned about that hasn’t been answered? Send your questions to: [email protected]
Special thanks to the following for providing answers:
Cincinnati Children’s Hospital Medical Center
- Lara A. Danziger-Isakov, MD, MPH
- David K. Hooper, MD, MS
- Angela Lorts, MD, MBA
- Mark Murphy, DO
- Grant C. Paulson, MD
- Jens Goebel, MD
Children’s Hospital of Philadelphia
- Sandra Amaral, MD
Morgan Stanley Children’s Hospital
- Marc E. Richmond, MD
UPMC Children’s Hospital of Pittsburgh
- Zachary T. Aldewereld, MD
- Michael Green, MD, MPH
- Marian Michaels, MD, MPH
- George V. Mazariegos, MD
Lucile Packard Children’s Hospital, Stanford University
- Abanti Chaudhuri, MD
- Paul Grimm, MD
- David Rosenthal, MD
Stollery Children’s Hospital, University of Alberta
- Jennifer L. Conway, MD
- Jason Misurac, MD
This information should not replace medical advice from your doctors or medical team. We encourage our readers to follow their transplant team’s medical advice and reach out to their doctors and medical team for further recommendations.
Lots of electronic devices are now powered by open source software such as Linux, but open source hardware is not as widespread. It is gaining traction thanks to the likes of Arduino, Beagleboard.org, Olimex, and many projects on crowdfunding websites, and now we are even starting to see some open source silicon. Existing open source processors include the LEON3 (SPARC V8) MCU and OpenRISC, and just very recently, LowRISC, based on the 64-bit RISC-V instruction set architecture, was announced with the backing of some of the Raspberry Pi co-founders, Google ATAP, and others, and is currently being developed at the University of Cambridge, UK. Parallax Propeller 1 P8X32A is another MCU, which was open sourced last week.
Propeller 1 P8X32A was, however, originally released in April 2006. It can be sourced as a 40-pin DIP chip for prototyping, or in 44-pin QFP and QFN packages for production, and comes with the following key features:
- Power Requirements: 3.3 VDC
- Operating Temperature: -55 to +125 degrees C
- Processor cores: Eight 32-bit cores
- I/O Pins: 32 GPIO CMOS
- External Clock Speed: DC to 80 MHz
- Internal RC Oscillator: ~12 MHz or ~20 kHz
- Execution Speed: 0 to 160 MIPS (20 MIPS/cog)
- Global ROM/RAM: 32768/32768 bytes
- Cog RAM: 512 x 32 bits/core
The MCU can be programmed with several languages including Spin (native, object-based), assembly (native low-level), and C/C++ (via the open-source Propeller GCC toolchain). Each core can access all 32 I/O pins and other shared system resources, but comes with its own memory and set of configurable hardware for creating, releasing, and re-creating software-defined peripherals as needed.
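To give a feel for the multi-core programming model, below is a minimal sketch of what a Propeller C program might look like when built with the PropGCC toolchain. It assumes the standard propeller.h helpers (DIRA, OUTA, CNT, CLKFREQ, waitcnt, cogstart); the pin numbers and stack size are arbitrary values chosen for illustration, not taken from Parallax documentation.

#include <propeller.h>

static unsigned int blink_stack[64];        /* stack for the code launched on a second cog */

/* Runs on its own cog: toggles P26 roughly once per second. */
static void blink_cog(void *arg)
{
    unsigned int mask = 1 << 26;            /* illustrative pin choice */
    (void)arg;                              /* unused */
    DIRA |= mask;                           /* each cog sets its own pin direction */
    for (;;) {
        OUTA ^= mask;                       /* toggle the LED */
        waitcnt(CNT + CLKFREQ);             /* wait about one second */
    }
}

int main(void)
{
    unsigned int mask = 1 << 27;            /* a second, illustrative pin */

    /* Launch blink_cog on a free cog, then keep blinking another pin from this cog. */
    cogstart(blink_cog, NULL, blink_stack, sizeof(blink_stack));

    DIRA |= mask;
    for (;;) {
        OUTA ^= mask;
        waitcnt(CNT + CLKFREQ / 2);         /* about half a second on the main cog */
    }
    return 0;
}

One design point worth noting is that each cog has its own DIRA and OUTA registers, which is why blink_cog configures its pin direction itself instead of relying on main().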
All the design files (Verilog files and top-level HDL) have been released under the GPLv3 license, and you can load these to simulate the micro-controller on a computer or an FPGA board with access to I/Os. The license allows for derivative works, so anyone can use the Propeller P8X32A micro-controller as a base for research, development, or any form of experimentation.
There are two officially supported “education and development” FPGA boards by Terasic: the DE0-Nano and the more versatile and powerful Altera DE2-115, both powered by an Altera Cyclone IV FPGA. Besides the HDL files, instructions to emulate Propeller 1 on the FPGA boards are provided for Linux and Windows, using the Quartus II software. The development tools PropellerIDE (Spin/ASM) and SimpleIDE (C) are also open source.
You can find more information, and download all files on Parallax Propeller 1 Open Source page.
Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager, and starting to write daily news, and reviews full time later in 2011.
I have been using the BBC Micro:bit modules (~$20) with our kids to teach them basic hardware and software. The platform offers a block programming interface (very much like Scratch) and Python programming. The interesting thing is that the interface will toggle seamlessly between the two programming languages.
To add functionality to your projects Micro:bit supports extensions. This feature allows Arduino and Raspberry Pi sensors and devices to be connected to the basic module.
In this blog I wanted to:
- Show an extensions example with devices that are not directly available in the basic list
- Comment on some limitations
- Document how to add Micro:bit parts in Fritzing wire drawing tool
An Extension Example
Use the Extensions menu item to add a new set of functionality to your Micro:bit project.
For this example I wanted to use some devices/sensors that I was using on my Arduino projects. These devices included:
- DHT11 Temperature/Humidity Sensor – extension: https://github.com/alankrantas/pxt-DHT11_DHT22
- I2C 0.91 OLED Display – extension: https://github.com/adafruit/Adafruit_SSD1306
- TMP1637 Four Digit Display – extension: https://github.com/makecode-extensions/tm1637
It is important to note that the extension may not be readily available from the Micro:bit web page. For my project, I did an Internet search to find the required GitHub links. Once you have the URL, it can be pasted into the Extensions page:
Below is a picture of the Micro:bit with the three added devices
The Micro:bit logic used an on_start block to set up the pins for the TM1637 4-digit display and initialize the OLED display.
The forever block:
- queried the DHT11 sensor (on Pin 0)
- showed the humidity on the Micro:bit display
- showed the temperature on the TM1637 display
- showed both the temperature and humidity on the 0.91″ OLED
- cycled every 5 seconds
The block code can be viewed (or edited) in Python:

"""
Devices:
  DHT11 Temperature/Humidity Sensor
  TM1637 4-Digit Display
  I2C 0.91" OLED Display

Show Temperature on TM1637
Show Humidity on Microbit screen
Show both Temperature and Humidity on the OLED
"""
tm = TM1637.create(DigitalPin.P13, DigitalPin.P14, 7, 4)
MuseOLED.init()

def on_forever():
    # Read the DHT11 on pin P0, then update all three displays
    dht11_dht22.query_data(DHTtype.DHT11, DigitalPin.P0, True, False, True)
    basic.show_string("H: " + str(dht11_dht22.read_data(dataType.HUMIDITY)) + " %")
    tm.show_number(dht11_dht22.read_data(dataType.TEMPERATURE))
    MuseOLED.clear()
    MuseOLED.write_string("Temperature: " + str(dht11_dht22.read_data(dataType.TEMPERATURE)) + " C")
    MuseOLED.new_line()
    MuseOLED.write_string("Humidity: " + str(dht11_dht22.read_data(dataType.HUMIDITY)) + " %")
    basic.pause(5000)

basic.forever(on_forever)
Some of the limitations that I found were:
- Not all Arduino sensors and devices were supported
- Not all Arduino functionality is available with Micro:Bit. For example fonts on OLED devices.
- Finding the correct extension can be tricky. For example, searching "0.91 OLED" doesn't return any hits.
- Some devices were supported in software, however they required 5V. A good example of this is the 2×16 LCD display
Documenting Wiring in Fritzing
Fritzing is an excellent free tool for wiring drawings (Note: for some platforms a donation might be required).
To add some Micro:bit parts to Fritzing see:
Once a parts file is downloaded it is imported into a “My Parts” grouping.
By adding extensions you can greatly extend the usability of the Micro:bit.
I found that for many simple projects block programming was quicker to create than Python, but it is nice that the Python code gets autogenerated.
In my research and teaching practices with video-game based learning I have identified a number of cinematic games that are currently on the market that I believe are ideal for foreign language learning. They all feature appealing, complex narratives, possess a task-based, problem-solving orientation, and present full voice-acted conversations between characters. These are all features that aid in stimulating learning and organizing group interactions in the language classroom setting. In my experience, they can also be successfully used for autonomous learning by second/foreign language students, starting from the ACTFL Novice-Mid level (which roughly corresponds to two semesters of foreign language in college, or two years in high-school).
This blog post focuses on games which I have personally used in the foreign language classroom. The games are multi-lingual, meaning that they have been localized in a fairly large number of languages (including Italian, which is my professional focus). These games go above and beyond the “usual” English, Spanish and French, which is the norm for games sold in North America and, often, Europe. These are all games for mature teens and above.
Since the early 2000s, engaging, fully voice-acted narratives have become the hallmark of interactive digital stories in commercial video games. All the main games in the Assassin’s Creed (AC) series by Ubisoft lend themselves very well to game-based activities in second/foreign language & culture courses. The first game in the series, AC: Altair’s Chronicles (2008), took full advantage of technical advancements afforded by the then-new generation of consoles (PlayStation (PS) 3, Xbox 360 and more powerful Windows PCs), and presented players with a historical fiction that unfolded in an action-adventure, open-world video game. The success was such that the game turned into a series, which at present counts nine episodes plus a number of supporting “side stories,” each set in different eras and regions of the world. Other recent incarnations of game series that began in the late 1990s, such as Tomb Raider, have also recently evolved into fully voice-acted, complex narratives.
Among the current or recent games, those that represent the best fully interactive, multi-media digital narrative “anime cinematic games” for foreign language & culture courses (senior year of high-school and college) are:
– Heavy Rain; Beyond: Two Souls (known in Italy as Beyond: Due anime) (respectively, 2010 and 2013 for the PS3, and 2016 for the PS4 version); and the recent Detroit: Become Human (PS4), developed by Quantic Dream and Sony exclusives.
– Assassin’s Creed – The Series, by Ubisoft. I personally worked specifically with the three chapters that take place in Renaissance Italy, which have allowed me to also deliver accurate cultural elements to my Italian language & culture college courses. [Assassin’s Creed II, for PS3, PS4, Xbox 360, Xbox One, Microsoft Windows and Mac OS (2009-2016); and its direct sequels, Assassin’s Creed Brotherhood, for PS 3, PS 4, Xbox 360, Xbox One, Microsoft Windows, and Mac OS (2010-2016); and Assassin’s Creed Revelations (for PS 3, PS 4, Xbox 360, Xbox One, Microsoft Windows, and Mac OS (2011-2016)]
– Tomb Raider, by Square Enix for PS 3, Xbox 360 and Microsoft Windows (2013) and its direct sequel, Rise of the Tomb Raider, initially an Xbox One exclusive (2015), and now available also for PS4 and PC. A new chapter, Shadow of the Tomb Raider, has been announced for fall 2018.
These are all games that I have used in my class instruction. They, in my view, present the best scenario for F/L2 acquisition. The games I select, besides having engaging narratives (with AC II, AC Revelations and AC Brotherhood even offering outstanding overviews of Italian Renaissance culture), also conform to my own personal rules on teaching through video games, that is, no war games nor any horror games. While there is some graphic violence in all games, they are still suitable for the average college student population, with ratings ranging from Teen through Mature (18+).
Some of the games I have mentioned date back as far as 2008. Keeping up to date with the latest video game offerings is not a requirement. In the gaming world, “retro” is cool. Also, we should bear in mind that given the Teen/Mature ratings of those games (or other similar games); many of our present-day students would not have been of suitable age to have experienced those games when they were first available. An additional advantage in using older games is that many of them are available at a much cheaper price than current releases, and often via convenient digital delivery.
The primary reason I chose and recommend the above mentioned games, however, is because they all have a higher emphasis on storytelling/narrative, animated scenes and voice acting, and more “casual gamer” oriented gameplay that does not require much in terms of previous experience with gaming. Any student can potentially take the controller and proceed through a section of the game. This is even more likely for students with some gaming experience, which at this point in time is the most likely scenario with our students.
NOTE: This blog post is a revised/edited/paraphrased extract from an upcoming publication.
Image: Assassin’s Creed II by Ubisoft.
Music historians may disagree about whether the electric bass was first invented in the early 1920s, the early 1950s or somewhere in-between. But there’s no question that the technological catalyst required to make the concept a reality — a pickup installed to enable amplification of the bass — was a game-changer.
As my old friend and former bandmate/colleague Mac Randall has written about here on the Yamaha blog, guitars first started using pickups back in the 1930s, helping them project from the bandstand. But it wasn’t until the early 1950s that essentially the same invention helped the poor double bassist to be heard.
Basically, a pickup is a transducer — a device that converts a signal of one type of energy into one of another type of energy; a good example is a microphone, which converts physical sound waves traveling through air into an electrical signal. The earliest electric bass pickups were magnets wrapped a few thousand times in very fine copper wire. The magnetic field around these windings would react to the vibrations of a moving bass string, creating an electrical voltage that could be amplified.
Pickup design and construction have steadily improved since those early days (although some vintage pickups are highly sought after today and often serve as the inspiration for modern designs), giving us lots of great options to choose from. Today, there are a number of different types of pickups — magnetic, piezo and optical — and several intriguing pickup configurations, each with its own subtle but unique set of qualities and characteristics:
– Magnetic pickups are where it all began, with magnets wrapped in copper wire, and are still the most common type of electric bass (and electric guitar) pickup. They include single coil, split coil, double coil (aka humbuckers) and other minor variations on each theme.
– Piezo pickups react to pressure, rather than a change in a magnetic field.
– Optical pickups rely on light that’s shined onto a vibrating string — and the conversion of the resulting shadow caused by those vibrations — to create an amplifiable signal.
Let’s take a closer look at each type, and their common configurations.
A single coil pickup, as its name suggests, is a solitary straight copper wire wrapped around magnetic pole pieces aligned beneath the strings of your bass. Bright and aggressive-sounding, you’ll typically find single coils installed close to the bridge, but you’ll rarely encounter a bass with only one single coil pickup. More usually, you’ll find them coupled with another single coil pickup in the neck or middle position (as in the Yamaha BBNE2 Nathan East Signature Model), or paired with a split coil (see below) in the neck or middle position — the configuration used in all Yamaha BB and RBX Series basses, as well as Yamaha TRBX 200/170 Series basses.
A split coil is pretty much what it sounds like: a single coil split into two parts, each with its own smaller coil. Instead of running straight across the full width of your strings, those two individual coils are staggered, resulting in a signal that’s attractively out-of-phase, which in this case is a good thing. Compared sonically to the single coil, the split coil sounds juicier, with bouncier lows and punchier mids. Some of that comes from the traditional placement of split coils — roughly midway between the end of the fretboard and the bridge (as in the Yamaha TRBX204 shown below), where the string vibrations are more active and harmonic. There are also basses that have two split coil pickups, with the second one installed closer to the bridge.
One benefit of split coil pickups is that they naturally cancel out electrical hum from lightbulbs and amplifiers that single coils can transmit. As its name suggests, so does the humbucker, also known as a “double coil” pickup. It consists of two single coil pickups connected out of phase with one another, and with their magnets aligned as polar opposites. This effectively cancels out interference, with a side benefit of a hotter signal, making it a popular choice among bassists.
Their sound can vary quite a bit, too. For example, if you like a crisp, modern sound, you’ll love the humbuckers in Yamaha TRBX 300/500/600 Series basses, as well as those in the Yamaha TRBJP2 John Patitucci Signature Bass. But they can also provide a beefier vibe, as in the vintage “mudbucker” inspiration behind the DiMarzio Woofer Pickup found in the middle position of the new Billy Sheehan-designed Yamaha Attitude 30th bass.
Unlike magnetic pickups that rely on magnetism to convert string vibrations into sound, a piezo-electric pickup uses a thin, compressed layer of crystal installed inside the bridge (under the string saddles) to convert the pressure changes caused by string vibrations into an electrical signal. Piezo pickups aren’t exclusive to acoustic-electric guitars, but that’s where you usually see them. (They’re rarely found on basses.) On their own — that is, without EQ and/or unblended with another pickup — piezos can sound a bit thin, but they faithfully reproduce the unique airiness and zing that define acoustic string instruments. They also have the added benefit of under-bridge installation for a stealth look.
Optical pickups are an even more esoteric innovation in that they use light to convert string vibrations into an electrical signal. Similar to piezos in their stealth approach, the under-saddle pickup shines a light onto the string and converts the resulting shadow (which changes when the string vibrates) into an electrical signal. Optical pickups aren’t found on many basses just yet, but maybe one day they will be. They’re touted for delivering a noiseless signal with extended frequency range and flat response. And interestingly, they boast longer sustain than magnetic pickups because there’s no magnetic field exerting pressure on the string. This is definitely technology worth keeping an eye on.
What type of pickup (or pickups) your bass uses makes a big difference in the kinds of sounds you can create. Some players prefer one type — two single coils, for instance — while others prefer to combine types together, such as a middle position split coil paired with a single coil in the bridge position.
Whether you like your pickups underwound or overwound, blended or soloed, made with a specific type of Alnico material or with oversized pole pieces from rare earth materials is up to you. But whether you choose to obsess over it or ignore it, just be thankful there are so many options available. The electric bass pickup of today is a far cry from many decades ago. Bassists back then didn’t argue over how many pickup windings made for the best signal. To them, being heard at all was an improvement!
In 1990, Sir Tim Berners-Lee invented what will forever change the face of our world: the World Wide Web. Twenty-two years later, here I am, using this extraordinary tool to spread knowledge instantaneously throughout the world. Yet, I still believe that the full potential of the Internet is far from being reached. Twenty-two years only after the initial launch, our source of information is no longer called the television. It’s called Google. Our contact agenda is no more a scrappy book with deletions and overwriting. It’s known as Facebook. Our encyclopedia isn’t a dusty old large book forgotten in our libraries anymore. It is named Wikipedia. Hopefully, in a few years, your source of scientific information will no longer be some monthly magazine. It will be Science4All. After all, the initial purpose of the Web was to facilitate knowledge sharing for scientists.
In this article, I will explain to you how the languages of this amazing tool have been created. And how they work. I hope you’ll take advantage of that to create our next unavoidable tool in the future! But I’m not going to give you a tutorial of what codes to write to make your website cool. One reason for that is that I’m still learning. What I’ll do is give you a global understanding of how languages of the Web work so that you can understand this amazing new world, talk to specialists with their concepts and exploit the infinite possibilities of the Web for your own purposes.
Just like I’m using the World Wide Web to write and publish this article, Sir Tim Berners-Lee didn’t start from scratch to design the World Wide Web. Indeed, his work used the structure of the Internet.
I used to confuse those two concepts too! To understand the Internet, let’s go back in time. As computers started to arise, scientists started to design interactions between them in the 1960s. This was a first conceptual step. Quickly enough, computers started to interact, but this was usually done locally, that is, among computers of a same room, or of a same company, hence forming isolated networks, aka intranets. But in the 1970s, larger intranets started to be built by merging intranets, slowly transforming them into one single net linking most intranets. This single net is what we call the Internet, which emerged in the 1980s.
Connecting people is not enough to make them interact. Try to put farmers from different countries in the same room. They’ll have trouble interacting, and I’ll let you guess why…
HTML invents Hyperlinks
Exactly! Just like humans, computers need to agree on a language they’ll both speak in order to understand each other. That’s what Tim Berners-Lee defined in 1991. A language. This language is called HyperText Markup Language, better known as HTML. It’s a little bit hard to read for humans (especially if you’ve never learnt it nor practiced it, just like any other language), but you can read it on the browser you are currently using to read this article, by right clicking on the page and choosing “view source code” (or something similar, depending on your browser). Fortunately, the computer (well, the browser) translates the message of a web page from the HTML language to the visual page you are seeing.
But not just any language. In particular, Tim Berners-Lee, by inventing the HTML, designed a new concept that would revolutionize the structure of information: Hyperlinks. Before the HTML, we could only navigate with folders, just like your computer’s whether you’re on Windows, Mac OS or Linux. But now, thanks to the HTML, the navigation is made much simpler, as hyperlinks naturally appear in the content.
This very concept is actually the reason why the Web is called “web”. As a matter of fact, think of every web page as a dot, and link any page A to a page B if page A contains a link towards B. Then, you’ll have drawn a graph, aka network. This graph is the Web. Note that this graph is different from the Internet that links computers. Understanding this network and helping users to find their ways in the Web then becomes crucial. That’s what Google does. That’s what made and will make this company one of the most influential companies in the world.
Let’s add the fact that web pages can be found with an address called URL (Uniform Ressource Locator), like for instance http://www.Science4All.org. Well, that is not entirely true. For instance, if you go to URL http://www.facebook.com, you won’t get the same page as me, as I’ll be logged in. But that’s due to advanced languages of the web, which have been introduced later, and which we’ll discuss later as well. At the beginning of the Web, each page corresponded to one URL. Besides, even now, hyperlinks can only lead you to an URL. In particular, fortunately, they cannot lead you to my facebook page logged in as myself!
But the HTML alone wouldn’t have brought us to the Web we know today! In fact, developing web sites with the first version of the HTML today would be deeply inefficient. Indeed, all the information of a web page had to be inserted in a single file.
Mainly, the problem with that is that each web page’s code needs to be written. Yet, if you compare the HTML codes of two Science4All articles, you’ll see that they are very similar. Thus, somehow, we’ll have done some copy-pasting to generate similar pages. And computer scientists (should) hate copy-pasting their codes.
Sure. Except, imagine I want to change the colors of the “Science 4 All”, the sizes of top-right buttons or the alignment of the texts of the sidebar. I would have to modify every html file I’ll have made. Well, so far, I only have two dozen important pages… so it would be long and boring but I guess I could do it. But imagine if Google found out that people didn’t like the color of their links… There would be absolutely no way every HTML file could be changed. That’s how emerged the idea of the introduction of a second complementary language in 1996, called CSS.
CSS creates separate formatting
Precisely! The idea of CSS is to separate the formatting from the content. This way, marketers can make wonderfully beautiful templates of pages, while journalists can focus on feeding the website with quality content. And the reason why this idea has revolutionized the Web is that we can define one CSS file that will be used for several HTML contents. To do so, HTML files simply need to have one line saying that the formatting will be done with such CSS file.
All of those site-wide changes can now be made by simply modifying one CSS file! That’s exactly the reason why CSS is very efficient. Just like HTML, CSS has improved a lot since 1996. The latest versions are HTML 5 (which had been highly expected because it enables the inclusion of multimedia supports!) and CSS 3.
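As a small illustration (the file name, colors and fonts here are invented), a CSS file is just a list of rules saying how HTML elements should look:

/* style.css: shared by every page of the site */
a {
  color: #006633;          /* all links become green */
  text-decoration: none;   /* and lose their underline */
}
h1 {
  text-align: center;
  font-family: Georgia, serif;
}

Each HTML page only needs a single line in its <head>, something like <link rel="stylesheet" href="style.css">, to use these rules; change the file once and every page changes with it.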
And yes, it is still Tim Berners-Lee behind these languages. But he’s not doing it all by himself. He actually founded the World Wide Web Consortium, aka W3C, which he is at the head of. The W3C’s role is to define new versions of HTML and CSS, which are then implemented by browsers to render the web pages we all enjoy.
However, we wouldn’t get far if we had to write HTML files whenever we wanted to generate a new page. In particular, Google is not having people writing HTML pages to find out the results of your searches. That’s where comes more computer science to perform better user of the Internet.
PHP provides dynamic pages
Google uses a program that takes into account parameters such as the keywords you used for your search, but also information about the websites that fit your search (and much more information!), to automatically generate a web page that corresponds to what you were looking for. This generation is done with a program, which is implemented by the web master.
Yes. And as you may have guessed, this program is not running on your computer. As you send your request to your router, the router will transfer it, and the request will finally arrive to a computer in charge of managing the website you requested. This computer is called a web server. In theory, it could be any computer, but since it needs to be always up and running, and since it might have to deal with numerous requests, web masters usually use the computers of companies specialized in providing web servers, called web hosting companies. Science4All’s web host is OVH.
The web server then receives the request, and a program runs to generate the HTML page corresponding to your request. This HTML page is then sent back to you, via your router. Your browser finally receives it, and nicely displays the HTML page to you.
What’s more, because of the information you send to the web server, the returned page can be personalized for you. That’s why, you’ll get a different page when searching for www.facebook.com, depending on the session you open. Similarly, on Science4All, you can log in, and the returned page will tell you that you’ll have access to the “article editing” page.
Now, the web master needs a language in which to write this program for the web server.
The HTML language is very useful to say what a page will look like, but explaining how to customize each page can’t really be done in HTML. That’s why today’s web masters all use another language to generate HTML pages.
There are several languages which can be used to generate HTML pages. In fact, since a HTML page is simply a text message, any programming language can be used, including the most fundamental languages like C or C++. However, I would strongly recommend not to use those languages, as others are much more adapted for the purpose of creating websites, because of the frameworks constructed just for that.
A framework is a set of pre-defined useful functions. They can easily be re-used in any other program. They facilitate developers’ work as they no longer have to redefine any basic function, such as, displaying a title, designing an array, including sound… etc. Microsoft has developed, based on C#, the framework ASP .NET. Java introduced the framework Java Server Pages (JSP), which gives what’s known as JEE (and is very common for large institutional websites developed by professional web masters). Ruby defined Ruby on Rails. Python can be used with Django. Basically, for any fundamental language, an extension of functions has been created for a simpler web site development. And there’s also PHP…
Because it’s the one Science4All uses! There are also other major websites using PHP, including Facebook and Wikipedia. The main advantage of PHP is that it was made for websites. It’s also very easy to apprehend. Finally, and it’s very very important, it has a large community with plenty of ressources, so that you can easily find help whenever you need some!
PHP initially stood for Personal Home Page, since it was created in 1994 by Rasmus Lerdorf for his own website. However, for a long time, even though it was more popular because it was easier to use, it didn’t have the main advanced features of the other languages. In particular, it’s only in 2004 that PHP 5 included Object-Oriented Programming (which is too vast a topic for me to talk about in this article). This has led to the construction of plenty of great frameworks such as Symfony or CakePHP.
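Just to give a flavor of it, here is a tiny, hypothetical PHP script (the file and parameter names are invented). When the web server receives a request like hello.php?name=Ada, it runs the script, which prints a personalized HTML page that is then sent back to the browser:

<?php
// hello.php: the server runs this code and sends the printed HTML to the visitor
$name = isset($_GET['name']) ? htmlspecialchars($_GET['name']) : 'stranger';
echo "<!DOCTYPE html><html><body>";
echo "<h1>Hello, " . $name . "!</h1>";
echo "<p>This page was generated on " . date('Y-m-d') . ".</p>";
echo "</body></html>";
?>

Two visitors asking for the same URL with different parameters thus receive two different HTML pages, which is exactly what a static HTML file could not do.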
If I had read this article before starting Science4All, I might have used one of those frameworks… But since I was a complete beginner in the Web, I started with a Content Management System (CMS).
A CMS is a complete, pre-built website, to which you simply have to add your content. A Facebook fan page could almost be thought of as a CMS, although not a really customizable one. The most famous CMS are probably Joomla and WordPress. I use WordPress, which is supposedly suited to blogging, but can be highly adapted to most purposes. Indeed, plenty of plugins have been developed for WordPress and can easily be added, such as the Facebook box you see on the right, the Google Translate widget just above, and BuddyPress, which enables interactions between users through groups and forums. Without any previous knowledge, it took me a few days to get the website online thanks to WordPress.
WordPress is actually a set of files that you simply need to download on your computer or on your web server to get it working. Then, you can manage your website directly by using the generated pages. But, in fact, those files are mainly PHP files. Thus, you can modify them yourself to customize even more your website and get it doing what you want! That’s what I’m doing for Science4All. WordPress gives a great base to build the website, although it’s not as well structured as what a professional developer would do with a framework.
I’d rather not! PHP codes have got to be secret as they enable the management of any functionality of the website. Protecting them is important for safety reason. But I’ll get back to that in the next section.
What needs to be reminded from this section is that web server programs enable dynamic websites where the generated HTML page depends on information sent by the user. But that’s not all! These programs can use plenty of other information. Including those stored on the servers.
MySQL allows web storage
To go further in web developing, including adding forums, social networks and personal information, we need to store information on the Internet.
Yes we can do that. And that’s what’s being done for certain files such as pictures. But there are two problems with that. First, the structure of the information needs to be dealt with by hand, and this can get very complicated. Second, if badly stored, the information will take a lot of space. That’s why databases have been introduced, first for company information storage on their own computers, then for websites.
There are several softwares specialized in database management. The information is well structured. Its size is also highly optimized, which enables saving space on the servers. The best-known database softwares are Oracle and Microsoft SQL Server, which existed even before the Internet. Both are efficient, but both are also expensive (although training versions are free). However, there are also PostgreSQL and, of course, MySQL, invented in 1995, which Science4All uses.
Well, in fact, it came with WordPress. So I didn’t really choose. But I have to say that the combination “PHP + MySQL” is very common. Thus, once again, a large community on the Internet will be able to provide help. And, this combination has proved to work efficiently.
Yes! In practice, the database is placed on a different server than the web server. This enables web servers to really serve their purpose of quickly running web programs, while database servers can focus more on having large storage and on quickly answering database requests. Now, to access the database, the web server programs use special codes. These codes depend on the combination of your program language and the database software. That’s why this combination is important.
In WordPress, certain functions have been created to easily use the database without requiring the knowledge of actual database functions. That’s why I barely know how databases actually work. Because I don’t really need to.
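For the curious, here is roughly what such a request looks like when written by hand in PHP with the PDO extension; the database name, table and columns are purely hypothetical:

<?php
// Connect to the MySQL server (host, database and credentials are placeholders)
$db = new PDO('mysql:host=localhost;dbname=science4all', 'username', 'password');

// Ask the database for the ten most recent articles
$query = $db->prepare('SELECT title, published_on FROM articles ORDER BY published_on DESC LIMIT 10');
$query->execute();

// Turn each returned row into a line of HTML
foreach ($query->fetchAll(PDO::FETCH_ASSOC) as $row) {
    echo '<li>' . htmlspecialchars($row['title']) . '</li>';
}
?>

WordPress hides all of this behind its own functions, but underneath, requests very much like this one are being sent to the MySQL server.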
Well, we are far from being done! One major problem of web pages is that they require a connection to the web server (and most of time to the database server) to be generated. This can take a while, not because the web program is slow, nor because answering database queries is long, but mainly because connecting to the web servers and transferring information can take a few seconds. A few seconds are not that much, but as speed is more and more important in today’s world, we’d like quicker interaction with web pages…
This is because none of the other languages is really suitable for data transfer. XML, which uses tags, is the most common language specially designed for data transfer, because of the simplicity and efficiency of its structure. XML files can be found elsewhere too, including for storing company data. They are also a great way to store the parameters of a program. Computer scientists (should) hate inserting parameters in their codes, because they then have to get back into the code to modify parameters. Inserting parameters in a separate, well-structured XML file makes things so much simpler. XML has plenty of derived languages based on the same idea, including HTML, as well as your Microsoft DOCX, XLSX or PPTX files, which are compressions of XML files.
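As an illustration, a small, made-up XML file storing the parameters of a website could look like this; the tags and attributes describe the structure of the data, so that both humans and programs can read it easily:

<?xml version="1.0" encoding="UTF-8"?>
<settings>
  <site name="Science4All" language="en"/>
  <articles perPage="10" showComments="true"/>
  <theme color="#2a7ae2" font="Georgia"/>
</settings>

Changing a parameter then only means editing this file, without touching the program’s code at all.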
Here is a figure that recapitulates all the languages we have seen.
Let’s sum up
The Web is a new exciting world with an impressive potential. We’re still at the beginning of it. Thus, it’s crucial to know how it works to make good use of it. I hope this article has given you an overview of what’s being done and a hint at what can still be done. Unfortunately, things on the Web are very complex and it’s a difficult new world to apprehend. Only talking about its different languages is hard. Yet, this is just the tip of the iceberg, as many more important matters have not been discussed here, such as protocols of communication, routers, data centers, cloud computing, domain name servers, firewalls, censorship… My knowledge in these other fields is quite limited and I’d love it if someone could explain each of these simply to us all!
Human Anatomy and Physiology
Human anatomy and physiology is the study of two disciplines rolled into one. The first discipline is anatomy and the second is physiology.
Human anatomy is the study of body structure and organization.
Anatomy can also be further broken down into either gross anatomy or microanatomy.
Gross anatomy refers to the study and dissection of the human body without the aid of a microscope or magnifying lens.
Microscopic anatomy refers to the study of human tissues and cells utilizing a magnifying lens or microscope.
Physiology is the study of how the body functions and the functions of its parts. This includes the observation of the human body, experiments conducted on the body, and/or the use of special equipment or materials. The study of the human body at the cellular and chemical levels is particularly of interest here.
Learning Human Anatomy
It's not easy to learn human anatomy. We've located a home study course that's custom made to help you learn anatomy by using visual material that's really helpful.
With over 3,000 pages (wow) of physiology and anatomy materials, this guide has proven to be a great aid for anatomy students to help them learn this material and pass their exams.
Human Anatomy & Physiology Course
When studying human anatomy and physiology you will be viewing the human body on six different levels:
Organism Level: This is the highest level of organization. This level looks at the human body as a whole and includes all other levels with in it.
Organ System Level: This level breaks down the human body into eleven functional groups called organ systems. These organ systems are the integumentary, skeletal, muscular, nervous, endocrine, cardiovascular, lymphatic, respiratory, digestive, urinary, and reproductive systems.
Organ Level: This level looks at the tissues that compose each organ and each organ's specific function.
Tissue Level: This level looks at each tissue in the body microscopically to study the cell types of each tissue.
Cellular Level: This level looks at the composition of each cell and the functions of each cell part.
Chemical Level: This level looks at chemical composition, atoms and molecules, and how chemicals affect the functions of the human body.
When learning anatomy and physiology you will study each of the eleven organ systems at all six levels listed above. In addition to this you will also be introduced to a number of medical terms.
It is helpful to have a prior understanding of medical terminology when starting out. Knowing locations on the body and directional terms is also recommended.
If learning medical terminology is not an option it is helpful to at least be familiar with the most common anatomy terms. Taking the time to learn these first may make learning human anatomy and physiology less overwhelming.
Anatomy & Medical Coding
What does human anatomy and physiology have to do with medical billing and coding?
When interpreting a medical record the coder must be able to read the physician's dictation and understand exactly what was provided to the patient.
Dictation may include medical terminology, surgical approaches, organ names, and details of a single procedure.
Without a general understanding of gross anatomy a medical coder simply cannot select a correct code.
At times, gross anatomy cannot be understood without understanding the physiology first.
And although medical billing is not directly involved, the actions of the medical coder do affect the billing processes.
For example: the surgeon states he made use of a pediatric endoscope by passing it through the oropharynx, past the gastroesophageal junction, and into the stomach proper to the duodenum. He also states a biopsy was taken before retraction.
If a medical coder were to look up each of the individual terms stated above in the CPT book, they would be led to the following codes:
42800 - Biopsy; Oropharynx
43600 - Biopsy of stomach; by capsule, tube, peroral (one or more specimens)
44010 - Duodenotomy; for exploration, biopsy(s), or foreign body removal
**No codes found for the term gastroesophageal junction**
Had the same coder known their human anatomy and physiology, they would know that the oropharynx (mouth and pharynx), the gastroesophageal junction (where the esophagus meets the stomach), the stomach proper (a section of the stomach), and the duodenum (the first section of the small intestine, just past the sphincter where the stomach and small intestine join) together form one long passageway that is commonly traveled by a video camera for exploration purposes. They would also know that this passageway, when traveled in this fashion, is abbreviated E.G.D. (esophagogastroduodenoscopy).
By looking up the single term esophagogastroduodenoscopy, the coder would then be led to the one correct code:
43239 - Upper gastrointestinal endoscopy including esophagus, stomach, and either the duodenum and/or jejunum as appropriate; with biopsy, single or multiple
Anatomy & Coding
Human anatomy and physiology is important to medical coders for many reasons including proper ICD-9 and CPT code selection, chart and dictation interpretation, and physician interaction.
Knowledge in this area will not only make you a more accurate and knowledgeable coder but may also give you a competitive edge in the job market among other coders.
Medical terminology and anatomy are closely related.
Usually when learning one the other is inherently learned.
It is suggested that those starting out begin by learning basic medical terminology and then progress to human anatomy and physiology.
The CPT book holds a wealth of anatomical information.
In the surgery section each organ system has an anatomical illustration located in its guidelines.
Each illustration depicts major organs and structures in that system.
There are also many smaller diagrams depicting common procedures throughout the code book.
Learning medical coding can seem overwhelming at times due to the large amount of information.
By learning medical terminology and gross anatomy first individuals find medical coding easier to understand and are more accurate in their code selection.
Depression is prevalent and costly, but despite effective treatments, is often untreated. Recent efforts to improve depression care have focused on primary care settings. Disparities in treatment initiation for depression have been reported, with fewer minority and older individuals starting treatment.
To describe patient characteristics associated with depression treatment initiation and treatment choice (antidepressant medications or psychotherapy) among patients newly diagnosed with depression in primary care settings.
A retrospective observational design was used to analyze electronic health record data.
A total of 241,251 adults newly diagnosed with depression in primary care settings among five health care systems from 2010 to 2013.
ICD-9 codes for depression, following a 365-day period with no depression diagnosis or treatment, were used to identify new depression episodes. Treatment initiation was defined as a completed psychotherapy visit or a filled prescription for antidepressant medication within 90 days of diagnosis. Depression severity was measured with Patient Health Questionnaire (PHQ-9) scores on the day of diagnosis.
Overall, 35.7% of patients with newly diagnosed depression initiated treatment. The odds of treatment initiation among Asians, non-Hispanic blacks, and Hispanics were at least 30% lower than among non-Hispanic whites, controlling for all other variables. The odds of patients aged ≥ 60 years starting treatment were half those of patients age 44 years and under. Treatment initiation increased with depression severity, but was only 53% among patients with a PHQ-9 score of ≥ 10. Among minority patients, psychotherapy was initiated significantly more often than medication.
Screening for depression in primary care is a positive step towards improving detection, treatment, and outcomes for depression. However, study results indicate that treatment initiation remains suboptimal, and disparities persist. A better understanding of patient factors, and particularly system-level factors, that influence treatment initiation is needed to inform efforts by heath care systems to improve depression treatment engagement and to reduce disparities.
In the United States, more than 16 million adults (6.7%) experience an episode of major depression each year.1 Depression is among the costliest of public health conditions in the US, with an estimated annual cost of $210 billion, due to medical care and lost productivity.2 The reported prevalence of depression is higher among women, younger adults, and non-Hispanic whites (NHWs) than among men, older adults, and minority populations.1, 3 Despite the wide availability of effective treatments for depression, it is estimated that more than half of those with depression do not initiate treatment.2, 4
Some patient populations appear to be particularly vulnerable to lack of treatment for depression. Studies have reported that treatment initiation among racial and ethnic minority populations is lower than in NHW populations.5–7 In addition, among patients who do start treatment, the use of antidepressant medications (AD) is reportedly higher in NHW populations than in minority populations.8–11 These differences in treatment engagement and treatment choice likely reflect a combination of patient preference and provider and health system factors, although the precise mechanisms underlying these differences are not well understood.
Over the past decade, there has been increasing focus on the role of primary care in the detection and treatment of depression. Many people with depression, even those who have died by suicide, have never had a mental health diagnosis or received treatment.12, 13 While behavioral health services may be underutilized, most people do seek primary care; therefore, this care setting has been identified as presenting an opportunity to detect and treat depression. The US Preventive Services Task Force (USPSTF) recommendations include depression screening in the general adult population.14 The National Committee for Quality Assurance (NCQA) currently includes a measure for AD adherence and is planning measures for depression screening and follow-up.15, 16 Efforts to improve the quality of depression treatment in primary care settings include increased use of brief screening tools, such as the Patient Health Questionnaire (PHQ-9),17 and implementation of various forms of collaborative and integrated care models.18–20 These initiatives have firmly established depression screening and treatment as essential components of primary care.16, 21–23
This study describes patient characteristics associated with depression treatment initiation and specific treatment choice among a sample of over 240,000 patients who received a new diagnosis of depression in primary care settings across five large, integrated health care systems between 2010 and 2013. These health care systems are members of the Mental Health Research Network (MHRN), a consortium of 13 health care systems providing primary and specialty care to a combined patient population of over 12 million.24 Based on a large and diverse patient population, this study describes current depression treatment initiation patterns among multiple US health care systems striving to enhance depression care in primary care settings in response to national initiatives.
Setting: Health Care Systems
Study data were obtained from five US health care systems with diversity in size, geographic location, and patient populations: Kaiser Permanente regions of Southern California, Washington, Colorado, and Hawaii, and HealthPartners in Minnesota. At the time of analysis, these systems provided both private, primarily commercial, and subsidized public insurance coverage and health care to over 5.1 million people in five states. Electronic health records (EHR), insurance claims, and other data are organized in a virtual data warehouse (VDW) to facilitate population-based research across all systems.24, 25 Protected health information remains at each site, but the VDW uses common data definitions and formats to ensure equivalent de-identified data for analysis. The institutional review boards at each health care system approved this study.
During the study period (2010–2013), these health care systems were using the Patient Health Questionnaire (PHQ-9), a brief depression screening tool,17 in some capacity in primary care settings. All five systems had implemented evidence-based guidelines for diagnosis and treatment of depression in primary and specialty care clinics, four systems were monitoring and reporting NCQA measures of depression care quality, three systems were following recommendations for the use of standard outcomes assessments for patients receiving depression treatment, and three systems had integrated mental health providers in primary care clinics, during all or part of the study period.
The study included adult patients (age ≥ 18) with a new diagnosis of depression made in a primary care setting in one of the five study sites between 2010 and 2013 (N = 241,251). Using VDW information, new episodes of depression were defined by the presence of an ICD-9 code for depression following a 365-day period without evidence of a depression diagnosis or treatment (either psychotherapy or AD). Cases were followed for 90 days after the diagnosis (index) date to look for the initiation of AD medication or psychotherapy treatment. PHQ-9 scores measured on the index date were available for 27,347 patients, 11% of the sample. Patients who were disenrolled from the health system within 90 days after diagnosis were excluded (n = 6207, 2.6%).
VDW data was the source for demographic variables, including age, gender, and race/ethnicity. The Charlson index was used as a comorbidity index, using diagnosis codes appearing in the EHR during the year prior to the index date.26 Neighborhood income and education were imputed using geocoded patient addresses and census data at the block group level. VDW data were assessed for evidence of prior AD use, the receipt of mental health specialty care, and prior hospitalization with a mental health diagnosis during the 5-year period prior to the index date. Treatment initiation was measured as a filled prescription for any AD or at least one completed psychotherapy visit within 90 days after the index date. A psychotherapy visit was defined as any visit greater than 30-min duration to a specialty mental health provider with a Current Procedural Terminology (CPT) code indicating either initial evaluation or individual psychotherapy. Detailed specifications for antidepressant prescription fills and psychotherapy visits are available at: https://github.com/MHResearchNetwork/MHRN-Central (MHRN_psychotherapyList.xls, mhrn2_ndc2016a.zip). Where available, PHQ-9 scores were used in analyses as a dichotomous measure of depression severity, with a score of ≥ 10 indicative of probable major depression.17
Logistic regression models were first used to estimate the odds of initiating treatment (AD and/or psychotherapy) by patient characteristics. Among the subgroup of patients who did initiate treatment, logistic regression models were then used to estimate the odds of initiating psychotherapy (as opposed to AD) by patient characteristics. Patients who initiated both psychotherapy and AD were included in the models for treatment initiation but were excluded from the models for type of treatment initiated. All models included a variable for health care system. Wald tests were used to calculate the p values for the association between each variable and model outcomes (treatment initiated or not, and psychotherapy [vs. AD] initiated). The models were then used for analyses of the subgroup of patients who had PHQ-9 scores on the index date. All analyses were conducted using SAS version 9.4 software.27
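For readers unfamiliar with this type of analysis, the short sketch below fits an analogous logistic regression in Python with the statsmodels library; the variable names are invented for illustration, and this is not the authors' SAS code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# one row per new depression episode; 'initiated' = 1 if treatment was started within 90 days
df = pd.read_csv("depression_episodes.csv")

model = smf.logit(
    "initiated ~ C(age_group) + C(race_ethnicity) + C(sex)"
    " + C(charlson_index) + C(site) + prior_mh_specialty + prior_ad_use",
    data=df,
).fit()

# adjusted odds ratios and 95% confidence intervals
print(np.exp(model.params))
print(np.exp(model.conf_int()))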
Of the 241,251 patients who received a new diagnosis of depression, 48% were NHW, 26% were Hispanic, 7% were non-Hispanic black, and 5% were Asian. Nearly 69% were female, and about 66% were under the age of 60 (Table 1). The subgroup with a PHQ-9 score recorded at the time of diagnosis included 27,347 patients (11%). This screened group had a larger proportion of NHW (62%) and younger patients (73% under age 60). Race/ethnicity was unknown or “other” for 12% of the cohort and 5% of the subgroup with PHQ-9 scores.
Initiation of Treatment
Overall, of the 241,251 new episodes of diagnosed depression during this period, 86,115 (35.7%) initiated AD medication and/or psychotherapy (Table 1). Age, race/ethnicity, and health care system were the strongest predictors of treatment initiation. The adjusted odds ratio (aOR) for starting treatment declined with increasing age, with the aOR for those aged 60 and older less than half that for those age 44 years and under. In comparison to NHWs, all racial and ethnic minority groups had significantly lower odds of starting treatment: aORs ranged from 0.65 for Asians to 0.83 for Native Americans, with non-Hispanic blacks, Hispanics, and Native Hawaiians/Pacific Islanders in the 0.67–0.72 range. Men had slightly higher odds than women (aOR 1.07, CI 1.05–1.09). The aORs for starting treatment ranged from 0.66 to 1.03 across the five study sites.
Increasing levels of clinical comorbidity were associated with higher odds of treatment initiation. Prior use of specialty mental health care (aOR 0.86, CI 0.84–0.88) and prior hospitalization with a mental health diagnosis (aOR 0.78, CI 0.71–0.78) were associated with lower odds of starting treatment for this new depression episode. Prior AD use was not associated with starting treatment. Higher education and income levels were associated with slightly increased odds of starting treatment.
Results for the 27,347 cases with PHQ-9 scores simultaneously recorded on the index date are shown in Table 1. Overall, 45% of these patients initiated treatment. In this model, PHQ-9 score, study site, race/ethnicity and age were the strongest predictors of treatment initiation. As expected, the odds of patients with high PHQ-9 scores (≥10) starting treatment were 3.34 times higher than those with low PHQ-9 scores. Fifty-three percent of patients with PHQ-9 scores ≥ 10 initiated treatment. Similar to the total cohort, the aORs for treatment initiation among this subsample of patients were lower for racial/ethnic minority groups than for NHWs. The lowest odds were among Hispanics (0.60, CI 0.56–0.65), Asians (0.65, CI 0.58–0.72), and non-Hispanic blacks (0.67 CI 0.61–0.75). The odds of initiating treatment again decreased with increasing age, but the differences were smaller in this subsample of patients. Adjusted odds ratios of treatment initiation ranged from 0.80 to 1.98 across the five study sites.
Initiation of Psychotherapy Compared to Antidepressant Medications
More than 80% of patients who initiated treatment started an AD. As shown in Table 2, the strongest predictors of initiating psychotherapy rather than medication were age, race/ethnicity, prior AD use, and health care system. Prior use of mental health specialty care, fewer medical comorbidities, and male sex were associated with increased odds of initiating psychotherapy.
The proportion of patients initiating psychotherapy decreased with increasing age, with 25% of 18–29-year-olds starting psychotherapy compared to 7% of patients aged 75 and older. While fewer patients of racial minorities initiated treatment, among those who did, all racial and ethnic minority groups had higher proportions of psychotherapy initiation than NHWs. Thirty percent of non-Hispanic black patients started psychotherapy, in comparison to 14% of NHWs, 24% of Hispanics, 24% of Asians, and 21% of Native Hawaiians/Pacific Islanders. Men had higher odds of starting psychotherapy than women.
Prior use of AD medications was associated with lower odds of starting psychotherapy (aOR 0.46, CI 0.44–0.49), but patients who had prior use of specialty mental health care had higher odds of starting psychotherapy. The aORs for initiating psychotherapy varied from 0.54 to 1.39 across the five study sites.
Results for the 9871 patients who initiated treatment and also had a PHQ-9 score are shown in Table 2. Health care system, previous use of AD medications, race/ethnicity and age were the strongest predictors of psychotherapy initiation. The aORs for psychotherapy initiation varied from 0.35 to 1.94 across study sites. The aOR of initiating psychotherapy was 0.39 (CI 0.34–0.45) among patients who had used AD medications in the past. The aOR for psychotherapy initiation remained higher for all racial/ethnic groups in comparison to NHWs. Initiation of psychotherapy decreased with increasing age. Patients with high PHQ scores (≥ 10) had 30% lower odds of initiating psychotherapy than patients with low PHQ scores. The odds of starting psychotherapy among men in this group were 23% higher than those for women.
The results of this study highlight persistent suboptimal levels of treatment initiation for depression, as well as age and racial/ethnic disparities in the initiation of treatment, despite initiatives to enhance depression care in primary care. The study also highlights variations among health care systems in the initiation of treatment for depression, underscoring the importance of better understanding the effectiveness of depression care integration mechanisms and processes. Importantly, the study results reflect recent practice patterns among multiple health care systems, and are based on a significantly larger (n = 241,251) and more diverse patient population than those included in previous studies. The results are strengthened by the inclusion of important covariates (e.g., socioeconomic indicators, prior mental health service use) by leveraging uniform EHR data.
Depression Treatment Initiation in Primary Care
The observed proportion of patients initiating treatment for new episodes of depression diagnosed in primary care was low—36%. This finding is consistent with previous reports.28–31 Reasons cited in previous studies for low levels of treatment initiation—stigma, patient resistance, insufficient training or discomfort of primary care providers, and access barriers, particularly for behavioral health services—likely contributed to the low levels of treatment initiation observed in this study.32, 33
Barriers to treatment initiation specific to primary care settings were also likely contributors. These include greater aversion to depression treatment among primary care patients than behavioral health patients, competing demands, time constraints, and different priorities for patients and providers.34, 35
Patient Characteristics, Treatment Initiation, and Treatment Choice
The results demonstrate significant, persistent differences in depression treatment initiation associated with race/ethnicity. The odds of Asians, non-Hispanic blacks, and Hispanics initiating treatment were at least 30% lower than those for NHWs, after controlling for all other variables. This finding is consistent with previous evidence of racial and ethnic disparities in depression treatment initiation.6, 7, 9, 36
This study also provides evidence of a preference among minority patients for psychotherapy over AD treatment.8–11 Access to one’s preferred choice of depression treatment has been found to enhance treatment initiation, adherence, and outcomes.37, 38
Persistent significant age-related differences were found in treatment initiation. The odds of patients aged 60 years and older starting treatment were half those of patients age 44 years and under. With a rapidly growing aging population, the importance of addressing the mental health needs of this group will increase. Previously reported depression treatment gaps for older patients are further supported by this study.39, 40 Lower treatment initiation among older patients has been attributed to a common misconception of depression as a natural part of the aging process, a generational culture of personal responsibility, attribution of depression to non-medical causes, and stigma.39, 41, 42 Resistance to AD treatment has also been identified in this population.43
Comorbid medical conditions among older patients may compete for the attention of primary care providers and potentially mask or overlap with depression symptoms.39 The results of this study show slightly higher odds of initiating treatment as comorbidities increase. Improved medical disease and depression outcomes have been reported with collaborative care approaches for patients with depression and common comorbidities, particularly diabetes and cardiovascular disease.18, 44–46
Although depression is more common among women than men (8.2% vs. 4.6%),1 the study results revealed a slightly higher proportion of men initiating treatment. In addition, the odds of starting psychotherapy were 18% higher for men than for women. These gender-related differences in treatment initiation are a particularly important and positive finding, given that men account for more than three-quarters of suicides among middle-aged adults.47
Among those who started treatment, psychotherapy was initiated by 17%, and 83% started AD medications. This high proportion of AD use may reflect the large proportion of NHWs in the study population (47%), greater familiarity with AD among primary care providers, a desire to rapidly address a newly identified condition, and possibly access barriers to behavioral health specialty care.
Health Care System Factors
Large differences in treatment initiation were observed across the five participating health care systems, with aORs ranging from 0.66 to 1.03. While all sites had taken steps to enhance depression care, including the use of the PHQ-9 depression scale in some manner in primary care, the specific features and full scope of efforts to improve the quality of depression care in primary care varied across sites and within sites over time. For example, mechanisms for psychotherapy referral might have ranged from an instant consultation with a behavioral health specialist co-located in primary care at the time of the visit, to simply giving the patient a phone number. While it was not possible in this study to retrospectively reconstruct patient-level exposure to health care system initiatives to enhance depression care, the study results highlight the importance of doing so in the future. The proportion of new diagnoses with a concurrent PHQ-9 score (an important feature of care integration) ranged from 5% to 33% across sites (not shown). Higher treatment initiation among the screened group of patients could reflect a more focused approach to screening versus a general screening approach.
Study limitations include the omission of any brief counseling provided by primary care physicians upon diagnosis. An important limitation is that we have little information about the reasons for failure to initiate treatment, or the relative contribution of patient, provider, and system factors. In addition, since depression is common in the study population, odds ratios may slightly overestimate the associations with predictive factors.48 Finally, while all study sites were using the PHQ-9, this study lacks detailed information about the specific conditions under which the tool was utilized, and particularly methods of care integration at the study sites during the study period.
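To make the point about odds ratios concrete, the toy calculation below uses made-up initiation proportions (illustrative values only, not data from this study) to show how an odds ratio drifts upward from the corresponding relative risk when the outcome is common, as treatment initiation is here.

```python
# Hypothetical initiation proportions in two comparison groups (illustrative only)
p1, p0 = 0.30, 0.20

relative_risk = p1 / p0
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))

print(round(relative_risk, 2))  # 1.5
print(round(odds_ratio, 2))     # 1.71 -- exceeds the relative risk because the outcome is common
```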
Efforts to integrate behavioral health care within primary care settings have been under way and evolving across the United States for more than 15 years. The features of collaborative care models vary widely, but there is evidence that these models can be effective in improving depression management and outcomes19, 20, 49, 50 and that they are cost-effective.51–53 There is also evidence that collaborative care models can be effective for particular patient groups, including younger54 and older populations51, 55–57 and racial minorities.58–62 Some models of care have been reported to reduce racial/ethnic disparities.63 While these models may not be universally effective,62, 64, 65 regulatory requirements and the desire to better meet patients' mental health care needs will lead to further implementation efforts.
The results of this study provide evidence that a substantial number of patients with diagnosed depression do not receive treatment, even in leading health care organizations. Ongoing efforts to address this problem, coupled with more thorough and sophisticated evaluation methods, will enhance our understanding of the mechanisms by which various models succeed in improving treatment engagement, incorporating patient preferences, improving adherence and outcomes, and reducing disparities.
References

National Institute of Mental Health: Major Depression Among Adults. https://www.nimh.nih.gov/health/statistics/prevalence/major-depression-among-adults.shtml. Accessed 21 Dec 2017.
National Network of Depression Centers: Facts. http://www.nndc.org/the-facts/. Accessed 21 Dec 2017.
Coleman KJ, Stewart C, Waitzfelder BE, et al. Racial-Ethnic Differences in Psychiatric Diagnoses and Treatment Across 11 Health Care Systems in the Mental Health Research Network. Psychiatr Serv (Washington, DC). 2016;67(7):749-757.
Simon GE. Evidence review: efficacy and effectiveness of antidepressant treatment in primary care. Gen Hosp Psychiatry 2002;24(4):213-224.
Sclar DA, Robison LM, Schmidt JM, Bowen KA, Castillo LV, Oganov AM. Diagnosis of depression and use of antidepressant pharmacotherapy among adults in the United States: does a disparity persist by ethnicity/race? Clin Drug Investig 2012;32(2):139-144.
Alegria M, Chatterji P, Wells K, et al. Disparity in depression treatment among racial and ethnic minority populations in the United States. Psychiatr Serv (Washington, DC). 2008;59(11):1264-1272.
Virnig B, Huang Z, Lurie N, Musgrave D, McBean AM, Dowd B. Does Medicare managed care provide equal treatment for mental illness across races? Arch Gen Psychiatry 2004;61(2):201-205.
Jung K, Lim D, Shi Y. Racial-ethnic disparities in use of antidepressants in private coverage: implications for the Affordable Care Act. Psychiatr Serv (Washington, DC). 2014;65(9):1140-1146.
Miranda J, Cooper LA. Disparities in care for depression among primary care patients. J Gen Intern Med 2004;19(2):120-126.
Quinones AR, Thielke SM, Beaver KA, Trivedi RB, Williams EC, Fan VS. Racial and ethnic differences in receipt of antidepressants and psychotherapy by veterans with chronic depression. Psychiatr Serv (Washington, DC). 2014;65(2):193-200.
Givens JL, Houston TK, Van Voorhees BW, Ford DE, Cooper LA. Ethnicity and preferences for depression treatment. Gen Hosp Psychiatry 2007;29(3):182-191.
Ahmedani BK, Simon GE, Stewart C, et al. Health care contacts in the year before suicide death. J Gen Intern Med 2014;29(6):870-877.
Ahmedani BK, Stewart C, Simon GE, et al. Racial/Ethnic differences in health care visits made before suicide attempt across the United States. Med Care 2015;53(5):430-435.
U.S. Preventive Services Task Force. Screening for depression: recommendations and rationale. Ann Intern Med. 2002;136(10):760-764.
NCQA HEDIS Depression Measures Specified for Electronic Clinical Data Systems. http://www.ncqa.org/hedis-quality-measurement/hedis-learning-collaborative/hedis-depression-measures. Accessed 21 Dec 2017.
Anderson B. HEDIS antidepressant medication management measures and performance-based measures: an opportunity for improvement in depression care. Am J Manag Care 2007;13(4 Suppl):S98-102.
Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med 2001;16(9):606-613.
Katon WJ, Lin EH, Von Korff M, et al. Collaborative care for patients with depression and chronic illnesses. N Engl J Med 2010;363(27):2611-2620.
Gilbody S, Bower P, Fletcher J, Richards D, Sutton AJ. Collaborative care for depression: a cumulative meta-analysis and review of longer-term outcomes. Arch Intern Med 2006;166(21):2314-2321.
Hedrick SC, Chaney EF, Felker B, et al. Effectiveness of collaborative care depression treatment in Veterans' Affairs primary care. J Gen Intern Med 2003;18(1):9-16.
Siu AL, Bibbins-Domingo K, Grossman DC, et al. Screening for Depression in Adults: US Preventive Services Task Force Recommendation Statement. JAMA 2016;315(4):380-387.
Robinson RL, Long SR, Chang S, et al. Higher costs and therapeutic factors associated with adherence to NCQA HEDIS antidepressant medication management measures: analysis of administrative claims. J Manag Care Pharm 2006;12(1):43-54.
Pincus HA, Houtsinger JK, Bachman J, Keyser D. Depression in primary care: bringing behavioral health care into the mainstream. Health Aff (Project Hope) 2005;24(1):271-276.
Mental Health Research Network. http://hcsrn.org/mhrn/en/. Accessed 21 Dec 2017.
Ross TR, Ng D, Brown JS, et al. The HMO Research Network Virtual Data Warehouse: A Public Data Model to Support Collaboration. EGEMS (Washington, DC). 2014;2(1):1049.
Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 1987;40(5):373-383.
SAS Institute Inc., SAS Software 9.4, Cary, N.C.
Thornicroft G, Chatterji S, Evans-Lacko S, et al. Undertreatment of people with major depressive disorder in 21 countries. Br J Psychiatry 2016.
Hirschfeld RM, Keller MB, Panico S, et al. The National Depressive and Manic-Depressive Association consensus statement on the undertreatment of depression. JAMA 1997;277(4):333-340.
Oquendo MA, Malone KM, Ellis SP, Sackeim HA, Mann JJ. Inadequacy of antidepressant treatment for patients with major depression who are at risk for suicidal behavior. Am J Psychiatry 1999;156(2):190-194.
Pence BW, O'Donnell JK, Gaynes BN. The depression treatment cascade in primary care: a public health perspective. Curr Psychiatry Rep 2012;14(4):328-335.
Campbell DG, Bonner LM, Bolkan CR, et al. Stigma Predicts Treatment Preferences and Care Engagement Among Veterans Affairs Primary Care Patients with Depression. Annals Behav Med 2016;50(4):533-544.
Nutting PA, Rost K, Dickinson M, et al. Barriers to initiating depression treatment in primary care practice. J Gen Intern Med 2002;17(2):103-111.
Klinkman MS. Competing demands in psychosocial care. A model for the identification and treatment of depressive disorders in primary care. Gen Hosp Psychiatry 1997;19(2):98-111.
Van Voorhees BW, Cooper LA, Rost KM, et al. Primary care patients with depression are less accepting of treatment than those seen by mental health specialists. J Gen Intern Med 2003;18(12):991-1000.
Creedon TB, Cook BL. Access To Mental Health Care Increased But Not For Substance Use, While Disparities Remain. Health Aff (Project Hope) 2016;35(6):1017-1021.
Raue PJ, Schulberg HC, Heo M, Klimstra S, Bruce ML. Patients' depression treatment preferences and initiation, adherence, and outcome: a randomized primary care study. Psychiatr Serv (Washington, DC). 2009;60(3):337-343.
Lin P, Campbell DG, Chaney EF, et al. The influence of patient preference on depression treatment in primary care. Annals Behav Med 2005;30(2):164-173.
Charney DS, Reynolds CF, 3rd, Lewis L, et al. Depression and Bipolar Support Alliance consensus statement on the unmet needs in diagnosis and treatment of mood disorders in late life. Arch Gen Psychiatry 2003;60(7):664-672.
Lebowitz BD, Pearson JL, Schneider LS, et al. Diagnosis and treatment of depression in late life. Consensus statement update. JAMA 1997;278(14):1186-1190.
Switzer JF, Wittink MN, Karsch BB, Barg FK. "Pull yourself up by your bootstraps": a response to depression in older adults. Qual Health Res 2006;16(9):1207-1216.
Wittink MN, Givens JL, Knott KA, Coyne JC, Barg FK. Negotiating depression treatment with older adults: primary care providers' perspectives. J Ment Health (Abingdon, England). 2011;20(5):429-437.
Givens JL, Datto CJ, Ruckdeschel K, et al. Older patients' aversion to antidepressants. A qualitative study. J Gen Intern Med 2006;21(2):146-151.
Coventry P, Lovell K, Dickens C, et al. Integrated primary care for patients with mental and physical multimorbidity: cluster randomised controlled trial of collaborative care for patients with depression comorbid with diabetes or cardiovascular disease. BMJ (Clin Res ed). 2015;350:h638.
Huffman JC, Mastromauro CA, Beach SR, et al. Collaborative care for depression and anxiety disorders in patients with recent cardiac events: the Management of Sadness and Anxiety in Cardiology (MOSAIC) randomized clinical trial. JAMA Intern Med 2014;174(6):927-935.
Stewart JC, Perkins AJ, Callahan CM. Effect of collaborative care for depression on risk of cardiovascular events: data from the IMPACT randomized controlled trial. Psychosom Med 2014;76(1):29-37.
Suicide among adults aged 35-64 years--United States, 1999-2010. MMWR. 2013;62(17):321-325.
McNutt LA, Wu C, Xue X, Hafner JP. Estimating the relative risk in cohort studies and clinical trials of common outcomes. Am J Epidemiol 2003;157(10):940-943.
Garrison GM, Angstman KB, O'Connor SS, Williams MD, Lineberry TW. Time to Remission for Depression with Collaborative Care Management (CCM) in Primary Care. J Am Board Fam Med 2016;29(1):10-17.
Archer J, Bower P, Gilbody S, et al. Collaborative care for depression and anxiety problems. Cochrane Database Syst Rev. 2012;10:Cd006525.
Unutzer J, Katon WJ, Fan MY, et al. Long-term cost effects of collaborative care for late-life depression. Am J Manag Care 2008;14(2):95-100.
Liu CF, Hedrick SC, Chaney EF, et al. Cost-effectiveness of collaborative care for depression in a primary care veteran population. Psychiatr Serv (Washington, DC). 2003;54(5):698-704.
Simon GE, Von Korff M, Ludman EJ, et al. Cost-effectiveness of a program to prevent depression relapse in primary care. Med Care 2002;40(10):941-950.
Richardson LP, Ludman E, McCauley E, et al. Collaborative care for adolescents with depression in primary care: a randomized clinical trial. JAMA 2014;312(8):809-816.
Chang-Quan H, Bi-Rong D, Zhen-Chan L, Yuan Z, Yu-Sheng P, Qing-Xiu L. Collaborative care interventions for depression in the elderly: a systematic review of randomized controlled trials. J Investig Med 2009;57(2):446-455.
Unutzer J, Katon W, Callahan CM, et al. Collaborative care management of late-life depression in the primary care setting: a randomized controlled trial. JAMA 2002;288(22):2836-2845.
Hegel MT, Unutzer J, Tang L, et al. Impact of comorbid panic and posttraumatic stress disorder on outcomes of collaborative care for late-life depression in primary care. Am J Geriatr Psychiatry 2005;13(1):48-58.
Bauer AM, Azzone V, Alexander L, Goldman HH, Unutzer J, Frank RG. Are patient characteristics associated with quality of depression care and outcomes in collaborative care programs for depression? Gen Hosp Psychiatry 2012;34(1):1-8.
Ell K, Katon W, Xie B, et al. Collaborative care management of major depression among low-income, predominantly Hispanic subjects with diabetes: a randomized controlled trial. Diabetes Care 2010;33(4):706-713.
Ratzliff AD, Ni K, Chan YF, Park M, Unutzer J. A collaborative care approach to depression treatment for Asian Americans. Psychiatr Serv (Washington, DC). 2013;64(5):487-490.
Ayalon L, Arean PA, Linkins K, Lynch M, Estes CL. Integration of mental health services into primary care overcomes ethnic disparities in access to mental health services between black and white elderly. Am J Geriatr Psychiatry 2007;15(10):906-912.
Arean PA, Ayalon L, Jin C, et al. Integrated specialty mental health care among older minorities improves access but not outcomes: results of the PRISMe study. Int J Geriatr Psychiatry 2008;23(10):1086-1092.
Angstman KB, Phelan S, Myszkowski MR, et al. Minority Primary Care Patients With Depression: Outcome Disparities Improve With Collaborative Care Management. Med Care 2015;53(1):32-37.
Callahan CM, Hendrie HC, Dittus RS, Brater DC, Hui SL, Tierney WM. Improving treatment of late life depression in primary care: a randomized clinical trial. J Am Geriatr Soc 1994;42(8):839-846.
Solberg LI, Crain AL, Jaeckels N, et al. The DIAMOND initiative: implementing collaborative care for depression in 75 primary care clinics. Implement Sci 2013;8:135.
This study was supported by NIMH Cooperative Agreement U19MH092201.
Conflict of Interest
Dr. Simon receives royalties from Wolters Kluwer for editing chapters of the UpToDate decision support system. In the last 3 years, he has received research grant support from Novartis Pharmaceuticals for research regarding suicidal behavior in psoriasis.
Dr. Daida receives research grant support from Gilead Sciences and Intercept Pharmaceuticals for research regarding hepatitis and fibrotic liver diseases, respectively. Research funds are received through the Henry Ford Health System (prime site for both studies).
All other authors declare that they do not have a conflict of interest.
Cite this article
Waitzfelder, B., Stewart, C., Coleman, K.J. et al. Treatment Initiation for New Episodes of Depression in Primary Care Settings. J GEN INTERN MED 33, 1283–1291 (2018). https://doi.org/10.1007/s11606-017-4297-2
Keywords
- primary care
- race and ethnicity
Executive Summary

Unfavorable aircraft-pilot coupling (APC) events are rare, unexpected, and unintended excursions in aircraft attitude and flight path caused by anomalous interactions between the aircraft and the pilot. The temporal pattern of these pilot-vehicle system (PVS) excursions can be oscillatory or divergent (non-oscillatory). The pilot's interactions with the aircraft can form either a closed-loop or open-loop system, depending on whether or not the pilot's responses are tightly coupled to the aircraft response. When the dynamics of the aircraft (including the flight control system [FCS]) and the dynamics of the pilot combine to produce an unstable PVS, the result is called an APC event. Although it is often difficult to pinpoint the cause of specific APC events, a majority of severe APC events result from deficiencies in the design of the aircraft (especially with regard to the FCS) that result in adverse coupling of the pilot with the aircraft. In certain circumstances, this adverse coupling produces unintended oscillations or divergences when the pilot attempts to precisely maneuver the aircraft.

If the PVS instability takes the form of an oscillation, the APC event is called a "pilot-involved oscillation" (PIO). PIOs differ from aircraft oscillations caused by deliberate, pilot-imposed periodic control motions, such as "stick-pumping," that are open-loop in character. An open-loop, forced oscillation does not constitute a PIO. If the unstable motions of the closed-loop PVS are divergent rather than oscillatory in nature, they are referred to as either APC events or as non-oscillatory APC events. APC events can result if the pilot is operating with a behavioral mode that is inappropriate for the task at hand, and such events are properly ascribed to pilot error. However, the committee believes that most severe APC events attributed to pilot error are the result of adverse APC that misleads the pilot
into taking actions that contribute to the severity of the event. It is often possible, after the fact, to carefully analyze an event and identify a sequence of actions that the pilot could have taken to overcome the aircraft design deficiencies. However, it is typically not feasible for the pilot to identify and execute the required actions in real time.

PIO phenomena comprise a complete spectrum. At one end of the spectrum is a momentary, easily corrected, low-amplitude bobble, a type of oscillation often encountered by pilots getting used to new configurations—basically a learning experience. This type of oscillation can happen on any aircraft and has been experienced by most pilots at one time or another. At the other end of the spectrum is a fully-developed, large amplitude PIO, a chilling and terrifying event that jeopardizes the safety of the aircraft, crew, and passengers. Fortunately, severe PIOs are rare. Other severe APC events have been noted in which the excursions in aircraft motion diverge over time rather than oscillate. The few events of this nature that have been positively identified have had serious consequences. Large amplitude, dangerous PIOs and non-oscillatory APC events are the particular concerns of this report.

Recently, there have been several highly visible APC-related accidents involving military aircraft, as well as a number of incidents involving civil aircraft. At the same time, there has been widespread introduction of new fly-by-wire (FBW) FCSs into commercial transports. Almost all new FBW-equipped aircraft have exhibited APC events at some time during development, and these untoward coincidences have captured the attention of policymakers, test pilots, technical managers, and engineers. Although FBW systems are not inherently more or less susceptible to severe APC events, the flurry of incidents in aircraft development programs suggests that some side effects have not been fully explored or anticipated. Thus, as a matter of prudence the National Aeronautics and Space Administration asked the Aeronautics and Space Engineering Board of the National Research Council to conduct a study to assess APC-related aspects of recent incidents and accidents, aircraft development processes, the introduction of FBW and fly-by-light technology into FCSs, and national and international efforts devoted to APC research. This report is the result of that study, and it recommends steps that could be taken to improve aviation safety by reducing the kinds of APC problems seen recently and countering new types of APC problems that may arise.

The following high-level conclusions of the study committee are worth highlighting. (Subsequent sections include the committee's key findings and recommendations, and all findings and recommendations are listed in Chapter 7.)

There are many varieties of oscillatory and non-oscillatory APC events. Although none of these is welcome, only a rare subset is dangerous. Among the dangerous ones are events that exhibit "cliff-like"
characteristics, which means that a PVS may fly superbly up to the sudden onset of a dramatic and potentially catastrophic APC event. What these severe APCs are, when they are likely to occur, and how to find (and fix) them are key issues.

Most of the severe PIOs for which flight recordings exist have exhibited oscillations characterized by rate limited responses in control surface actuators or effectors. (Control surface actuators and effectors are rate or position limited when commanded movement exceeds limits imposed by design intent or physical structure on the rate of movement or extreme position of the control surface.) In most cases the pilots indicated that the onset of the PIO was sudden, unexpected, and cliff-like.

Piloted simulations have proved to be useful for investigating APC tendencies. However, neither piloted simulations nor available design and testing criteria can guarantee that a new aircraft will not be involved in an APC event. Severe APC events are invariably new "discoveries" that often occur in transient and highly unusual circumstances. To avoid their discovery by operational pilots under unfavorable circumstances, test pilots must be allowed some freedom to search for APC tendencies in simulations and flight tests.

Data on recent APC events indicate that they are not uncommon in development testing where data recording and pilot reports are sufficient for causes to be determined and solutions developed. There are only a few reports of severe APC events in operational aircraft, but because there are no mandatory reporting requirements and recordings are often inadequate, the danger cannot be assessed adequately. The committee was disturbed by the lack of awareness of severe APC events among pilots, engineers, regulatory authorities, and accident investigators.

The Aircraft-Pilot Coupling Experience

APC events usually occur when the pilot is engaged in a highly demanding, closed-loop control task. For example, many of the reported APC events have taken place during air-to-air refueling operations or approaches and landings, especially if the pilot is concerned about low fuel, adverse weather, emergencies, or other circumstances. Under these conditions, the pilot's involvement in closed-loop control is intense, and rapid response and precise performance of the PVS are necessary. Even so, these operations usually occur routinely without APC problems. APC events do not occur unless there is a transient triggering event that interrupts the already highly-demanding
PVS operations or requires an even higher level of precision. Typical triggers include shifts in the dynamics of the effective aircraft (the combination of the aircraft and FCS) caused by increases in the amplitude of pilot commands, FCS changes, minor mechanical malfunctions, or severe atmospheric disturbances. Other triggers can stem from mismatches between the pilot's expectations and reality.

PIOs have been part of aviation history since the beginning of manned flight, and severe PIOs persist in spite of major efforts to eliminate them. When one kind of PIO occurs, usually unexpectedly, it stirs corrective actions. The experience is generally useful, in that the conditions thought to underlie that type of PIO tend to be avoided in designing new aircraft. As other PIOs occur under different circumstances, the cycle is repeated. With time, understanding improves and some causes are circumvented, but the occurrence of closed-loop oscillations remains a constant; only the details change with the aircraft and FCS technology.

From the pilot's perspective, there are three varieties of PIO experiences, ranging from benign learning experiences to severe and potentially dangerous oscillations. The benign "bobbles" are easily countered by the pilot's exit from the closed-loop PVS. By contrast, in many severe PIOs the pilot becomes locked into behavior that sustains the oscillation, even though the pilot often feels totally disconnected from the system.

If the deficiencies in effective aircraft dynamics are essentially linear in nature, such as excessive time lag in response to a pilot input, a Category I PIO may result. If the effective aircraft dynamics change as a function of pilot-command amplitude or of FCS mode shifts, thereby creating a nonlinear sudden-onset change (a "cliff") in the effective aircraft dynamics, the resulting PIO is assigned either to Category II (when the dominant nonlinearities are associated with rate or position limiting of the control surfaces) or Category III (when the nonlinear changes are more complex). The Category II and III PIOs are particularly insidious because the effective aircraft dynamics and the associated flying qualities can be good right up to the instant the PIO begins. Identifying the potential for these PIOs, which almost always occur under unusual conditions when the PVS is operating near the margins, is a major challenge to test pilots and engineers. An extensive search process with a "discovery" mentality is needed to ensure that Category II or III tendencies are not overlooked.

Non-oscillatory APC events are not as well defined or understood as PIOs. Even if the pilot is extremely active and initiates many control reversals, the aircraft does not necessarily respond in an oscillatory fashion. Instead, a buildup of lags in the response of the aircraft's control effectors to the pilot's commands may ultimately lead to a divergence from the intended aircraft movement. As in the case of severe PIOs, pilots in these cases often report a sense of feeling detached from the aircraft behavior in terms of both awareness of what is happening and in terms of the temporal connections between pilot command and aircraft response.
Finding. Adverse APC events are rare, unintended, and unexpected oscillations or divergences of the pilot-aircraft system. APC events are fundamentally interactive and occur during highly demanding tasks when environmental, pilot, or aircraft dynamic changes create or trigger mismatches between actual and expected aircraft responses.

Impact Of New Technology

As phenomena in aviation history, APC problems have often been associated with the introduction of new technologies, functionalities, or complexities. There is a time lapse before flight experience with a new technology reveals the subtle changes in effective aircraft dynamics that may increase the susceptibility of a new aircraft to APC events. This partly explains why APC problems are more prevalent in military aircraft, which have traditionally introduced advanced technologies, and less common in civil aircraft, which have tended to adopt new technologies only after they have been proven in military aircraft. The prevalence of APC problems in military rather than commercial aircraft may also be associated with the nature of military operations, which frequently include maneuvers that require higher pilot gains than are commonly used on commercial aircraft.

FBW technology, which for this report includes fly-by-light technology, is a recent example of a new technology that has migrated from military to civil aircraft. The application of FBW technology has created FCSs that confer important overall system advantages in terms of performance, weight reduction, stability and control, operational flexibility, and maintenance requirements. FBW also offers opportunities for novel approaches to solving all kinds of problems with aircraft stability and control (including correcting APC tendencies). Yet, the flexibility inherent in FBW technology has the potential for creating unwanted new side effects and unanticipated problems.

In an aircraft equipped with a FBW FCS, information is transmitted from the cockpit to the control surfaces entirely by electrical means. The cockpit control device may not indicate to the pilot when the control surfaces are rate or position limited. The result may be a mismatch between the pilot's expectations and the aircraft's actual response, which can directly contribute to an APC event. In addition, FBW technology allows aircraft designers to design an FCS that features an elaborate set of system modes intended to enhance aircraft performance for a variety of missions under all expected flight conditions. When properly implemented, shifts between these system modes are smooth and unobtrusive and do not interfere with the pilot's operation of the aircraft. However, the complexity inherent in an advanced multiredundant FBW FCS makes it difficult for the designers, much less the pilots, to anticipate all of the possible interactions between the FCS and the pilot. The
FCS may operate in ways that the pilot does not expect and does not recognize, thereby increasing the potential of encountering an APC event. As the potential for untoward events expands with the introduction of new technologies, increased vigilance is necessary to ensure that new systems do not inadvertently increase the susceptibility of new aircraft to APC events.

Finding. APC problems are often associated with the introduction of new designs, technologies, functions, or complexities. New technologies, such as FBW and fly-by-light flight control systems, are constantly being incorporated into aircraft. As a result, opportunities for APC are likely to persist or even increase, and greater vigilance is necessary to ensure that new technologies do not inadvertently increase the susceptibility of new aircraft to APC events.

Aircraft-Pilot Coupling Events As A Current Problem In Aviation

A major task of the committee was to assess the current status of APC events as a safety problem in aviation. In the context of aircraft development and testing, the record clearly shows that although adverse APC events are rare, they can pose a major safety concern. The same record also provides an extraordinary set of recent examples that should alert project and engineering managers, design engineers, test pilots, and aircraft operators to the need to address concerns about APC events as a central flying qualities and safety issue. These concerns can be addressed through detailed test plans, elaborate flight-test data recorders, and highly trained pilots like the ones who participate in the developmental stages of new aviation technology. Addressing these concerns will ensure that APC events that occur during development become matters of record.

When an aircraft enters operational service, the elaborate flight data recorders are routinely removed. The flight data recorders that are installed on many commercial aircraft employ a limited number of channels and sample rates; many military aircraft have no flight data recorders at all. For these and other reasons, confirmed APC-related incidents or accidents on operational FBW aircraft are quite rare. The occurrence of PIOs or other APC events at some point in the development of almost all FBW aircraft, contrasted with the almost total absence of APC events reported in operational stages, is viewed by the committee as a "curious disconnect." The hope is that all major APC tendencies have been discovered and corrected in the course of development, but because of the limited recording and reporting procedures in operations, this cannot be confirmed. Consequently, the committee was not able to assess fully the exposure of operational fleets to APC events.
Finding. APC problems have occurred more often in military and experimental aircraft, which have traditionally introduced advanced technologies, than in civil aircraft.

Finding. Recently, civil and military transport FBW aircraft have experienced APC problems during development and testing, and some APC events have occurred in recent commercial aircraft service, although they may not always have been recognized as such.

Increasing Awareness

The committee has observed that APC events are perceived by the majority of the aviation community as exotic happenings that are occasionally documented by spectacular video footage shown on the evening news but are not of major concern. This complacent attitude is reinforced by a lack of awareness, understanding, and relevant experience. This shortcoming should be addressed through improved education and training of personnel involved in aircraft design, simulation, testing, certification, operations, and accident investigation.

A dramatic way to enhance awareness is to expose flight test pilots and engineers to actual APC events in flight and thereby indelibly imprint on them the insidious character and the danger of such phenomena. Although this could be done at relatively little expense using existing variable stability aircraft, this kind of training for test pilots and engineers is not common in industry, the Federal Aviation Administration, or the Department of Defense. (It may also be possible to use ground-based simulators for APC awareness training, especially for Category I APC events, but they are not likely to make the same sort of dramatic impression on pilots as in-flight experiences.)

The committee believes test pilots need specialized training to improve their ability to detect adverse APC characteristics. Test pilots tend to adapt very quickly to new aircraft, and they may unconsciously compensate for deficiencies in a FCS that, in some circumstances, could contribute to an APC event. Therefore, their training should also include aggressive searches for tendencies that could lead to APC events. Because most line pilots have not been trained to recognize and report adverse APC characteristics, they often attribute PIOs to deficiencies in their flying skills. The committee suspects that this tends to limit reporting of adverse APC events to safety reporting systems.

Appropriate training is equally important for accident investigators and others involved in evaluating flight operations. Investigators should be knowledgeable about APC hazards and how to identify them. The improving capabilities of flight data recording systems will aid investigators in
determining whether APC phenomena contributed to specific incidents and accidents.

Recommendation. Insufficient attention to APC phenomena generally seems to be associated with a lack of understanding and relevant experience; this shortcoming should be addressed through improved education about APC phenomena for pilots and other personnel involved in aircraft design, simulation, testing, certification, operation, and accident investigation.

Eliminating Aircraft-Pilot Coupling Events

To increase the likelihood of finding major APC tendencies during the development process, the committee recommends that a disciplined and structured approach be taken in the design, development, testing, and certification of aircraft. This approach is intended to improve existing techniques for mitigating the risk of adverse APC and to expedite the adoption of new techniques as they become available.

Management

The elimination of APC events requires both an effective technical approach and a highly supportive management structure. In the past, a possible susceptibility to APC was sometimes detected during simulations and analysis early in the development of new aircraft but was dismissed by managers or designers as premature or irrelevant because the susceptibility was associated with tasks that were viewed as uncharacteristic of actual flight operations. In other cases, APC susceptibility has been inadvertently introduced into new aircraft with design changes that were not fully assessed for their impact on APC characteristics. Program managers and designers should implement a highly structured systems-engineering approach that involves all relevant disciplines in the APC-elimination process from early in the program through entry into service.

Design Criteria

Good "flying qualities" are fundamental to the elimination of adverse APC. The starting point for military aircraft is compliance with the requirements in MIL-STD-1797A and Draft MIL-STD-1797A Update.70,71 Compliance lessens APC tendencies in classical fixed-wing aircraft with modest stability augmentation systems and conventional fully-powered surface actuating systems. Rotorcraft that meet the requirements of ADS-33D68 are
also likely to be more resistant to APC events. However, these specifications, like the criteria upon which they are based, do not adequately address the susceptibility of aircraft to Category II and III PIOs and to non-oscillatory APCs. These requirements should be supplemented early in the design process by appropriate criteria and metrics selected and tailored, as necessary, to guide development teams in assessing the flying qualities and susceptibility of new aircraft to adverse APC. The APC criteria should emphasize highly demanding, closed-loop operations of the PVS, as well as precision maneuvering characteristics. The criteria should be viewed as a means of alerting the analysis and design teams to features that can increase the risk of APC. Current design criteria cannot guarantee that a given design will be free of adverse APC characteristics in flight.

Appropriate combinations of available APC criteria are generally useful for assessing the susceptibility of aircraft designs to most types of linear, oscillatory APC events (i.e., Category I PIOs). Available criteria do not effectively address more complex types of APC events—Category II and III PIOs and non-oscillatory APC events. Research on APC design assessment criteria should focus on these less understood types of APC events; a coordinated approach that combines experiments with the development of new analysis approaches is essential.

Simulation and Flight Tests

Ground and in-flight simulators and pilots who are sensitive to APC tendencies can contribute to the development of a FCS with satisfactory APC characteristics. The potential of simulators to reproduce APC events that have been encountered in flight has been repeatedly demonstrated. However, the continuing occurrence of unexpected APC events in flight also illustrates the limited effectiveness of current simulation technologies and procedures for predicting APC events. Existing simulation and analysis tools should be refined to be more specific, selective, and accurate predictors. A high priority should be placed on research to develop predictive simulation protocols and tasks and to validate simulation test results with flight tests.

Fixed-base simulators may not always reveal the existence of adverse APC tendencies because of (1) the lack of acceleration cues; (2) less-than-satisfactory visual systems; (3) inadequate simulation of major FCS details, especially inceptors and FCS characteristics that come into play when PVS operations are at or near transitions or other conditions that define margins; and (4) the difficulty of instilling stress and a sense of urgency in the pilot. Moving-base simulators may be more effective than fixed-base simulators in some parts of the flight envelope, although they too can have the deficiencies listed above, as well as the oddities of motion washout and other artifacts. The committee believes that a high-quality visual display is more effective than a
moving base because most simulations involve instrument-rated pilots who are trained to rely upon visual rather than acceleration cues.

In-flight simulation solves many of the problems inherent in ground simulation if the effective aircraft dynamics, including inceptors, are well simulated. In-flight simulation can be especially valuable for increasing the APC awareness of test and operational pilots and flight test engineers and for demonstrating and conducting research on cliff-like APC phenomena (Category II and III PIOs and non-oscillatory APCs). Highly focused flight-test evaluations of prototypes or pre-certification aircraft can be particularly helpful for identifying flight situations that might be susceptible to APC, as well as for providing the final measures of performance.

Throughout the simulation and flight test process, pilots must be assigned appropriate tasks (see Chapter 4) in order to evaluate APC characteristics effectively. Because APC events are commonly associated with highly demanding, precisely controlled aircraft movements, simulation and flight tests used for assessing APC tendencies should include such tasks as aggressive acquisition maneuvers, aggressive tracking maneuvers, mode transitions, formation flying and aerial refueling, approach and landing, and special tracking tasks. It is important that a variety of repeatable tasks be included to ensure that APC assessments are comprehensive and verifiable. In addition, many pilots should be involved in simulation and flight tests to ensure that the aircraft will accommodate a wide range of piloting skills; two or three test pilots are not enough to conduct a thorough evaluation and examination if APC characteristics are marginally acceptable. An aggressive search for APC tendencies is especially important in flight regimes where cliff-like phenomena are most likely to appear.

Recommendation. A disciplined and structured approach should be taken in the design, development, testing, and certification stages to maximize the effectiveness of existing techniques for mitigating the risk of adverse APC tendencies and for expediting the incorporation of new techniques as they become available. This is especially important in areas where effective procedures and standards do not currently exist (e.g., FAA certification standards).

Interim Prescription For Avoiding Severe Aircraft-Pilot Coupling Events

This report stresses the need for enhanced awareness of APC phenomena and an orderly and structured design and development process to address this problem. Although no definitive criteria are applicable to all types of APC
events, the technical guidelines that appear below can confer immunity to most severe APC events. The committee recognizes that readers concerned with specifics may find the following discussion of processes and criteria too general, even as other readers who are unfamiliar with APC phenomenology may find the details of some technical descriptions difficult to understand.

Reduce Category I Pilot-Induced Oscillation Tendencies

Implications for Design of the Effective Aircraft Dynamics

Reduce time lags in the high-frequency effective aircraft dynamics. To reduce tendencies for attitude-dominant PIOs, increase the frequency range over which a pilot hypothetically operating in a pure-gain (proportional control) mode can exert closed-loop control on aircraft attitude. Counter possible interactions between the pilot and higher-frequency modes of the effective aircraft dynamics.

Suitable Metrics and Criteria

Ensure that inceptor characteristics, flexible modes of the aircraft structure, and other elements of a PVS that incorporates a pure-gain pilot do not create high frequency closed-loop resonances. Three criteria (i.e., the Gain/Phase Template Plus ω180/Average Phase Rate criterion, the Dropback criterion, and the Aircraft-Bandwidth/Phase Delay criterion) can provide useful warnings and design guidance.

Minimize Category II and III Pilot-Induced Oscillation Tendencies

Implications for Design of the Effective Aircraft Dynamics

Provide seamless transitions when the FCS switches between control modes or control laws. Minimize transitions that create large increases in the phase lag or gain that the FCS applies to the pilot's commands, especially simultaneous increases in both.

Suitable Metrics and Criteria

Develop metrics and criteria for predicting Category II and III PIO tendencies. (Currently, such criteria do not exist.) Reduce the effects of phase
lag introduced by rate limiting by providing liberal rate limits and minimizing the need for large pilot commands during critical closed-loop tasks. Command-gain changes and pre- to post-transition dynamic shifts of no more than about 3 dB (50 percent) are tentative lower limits for tasks that require the pilot to exert tight closed-loop control.

Examine the Possibility of Non-Oscillatory Aircraft-Pilot Coupling Events

In searching for unexpected non-oscillatory APC events, consider special maneuvers, pilot commands, and FCS inputs that may effectively increase the time lag between the pilot's command and its reflection at the control surface.

Conduct Assessments and Evaluations Using Simulators

Implications for Design

Provide simulator characteristics that are valid reflections of effective aircraft dynamics, especially for high PVS frequencies and conditions where FCS operations are nonlinear. Extensively examine situations that analysis has indicated are marginal with respect to the occurrence of Category I APC events. Conduct a specialized and detailed search for potentially critical Category II and III (cliff-like) situations using an impartial team of experienced FCS engineers. Include circumstances that may require large pilot inputs, high pilot gain, or FCS shifts between modes and/or control laws.

Implications for Test Execution

Use test input sequences that put maximal stress on the PVS. Include periods of active, freelance pilot operations to search for potential limiting conditions (see Table 4-2). Also include a broad spectrum of test pilots and operational pilots. Examine maneuvers and command sequences that may effectively increase the time lag between the pilot's command and the control surface effector's reflection of this command.

Conduct Flight Evaluations

Use flight evaluations, which are closely related to simulation tests, to build on the results of simulation. In particular, use test input sequences that stress the PVS to extremes and include a spectrum of pilots. Conduct tests of
situations where PVS performance was previously determined or suspected to be marginal, as well as conditions that have no parallel in simulation (e.g., situations that involve very high frequency modes or acceleration-sensitive phenomena). Devote an investigatory phase, with appropriate safety measures, to an active and aggressive search by pilots for potential, cliff-like PIO conditions, such as conditions involving rate or position limits. Include carefree freelance operations that provide test pilots with "open time" to experiment freely.

Alternative Approaches

The approaches used to address APC risk in the U.S. and international civil and military aviation communities are not consistent. Some organizations rely heavily on the analysis of new designs in accordance with formal APC criteria. Others rely primarily on empirical methods and rules of thumb based on experience with prior aircraft. The committee did not find any approach that consistently produces aircraft free of adverse APC characteristics. APC events thus remain a threat, and the potential for tragedy will persist until the goal of reducing APC risk is aggressively pursued.

Manufacturers of civil and military aircraft often consider the approaches they use to reduce the risk of adverse APC as a component of their proprietary design and manufacturing process. In addition, the APC characteristics of current aircraft are often treated as proprietary or classified performance data. These attitudes tend to inhibit the exchange of APC-related information and interfere with cooperative efforts to reduce the risk. Nevertheless, the committee believes that, in the interest of aviation safety, the free exchange of APC-related information on design and manufacturing processes and on aircraft performance characteristics should be encouraged throughout the military and civil aviation communities, nationally and internationally. This report, which contains a great deal of data, information, and procedures that would normally be considered proprietary, is a step in this direction.
To Be Published in Observer's Handbook 2002, Royal Astronomical Society of Canada
During the year 2002, two solar and three penumbral lunar eclipses occur as follows:
2002 May 26: Penumbral Lunar Eclipse
2002 Jun 10: Annular Solar Eclipse
2002 Jun 24: Penumbral Lunar Eclipse
2002 Nov 20: Penumbral Lunar Eclipse
2002 Dec 04: Total Solar Eclipse
Predictions and maps for the solar and lunar eclipses are presented in a number of figures linked to this document. World maps show the regions of visibility for each eclipse. The lunar eclipse diagrams also include the path of the Moon through Earth's shadows. Contact times for each principal phase are tabulated along with the magnitudes and geocentric coordinates of the Sun and Moon at greatest eclipse.
All times and dates used in this publication are in Universal Time or UT. This astronomically derived time system is colloquially referred to as Greenwich Mean Time or GMT. To learn more about UT and how to convert UT to your own local time, see Time Zones and Universal Time.
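As a worked illustration (not part of the original handbook text), the snippet below converts a UT instant to a local civil time using Python's standard zoneinfo database. UT is treated as equivalent to UTC here, which is adequate at the one-second level for these predictions, and the particular time zone chosen is simply an example.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # available in Python 3.9+

# Greatest eclipse of the 2002 Dec 04 total solar eclipse: 07:31:11 UT
ut = datetime(2002, 12, 4, 7, 31, 11, tzinfo=timezone.utc)

# Local civil time in South Australia (on summer time in December)
local = ut.astimezone(ZoneInfo("Australia/Adelaide"))
print(local.isoformat())  # 2002-12-04T18:01:11+10:30
```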
2002 May 26: Penumbral Lunar Eclipse
The first eclipse of the year is a deep penumbral lunar eclipse visible from parts of the Western Hemisphere. First and last penumbral contacts occur at 10:13 UT and 13:54 UT, respectively. The Moon's path through Earth's penumbra as well as a map showing worldwide visibility of the event is shown in Figure 1.
Greatest eclipse occurs at 12:03 UT with a maximum penumbral eclipse magnitude of 0.7145. Observers will note subtle yet distinct shading across the southern portions of the Moon. The Moon's southern limb actually lies 9.1 arc-minutes north of the umbra at its closest approach.
2002 Jun 10: Annular Solar Eclipse

The first solar eclipse of 2002 is annular with a path that stretches the breadth of the Pacific Ocean. The partial phases are visible from eastern Asia and most of North America except for the northeast (Figure 2).
The Moon's antumbral shadow first touches down on Earth at 20:53 UT along the north coast of Sulawesi. Racing across the Celebes Sea, the antumbra engulfs the Indonesian islands of Pulau Sangihe and Kepulauan Talaud. The annular phase lasts just over one minute with the early morning Sun 6° above the horizon.
Leaving Indonesia, the shadow's trajectory takes it over a long track across the Pacific. As it does so, the curvature of Earth's surface causes the path width and central duration to gradually decrease. The antumbra reaches the southern end of the Northern Mariana Islands chain at 22:10 UT (Figure 3). Guam lies just 40 kilometres south of the 47 kilometre wide path and will experience a deep partial eclipse of magnitude 0.975. About 180 kilometres northeast of Guam, the islands Saipan and Tinian span the northern limit of the annular track. Tinian's southern tip extends a dozen kilometres into the path but still falls 10 kilometres short of the center line. Nevertheless, most of the 53 second long annular phase of magnitude 0.988 will be seen from this location with the Sun 32° above the horizon.
From this point on, the antumbra encounters no other populated islands across the Pacific. Greatest eclipse occurs at 23:48:15 UT about 2600 kilometres northwest of the Hawaiian Islands. The duration of the annular phase lasts a scant 23 seconds, but the event takes place in open ocean with no landfall in sight.
As the track begins to swing to the southeast, its width and central duration begin to increase once again but no other islands lie in its path. Just before reaching its terminus, the antumbra passes 50 kilometres south of the southern tip of Baja, Mexico at 01:32 UT (June 11) (Figure 4). In the final seconds of its earthbound trajectory, the shadow reaches the Pacific coast of Mexico, 30 kilometres south of Puerto Vallarta. Under favorable weather conditions, observers on the center line will witness a spectacular ring of fire on the horizon as the Sun sets just after annularity. The central duration is 1 minute 7 seconds and the magnitude is 0.981. Atmospheric refraction will actually displace the end of the path to the southeast so that the entire annular phase will occur before sunset for observes on or near the coast.
The antumbral shadow leaves Earth's surface at 01:35 UT (June 11). Over the course of 3 hours and 47 minutes, the Moon's antumbra travels along a path approximately 14,700 kilometres long and covering 0.2% of Earth's surface area. Path coordinates and centreline circumstances are presented in Table 1.
Partial phases of the eclipse are visible from much of North America, the Pacific and eastern Asia. Local circumstances for a number of cities are listed in Table 2. All times are given in Universal Time. The Sun's altitude and azimuth, the eclipse magnitude and obscuration are all given at the instant of maximum eclipse. Additional information is also available at the 2002 annular solar eclipse web site:
2002 Jun 24: Penumbral Lunar Eclipse
The year's second lunar eclipse follows two weeks after the annular solar eclipse. Unfortunately, the event is a very shallow penumbral eclipse which is nominally visible from the Eastern Hemisphere. First and last penumbral contacts occur at 20:19 UT and 22:36 UT, respectively.
Greatest eclipse takes place at 21:27 UT with a maximum penumbral eclipse magnitude of only 0.2347. At that time, the Moon's northern limb will dip a meager 7.4 arc-minutes into the pale penumbral shadow. Such a minor eclipse will be all but invisible, even for the sharpest eyed observers. The following Figure shows the Moon's path through the penumbral shadow and the region of global visibility.
2002 Nov 20: Penumbral Lunar Eclipse
The last penumbral lunar eclipse of 2002 is also the deepest lunar eclipse of the year. The event will be observable from the Americas, Europe, Africa and central Asia. First and last penumbral contacts occur at 23:32 UT (Nov 19) and 04:01 UT (Nov 20), respectively. The Moon's path through Earth's penumbra and a map depicting worldwide visibility is shown in Figure 5.
At greatest eclipse (01:47 UT), the penumbral magnitude reaches its maximum value of 0.8862 as the Moon's northern limb passes just 6.6 arc-minutes from the edge of the umbra. Observers should be able to see a subtle yet distinct shading across the northern portion of the Moon's disk. Alas, the striking colors present during total eclipses will be absent from this event. We must wait until 2003 when two total lunar eclipses take place.
2002 Dec 04: Total Solar Eclipse

The final event of the year is a total solar eclipse visible from a narrow corridor that traverses the Southern Hemisphere. The path of the Moon's umbral shadow begins in the South Atlantic, crosses southern Africa and the Indian Ocean and ends at sunset in southern Australia. A partial eclipse will be seen within the much broader path of the Moon's penumbral shadow, which includes most of Africa, western Australia and Antarctica (Figure 6).
The eclipse begins in the South Atlantic where the Moon's umbral shadow first touches down on Earth at 05:50 UT (Figure 7). Along the sunrise terminator, the duration is only 26 seconds as seen from the centre of the 31 kilometre wide path. Seven minutes later, the umbra reaches the Atlantic coast of Angola (05:57 UT). Quite coincidentally, the first track of Angolan land to experience totality was also within the path of the total solar eclipse of 2001 June 21. The local residents are indeed fortunate to witness a total eclipse twice within the span of eighteen months.
The early morning eclipse lasts 51 seconds from the center line with the Sun 19° above the horizon. The umbra carves out a 50 kilometre wide path as it sweeps across Angola in a southeastern direction. Briefly straddling the Angola/Zambia border, the shadow crosses eastern Namibia before entering northern Botswana (06:09 UT). The path width has grown to 60 kilometres and totality lasts 1 minute 11 seconds. Following the political boundary between Zimbabwe and Botswana, the umbra travels with a ground speed of 1.2 km/s. Bulawayo, Zimbabwe lies just north of the track and its residents witness a deep partial eclipse of magnitude 0.987 at 06:14 UT.
The umbra crosses completely into Zimbabwe before entering northern South Africa at 06:19 UT. One minute later, the northern third of Kruger National Park is plunged into totality which lasts 1 minute 25 seconds as the hidden Sun stands 42° above the horizon. Quickly crossing southern Mozambique, the shadow leaves the dark continent at 06:28 UT and begins its long trek across the Indian Ocean.
The instant of greatest eclipse occurs at 07:31:11 UT when the axis of the Moon's shadow passes closest to the centre of Earth (gamma = -0.302). The length of totality reaches its maximum duration of 2 minutes 4 seconds, the Sun's altitude is 72°, the path width is 87 kilometres and the umbra's velocity is 0.670 km/s. Unfortunately, the umbra is far at sea ~2000 kilometres southeast of Madagascar.
During the next hour and a half, no land is encountered as the eclipse track curves to the northeast and begins to narrow. In the final ninety seconds of its terrestrial trajectory, the umbra traverses South Australia. The coastal town of Ceduna lies at the center of the 35 kilometre wide path. Totality lasts 33 seconds while the Sun stands 9° above the western horizon. The accelerating ground speed of the umbra already exceeds 5 km/s. In the remaining seconds, the increasingly elliptical shadow sweeps across 900 kilometres of the Australian Outback.
The umbra leaves Earth's surface at the sunset terminator at 09:12 UT. Over the course of 3 hours and 21 minutes, the Moon's umbra travels along a path approximately 12,000 kilometres long and covering 0.14% of Earth's surface area. Path coordinates and centreline circumstances are presented in Table 3.
Local circumstances for cities throughout the path are given in Table 4. All times are given in Universal Time. The Sun's altitude and azimuth, the eclipse magnitude and obscuration are all given at the instant of maximum eclipse.
A detailed report on this eclipse is available from NASA's Technical Publication series (see: NASA Solar Eclipse Bulletins). Additional information is also available at the 2002 total solar eclipse web site:
1. The instant of greatest eclipse occurs when the distance between the Moon's shadow axis and Earth's geocenter reaches a minimum. Although greatest eclipse differs slightly from the instants of greatest magnitude and greatest duration (for total eclipses), the differences are usually quite small.
2. Minimum distance of the Moon's shadow axis from Earth's center in units of equatorial Earth radii.
h = 15 (GST + UT - ra) + l
a = ArcSin [ Sin d Sin f + Cos d Cos h Cos f ]
A = ArcTan [ - (Cos d Sin h) / (Sin d Cos f - Cos d Cos h Sin f) ]

where:
  h = Hour Angle of Sun or Moon
  a = Altitude
  A = Azimuth
  GST = Greenwich Sidereal Time at 0:00 UT
  UT = Universal Time
  ra = Right Ascension of Sun or Moon
  d = Declination of Sun or Moon
  l = Observer's Longitude (East +, West -)
  f = Observer's Latitude (North +, South -)

During the eclipses of 2002, the values for GST and the geocentric Right Ascension and Declination of the Sun or the Moon (at greatest eclipse) are as follows:
Eclipse            Date          GST      ra       d
Penumbral Lunar    2002 May 26   16.259   16.231   -20.027
Annular Solar      2002 Jun 10   17.277    5.268    23.055
Penumbral Lunar    2002 Jun 24   18.191   18.224   -24.785
Penumbral Lunar    2002 Nov 20    3.928    3.708    18.654
Total Solar        2002 Dec 04    4.863   16.697   -22.225
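For readers who want to check local circumstances themselves, the altitude and azimuth formulas above are straightforward to evaluate. The short Python sketch below is not part of the original predictions; the observer's coordinates are arbitrary and chosen only for illustration. It simply plugs the tabulated values for the 2002 Dec 04 eclipse into the formulas given above.

```python
import math

def alt_az(gst, ut, ra, dec, lon, lat):
    # Hour angle in degrees: h = 15 (GST + UT - ra) + l
    # gst, ut and ra are in hours; dec, lon and lat are in degrees.
    h = 15.0 * (gst + ut - ra) + lon
    h_r, d_r, f_r = map(math.radians, (h, dec, lat))

    # a = ArcSin[ Sin d Sin f + Cos d Cos h Cos f ]
    alt = math.asin(math.sin(d_r) * math.sin(f_r)
                    + math.cos(d_r) * math.cos(h_r) * math.cos(f_r))

    # A = ArcTan[ -(Cos d Sin h) / (Sin d Cos f - Cos d Cos h Sin f) ]
    # atan2 keeps the azimuth in the correct quadrant (measured from north).
    az = math.atan2(-math.cos(d_r) * math.sin(h_r),
                    math.sin(d_r) * math.cos(f_r)
                    - math.cos(d_r) * math.cos(h_r) * math.sin(f_r))
    return math.degrees(alt), math.degrees(az) % 360.0

# Sun at greatest eclipse, 2002 Dec 04 (07:31 UT = 7.52 hours), for a
# hypothetical observer at longitude 135 E, latitude 32 S.
print(alt_az(gst=4.863, ut=7.52, ra=16.697, dec=-22.225, lon=135.0, lat=-32.0))
```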
2003 May 16: Total Lunar Eclipse
2003 May 31: Annular Solar Eclipse
2003 Nov 09: Total Lunar Eclipse
2003 Nov 23: Total Solar Eclipse
A full report Eclipses During 2003 will be published in the Observer's Handbook 2003.
The next total eclipse of the Sun is visible from southern Africa. The path of the Moon's umbral shadow begins in the South Atlantic, off the west coast of equatorial Africa. It crosses through Angola, Zambia, Namibia, Botswana, Zimbabwe, South Africa and Mozambique (Figure 8). Totality takes place in the morning hours with a central duration ranging from 1 to 1.5 minutes. The track continues across the Indian Ocean and ends in southern Australia north of Adelaide.
Complete details will eventually be posted on the NASA TSE2002 web site as well as in the next NASA bulletin scheduled for publication in late 2000. The TSE2002 web site address is:
Special bulletins containing detailed predictions and meteorological data for future solar eclipses of interest are prepared by F. Espenak and J. Anderson, and are published through NASA's Publication series. The bulletins are provided as a public service to both the professional and lay communities, including educators and the media. A list of currently available bulletins and an order form can be found at:
Single copies of the eclipse bulletins are available at no cost by sending a 9 by 12 inch self-addressed envelope stamped with postage for 11 ounces (310 grams). Please print the eclipse year on the envelope's lower left corner. Use stamps only, since cash or checks cannot be accepted. Requests from outside the U.S. and Canada may send ten international postal coupons. Mail requests to: Fred Espenak, NASA/Goddard Space Flight Center, Code 693, Greenbelt, Maryland 20771, USA. The NASA eclipse bulletins are also available over the Internet, including out-of-print bulletins. Using a Web browser, they can be read or downloaded via the World-Wide Web from the GSFC/SDAC (Solar Data Analysis Center) eclipse page:
The original Microsoft Word text files and PICT figures (Macintosh format) are also available via anonymous ftp. They are stored as BinHex-encoded, StuffIt-compressed Mac folders with .hqx suffixes. For PC's, the text is available in a zip-compressed format in files with the .zip suffix. There are three sub directories for figures (GIF format), maps (JPEG format), and tables.
A special solar and lunar eclipse web site is available via the Internet at:
The site features predictions and maps for all solar and lunar eclipses well into the 21st century. Special emphasis is placed on eclipses occurring during the next two years with detailed path maps, tables, graphs and meteorological data. Additional catalogs list every solar and lunar eclipse over a 5000 year period.
All eclipse predictions were generated on a Power Macintosh 8500/150 using algorithms developed from the Explanatory Supplement with additional algorithms from Meeus, Grosjean, and Vanderleen . The solar and lunar ephemerides were generated from Newcomb and the Improved Lunar Ephemeris. A correction of -0.6" was added to the Moon's ecliptic latitude to account for the difference between the Moon's centre of mass and centre of figure. For partial solar eclipses, the value used for the Moon's radius is k=0.2724880. For lunar eclipses, the diameter of the umbral shadow was enlarged by 2% to compensate for Earth's atmosphere and the effects of oblateness have been included. Text and table composition was done on a Macintosh using Microsoft Word. Additional figure annotation was performed with Claris MacDraw Pro.
All calculations, diagrams, tables and opinions presented in this paper are those of the author and he assumes full responsibility for their accuracy.
Special thanks to National Space Club summer intern Bailey McCreery for his valuable assistance in preparing this web page. (July 2001)
Webmaster: Fred Espenak
Planetary Systems Laboratory - Code 693
NASA/Goddard Space Flight Center, Greenbelt, MD 20771, USA
Last revised: 2007 Jun 18 - F. Espenak
GPS, or Global Positioning System, tracking is becoming increasingly common as a solution for locating and keeping on top of the movements of commercial and personal items. Using satellites and receivers to triangulate the location of a device, GPS has developed from military uses through to being part of smartphones and other consumer devices, and has also become an important part of business asset tracking and personal security. It's worth, then, reviewing how GPS tracking works, as well as what some of its basic types and applications are.
GPS tracking depends on a satellite network that transmits microwave signals to the Earth, where they can be received by devices. Twenty-seven satellites are in orbit around the planet, with 24 in operational use; the others act as backups. These satellites emit timing signals that sensors in a device convert into coordinates, accurately mapping the exact location and movement of the device. The process of calculating a position from the distances to several satellites at once is known as trilateration, and it requires the receiver to have a clear lock on signals from enough of the available satellites, typically at least four for a three-dimensional fix.
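To make the idea of trilateration concrete, here is a minimal two-dimensional sketch in Python. It is illustrative only: a real GPS fix is computed in three dimensions from signal travel times and also solves for the receiver's clock error.

```python
def trilaterate_2d(anchors, distances):
    """Find (x, y) from three known anchor points and the measured
    distance to each, by subtracting circle equations to obtain a
    linear 2x2 system and solving it."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances

    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2

    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("anchors are collinear; no unique position")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# A device 5.0 units from (0, 0), sqrt(65) units from (10, 0) and
# sqrt(45) units from (0, 10) must be at (3.0, 4.0).
print(trilaterate_2d([(0, 0), (10, 0), (0, 10)], [5.0, 65 ** 0.5, 45 ** 0.5]))
```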
This technology was first developed in the 1960s and 1970s as a military navigation and tracking solution, and it became more common through the use of real-time and passive mapping receivers, eventually becoming part of commercial operations and personal security. The technology has also advanced to the point where everyday consumer devices and receivers use software to record navigation maps and histories of movement for tracked items.
A basic GPS receiving device, then, typically contains a flash memory card, a 32-bit processor, an LCD display module, motion sensors, a modem, and a receiver that converts the satellite signal. Smaller chips can also be used as basic receivers and emitters of signals. Related technology has been adapted for Radio Frequency Identification (RFID), whereby a limited-range radio signal is activated by a scanner to read stored information and track the movement of important assets.
Different Types of GPS Receivers
There are several different kinds of GPS receivers. A passive receiver, or data logger, simply stores the coordinates it computes from the satellites in flash memory. The logger can later be connected to a computer to review the route travelled, and is typically used for tracking runs and routes, as well as for creating a record of the movements of pets and people who need to be tracked.
By contrast, an active GPS receiver is connected to a cellular network, and uploads coordinates in realtime for downloading and viewing by computer software. In this way, an item can be directly tracked. In the case of asset tracking, valuable items like goods deliveries or equipment can be tracked from a warehouse to a point of delivery by bringing up the active GPS signal, which will send out a continuous stream of data. This active link is also used by smartphones, and as part of satellite navigation systems.
Finally, a data puller GPS device fulfills much the same role as an active receiver, but only provides coordinates at intervals, rather than continuously. This kind of GPS data pulling is used for non-essential items, and as part of the basic recording of the geographic position of a computer.
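The practical difference between these three kinds of receiver is simply when and where the coordinates go. The Python sketch below illustrates the three behaviours; read_fix and the upload function are stand-ins, not a real device API.

```python
import json
import time

def passive_logger(read_fix, log_path, samples, interval_s):
    # Data logger: write each fix to local flash storage; the track is
    # only examined later, when the logger is plugged into a computer.
    with open(log_path, "a") as log:
        for _ in range(samples):
            log.write(json.dumps(read_fix()) + "\n")
            time.sleep(interval_s)

def active_tracker(read_fix, upload, interval_s=5):
    # Active receiver: push every fix over the cellular link so the
    # item can be followed in (near) real time.
    while True:
        upload(read_fix())
        time.sleep(interval_s)

def data_puller(read_fix, upload, interval_s=3600):
    # Data puller: same mechanism as the active tracker, but coordinates
    # are reported only at long intervals rather than continuously.
    active_tracker(read_fix, upload, interval_s)

# Stand-in wiring for illustration only:
fake_fix = lambda: {"lat": 51.5074, "lon": -0.1278, "t": time.time()}
send = lambda fix: print("uploading", fix)
# passive_logger(fake_fix, "track.jsonl", samples=3, interval_s=1)
```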
GPS consequently has many important uses. Asset tracking is perhaps the most common, with businesses able to keep tabs on their goods, which can in turn prevent losses and ensure that deliveries are made accurately. However, asset tracking can also mean being able to identify and track the movement of loved ones and pets. Chips in pets can be invaluable for actively tracking a lost dog or cat. Chips attached to clothes, bags, and shoes are also ideal for receiving and sending out satellite information on a child or an elderly relative who has become lost.
Rob James is a technophile, and recommends using Asset Tracking via Ninja Tracking. Rob can be found blogging about all things technology related, from GPS tracking to mobile phones.
- opossum n., pl. opossum or -sums. Any of various nocturnal, usually arboreal marsupials of the family Didelphidae, especially Didelphis. — "opossum: Definition from ",
- Opossum Didelphis virginiana control and management information. An opossum (Didelphis virginiana) is a whitish or grayish mammal about the size of a house cat (Fig. 1). — "Opossum Control and Management Information",
- The order Didelphimorphia includes only New World marsupials, which are all species of opossums. For example, the opossum has 50 teeth, more than any other mammal, whereas most placental mammals have only 44. — "OPOSSUM",
- Opossums (Didelphimorphia, pronounced /daɪˌdɛlfɨˈmɔrfi.ə/) are the largest order of marsupials in the Western Hemisphere. The Virginia Opossum was the first animal to be named an opossum; usage of the name was. — “Opossum - Wikipedia, the free encyclopedia”,
- Opossum is the common name for various small- to medium-sized marsupials comprising the mammalian order Didelphimorphia and found in the Western Hemisphere. The Virginia opossum (Didelphis virginiana), the original animal named opossum, is the only marsupial native to North America north of Mexico. — “Opossum - New World Encyclopedia”,
- Nonprofit wildlife rehabilitation and educational organization dedicated to providing care for injured and orphaned wild opossums. — “Opossum Society of the United States”,
- The opossum (Didelphis virginiana) is the only native North American marsupial. The opossum is not native to California, but was introduced many years ago from the east coast of the United. — “Opossum Management Guidelines--UC IPM”, ipm.ucdavis.edu
- Opossum definition, a prehensile-tailed marsupial, Didelphis virginiana, of the eastern U.S., the female having an abdominal pouch in which its young are carrie See more. — “Opossum | Define Opossum at ”,
- The theory that the opossum descended from a family of Irish O'Possums has been entirely discredited by modern scholarship. Once a female opossum mates, she gives birth a mere 13 days later to a litter of roughly a dozen baby opossums that are each no. — “Opossum Facts”,
- Article on opossums in Loudoun County Virginia as published in the Loudoun Wildlife Conservancy's Habitat Herald, an environmental education newsletter centered on providing information on local wildlife, plants and habitats. — “Habitat Herald Newsletter: Opossum”,
- Definition of opossum in the Online Dictionary. Meaning of opossum. Pronunciation of opossum. Translations of opossum. opossum synonyms, opossum antonyms. Information about opossum in the free online English dictionary and encyclopedia. virginia. — “opossum - definition of opossum by the Free Online Dictionary”,
- The opossum has been around for at least 70 million years and is one of Earth's oldest The opossum is about the size of a large house cat. It has a. — “Virginia Opossum - Didelphis virginiana - NatureWorks”,
- opossum , name for several marsupials , or pouched mammals, of the family Didelphidae, native to Central and South America, with one species. — “opossum Facts, information, pictures | ”,
- The opossum is a marsupial commonly known for its ability to play dead. Learn more about the opossum at HowStuffWorks. — “HowStuffWorks "Opossum"”,
- Opossum Info: Opossums are unique for several reasons. They are the only North American Opossums also have a prehensile tail, from which they can occasionally hang. — “How To Get Rid of Opossums / Possums”,
- Welcomes you to the world of the Virginia Opossum. remember that it's still nice to "stop and smell the roses ", and watch an occasional opossum waddle by. — “National Opossum Society”,
- Learn all you wanted to know about opossums with pictures, videos, photos, facts, and news from National Geographic. — “Opossums, Opossum Pictures, Opossum Facts - National Geographic”,
- opossum (plural opossums, diminutive opossumpje, diminutive plural opossumpjes) opossum" Categories: Webster 1913 | English nouns | Zoology | Dutch. — “opossum - Wiktionary”,
- Opossums are a member of the order Marsupialia, a primitive group of mammals found most commonly in Australia. Its feet are plantigrade (shaped so the opossum walks of the sole of its foot with the heel touching the ground) and its toes are dexterous (skillful,. — “Opossum”,
- Diagrams and photos of tracks, basic facts, personal notes, and information about opossum babies. — “Animal Tracks - Opossum (Didelphis virginiana)”, bear-
- The Opossum can adapt to human environments easily and usually by choice. Your home is the den site due to increases in urbanization or land development in Opossum inhabited areas, and the overpopulation of Opossums in these areas does not help. — “Blue Ridge Wildlife Management, LLC”,
- Encyclopedia article of opossum at compiled from comprehensive and current sources. — “Opossum encyclopedia topics | ”,
related videos for opossum
- Brazilian Short Tailed Opossum Care I've had many questions on how to care for these animals. This quick video explains some of the basics. Here is some repeated + more information. Brazilian Short Tailed Opossums: In nature STO's could be considered scavengers. They will eat a variety of things from fruits, veggies, nuts, and insects, to small rodents. They are small animals only reaching about 5" body length. Cage - STO's love to run around. Provide the largest cage possible. 1/4" or 1/2" bar spacing. Inside the cage you need a wheel, food dish, water bottle, and sleeping area/hide. Food - For their food dish which should be kept full at all times: Dog food, cat food, ferret food, sugar glider food, hedgehog food. "Treats" such as fruits, veggies, worms, chicken, or rodents should be fed 3 to 4 nights a week. *If you feed rodents - please only feed pinkie or fuzzy mice live. Adult mice pose a threat to your STO. My STO's have been raised on live foods. Lucy was born in my house, and she has eaten live rodents since she was 3 months old. She is a trained/experienced hunter. ***I am one of the few STO owners who feeds LIVE rodents! If you purchase a STO from a pet store or another breeder it is unlikely they have ever eaten a live rodent! This puts them at a MUCH GREATER risk of being bitten/hurt/killed by the mouse as they are un-experienced and may not know what to do! STO's as pets: STO's are wonderful pets! They can really connect/bond with their human. Some people even carry their STO's around with ...
- Opossum Opossum we found running around in circles in our field!
- Varmint Hunting Opossum Kill in Nebraska with Mizell's Monsters A nice opossum I actually killed in daylight in Nebraska. Possoms are not our normal game animals but we have hunts for almost every huntable game animal on the planet and we have a good time doing it. We film most hunts and also will film any hunt, anywhere on the planet. Please call for any questions at all or if you would like to book your hunt of a lifetime. 251-504-4709 you may email us from our website Please Subscribe! Fan us on Facebook!
- Virginia Opossum - HD Mini-Documentary Transcript: "The Virginia Opossum also known as the North American Opossum is the only marsupial in North America. Marsupials have pouches for carrying their young through early infancy. Baby opossums live in these pouches for 2-3 months and then ride on their mothers backs for an additional month or two. This is a huge portion of their life, considering that opossums have an average life span of only two years in the wild. Opossums are slow, solitary, nocturnal animals and prefer to be left alone. When threatened they will hiss and bear their teeth, but under extreme circumstances they will play dead. Playing Possum serves two purposes it discourages predators that only eat live prey and it convinces some larger animals that they are not a threat to their young. Virginia Opossums are the largest Opossums in the world. They are 15-20 inches long and weigh 9 to 13 lbs about the size of a domestic cat. They have clawless thumbs on their hind feet and hairless prehensile tails that can grab objects and help them to balance when they are climbing."
- opossum playing dead #3
- Land of the Opossum Naturalist Ryan Wofford takes you into the land of the Opossum for an up close look at what these amazing creatures are all about.
- ME Pearl Presents PROPER OPOSSUM DENTAL HYGIENE.mov
- My opossums Eating treats.
- Titmouse vs. Opossum A titmouse takes hair from an opossum to use for nesting material.
- Cross-Eyed Opossum Captures the Hearts of Germans For more news visit ☛ or Follow us on Twitter ☛ http Heidi, the cross-eyed opossum, becomes Germany's newest star. Thousands of fans are already following the animal's Facebook page. She's middle-aged, grey-haired and stays up all night. Still, Heidi the cross-eyed opossum is Germany's biggest media sensation, and she has not even made her debut at the Leipzig Zoo. Heidi appears to be the next in a line of animal celebrities in Germany. The two-and-a-half-year-old opossum has grown popular on Facebook, pushing 82000 fans. She has sparked a popular song on YouTube and will soon star as a soft toy. Zoo officials believe that Heidi's crossed eyes could be the result of a poor diet when she was young, causing fat deposits to develop behind her eyes -- neither of which causes her pain or poses a health risk. [Stephan Kraa, Vet]: "There could be many reasons for this. In the majority of cases it is inherited and is caused by a lack of synchronization with the eye muscles of the opossum. It can also be as a result of fat deposits behind her eyes and as Heidi's sister also has squint eyes, it is probably a genetic problem with them." Poor vision is not much of a problem for Heidi. As a nocturnal animal, opossum's rely heavily on their sense of smell instead of their sight to get around. The public will not get their first glimpse of the opossum until July when the zoo opens its tropical wildlife exhibit. Demand for information about Heidi has been so high the ...
- Opossum in the house *******THE OPOSSUM WAS NOT HURT IN THIS VIDEO******* My parents went out and it was just my brother and I at home when the dogs brought a opossum into the house via the doggie door. My brother grabbed a shovel and I got the video camera. This is what happened. PS sorry for the bad job I did filming. It was all very exciting... especially at the end. *******THE OPOSSUM WAS NOT HURT IN THIS VIDEO******* Why would I post this video if it was?!?
- Opossum Opossums are the only North American Marsupials. They have a pouch that they use to carry their young around. They give birth to 4-24 young that are approximately the size of a grain of wild rice. The newborns spend the next 100 days with mom. They feed on insects, carrion and fruit. Opossums have a relatively short life span, living only about two years.
- Possum Opossum Cat This poor old hungry lonely possum, the only one of his kind around, and unfortunately in the city, decided that being a possum wasn't paying off. So he decided to try his hand at being a cat. He is not very convincing, but little does he know he is welcome for dinner anyway. He is quite polite for a possum, he doesn't bother the cats, and waits for his dinner. The possum and the cats touched noses,became friends afterwards, and respect each others territory. Possums are not carriers of rabies, as their body temperature is too low to support the virus. The pitched roof in the background belongs to the outdoor cat house, which is about 7 ft tall. It has thermostatically controlled electric baseboard heat,a thermopane window, three floors,wood siding and looks like a small cottage. It has been home to 7 cats for most of the past 14 years. The 3 cats in the picture are the last 3 of the family of 7 that lived in the house I just lost the one in the center the summer of 2010, his name was PJ, and he was 15.. I also lost their mother who was 18 years old a year earlier. The cats very seldom if ever leave their back yard, and not too many cats have their own real estate. I do have a few indoor cats as well. In addition to the possum a very much afraid one eyed cat that has been showing up for a hand out for the past week. I do what I can to rescue animals get the care they need and find them homes. I found two puppies last winter someone had thrown out on a busy street in zero ...
- Baby opossum eating a grape This little opossum is between three and four months old. She arrived as a very tiny, very late season orphan about two months ago and is too small to be released this year. She'll be overwintered and released in the spring. As you can see, opossum table manners include picking things up with your fingers and chewing with your mouth wide open.
- Clutch - Opossum Minister Artist: Clutch Song: Opossum Minister Alblum: From Beale Street To Oblivion (c)2007 Clutch/DRT Entertainment
- opossum playing dead #1
- Proper Opossum Pedicure
- Opossum Bob eating, Animal Advocates, Mary Cummins Opossum Bob is an educational opossum. He is eating a specially made nutritional "pancake" with fruit, vitamins and minerals. He cannot be released back to the wild because he has webbed feet. He goes to schools and community meetings to teach people about opossumsI am a licensed wildlife rehabilitator. Animal Advocates, Mary Cummins, www.animaladvocates.us
- Heidi the Cross-Eyed Opossum Meet Heidi, the cross-eye opossum! Heidi's star is rising fast, with more than 176000 people following the animal's Facebook page.
- Heidi - the cross-eyed Opossum This sweet little girl lives in the Zoo of Leipzig (Germany)
- Cross-eyed opossum "Heidi" from North-Carolina is a media-star in Germany Not able to survive in the wild, a cross-eyed opossum from North Carolina now lives in the zoo in Leipzig, Germany.
- Cross-eyed Opossum Catches Public's Gaze A cross-eyed opossum in Leipzig, Germany is becoming the latest animal sensation in that nation. Heidi is already spawning stuffed toys and glances from admirers. (Jan. 12)
- Proper Opossum Analysis Part 1 Safeguarding Sanity
- Nursing, feeding baby opossum, Animal Advocates, Mary Cummins This baby opossum is being nursed with an oral 3 cc syringe with Esbilac puppy milk. We try to keep the baby at a 90 degree angle to the syringe. He will lap off the end of the oral syringe. They do not suckle. He will be released back to the wild when he is able.I am a licensed wildlife rehabilitator. Animal Advocates, Mary Cummins, www.animaladvocates.us
- V-22 Osprey Tiltrotor The best video I have seen with the V-22 Osprey. I don't have a clue who made it but it ROCKS!!!
- Honda Element and Friends - Opossum The third of the five Honda Element and Friends commercials.
- Baby opossum calling for mom, Animal Advocates, Mary Cummins I just got this baby opossum in and she is calling for her mom. She is making a sound that sounds like "che" or a little sneeze. Generally after a day they give up calling for mom. If you hear this sound, there is probably an orphaned opossum nearby. This opossum will be released back to the wild when she is able.I am a licensed wildlife rehabilitator. Animal Advocates, Mary Cummins, www.animaladvocates.us
- In Defense of Opossums - A TWRC Education Video As a wildlife rehabilitation and education facility, we see and hear every day how the public is misinformed, or uninformed, with respect to the opossum. Designed for children and adults, this video has been created as an education tool to show the opossum as it truly is, present facts, and to dispel some of the myths about this wonderful creature. TWRC is an urban wildlife emergency and rehabilitative care facility serving the Greater Houston area. Established in 1979, TWRC focuses on conservation, public education, and wildlife rehabilitation, and is operated by part-time staff and volunteers who are permitted rehabilitators and animal lovers. TWRC is a 501(c)(3) non-profit organization which receives no federal or state funding. Because of this, we rely on individual, corporate, and foundation contributions to continue our efforts in preserving and caring for Texas wildlife. Visit us at www.twrc- or call 713-468-8972 to learn how you can become a volunteer or donate to our organization.
- Proper Opossum Service Animal
- ME Pearl Presents PROPER OPOSSUM MASSAGE
- Heidi the opossum - The Trailer A tribute to Germanys coolest opossum. A new addition to the Leipzig Zoo has yet to be seen by the public, but that hasn't stopped her from becoming a star. Heidi, a young cross-eyed opossum, is shaping up to be the most popular furry critter in Germany since Knut the celebrity polar bear. She won't be visible to the public until July, but a cross-eyed opossum has turned into Germany's new media darling. The reason for Heidi's crossed eyes is unclear, but zoo officials speculate that it might be because of fat deposits behind her eyes, caused by a bad diet early in life. The eyes might look off, but they cause the animal no pain, and don't affect her ability to get around, according to the zoo. She is, aside from her looks, a normal opossum. Hot news! Officials at Leipzig Zoo, where Heidi lives, say she will appear on the "Jimmy Kimmel Show" on the US network ABC on Oscars night.
- Opossum eating strawberries, Animal Advocates, Mary Cummins Click to see in higher quality An educational opossum eating over ripe strawberries on the lawn. They tip their head back when they eat so they don't spill the juice out of their mouthI am a licensed wildlife rehabilitator. Animal Advocates, Mary Cummins, www.animaladvocates.us
- opossum era glaciale 2
- David Letterman - Heidi, Cross-Eyed Opossum From YouTube fame to rehab, find out everything you need to know about the cross-eyed opossum.
- Proper Opossum Gourmet Cooking
- Matt Duke- "Opossum" Matt Duke performs "Opossum" live on Fearless Music in NYC.
- My new exotic pets - Brazilian short tailed opossums & African pigmy hedgehogs My new exotic pets!
- Baby Opossums!! Still newly-born at the time of filming, my uncle happened to see a dead mother opossum on the side of the road with only two of her babies still alive. He has been taking very good care of them ever since the rescue and has even constructed a replica nest for them. They don't bite and are very friendly as you can see! Enjoy :D Figured I would add this in here since it's been a while.. They are both still alive and are doing very very well! By the way, their names are George Jefferson and Wheezy Jefferson. So it's been a while, but I plan to make another video with these cuties sometime soon. They are almost full grown and are doing fantastic. Hope to get a video up soon!
- Baby Opossums Baby Opossums raised and released by the Rainbow Wildlife Rescue at
- Das schielende Opossum Heidi Heidi Heydi Das schielende Opossum schielt viral schielendes
- Heidi the Cross-Eyed Opossum: Germany's latest love An opossum in Germany has become a worldwide internet sensation. The cross-eyed 2 year old, named Heidi, is yet to make her debut at Leipzig zoo, but already she's attracted over 110000 Facebook fans. That's more than the German Chancellor Angela Merkel. Heidi's the latest in a line of much-loved creatures to be given celebrity status by the German media after Knut the orphan polar bear and Paul the psychic octopus. RT on Facebook: RT on Twitter:
- Opossum Eating At Feeder More Videos: More Nature
Blogs & Forum
blogs and forums about opossum
“Posted in goatee-stroking musing, or something | Tagged eh, genius, opossum | 1 Comment contact me. geo mashup. not me. ad free blog. scruss's Twitter feed. scruss: now has a huge magenta sign outside his office window”
— opossum | We Saw a Chicken,
“VR Show Blog. VR Website. VR on MySpace. Info. News Archive. V-Links. About Us. Non-Profits Blog Entries. Opossum's Blog. This blog has no entries so far. home”
— Veganica > Opossum > Blog,
“ Blog Images Animaux Opossum”
— Gallery - Opossum (Animaux, Images),
“Even a dead and smashed opossum. That was pretty gross since some of it flung onto the decapitating a rabbit in your front fork http://forum.slowtwitch”
— Slowtwitch Forums: Triathlon Forum: Hit my first live opossum,
“Blog Post, club.ks95.com”
— Opossum > Blog, club.ks95.com
“Saber Blog " Opossum. Chupacabra x Bear x Rat= My Friendly Neighborhood Opossum. Neighborly Opossum”
— Saber Blog " Opossum,
“Blog page about nuisance opossum removal and control”
— Opossum Control Blog - Possum Information & Stories,
“Art - community of artists and those devoted to art. Digital art, skin art, themes, wallpaper art, traditional art, photography, poetry / prose. Art prints”
— #Opossum-Lovers Blog on deviantART: Mascot Contest!, opossum-
“Lucky was our beloved pet, but like most opossum, he was a loner and he tended to hang then traveled across the country and ultimately wound up as many opossum do: road kill”
— AlienZoo " opossum,
related keywords for opossum
- opossum facts
- opossum rabies
- opossum diet
- opossum pictures
- opossum wiki
- opossum heidi
- opossum habitat
- opossum vs possum
- opossum sounds
- opossum tracks
- opossum facts for kids
- possum facts for kids
- possum facts australia
- opossum facts rabies
- opossum facts and pictures
- opossum facts wikipedia
- possum facts for children
- possum facts nz
- virginia opossum facts
- opossum fun facts
Not an actual black person.
Blackface is the tradition of a performer putting on stylized black makeup to appear as a stereotyped character of African descent. Blackface performances often took the form of Minstrel Shows. As much as some people would like to forget it, blackface performances were mainstream American entertainment for almost 100 years until racial backlash finally capsized them. Blackface imagery was also transported to other countries, where the lesser stigma against it allowed the tradition to prosper for longer. Blackface characters still pop up in Japanese culture from time to time.
Due to sensitivity over this issue, particularly in America, any attempt by a non-black actor to play a black character will usually be labelled by someone as outright blackface, even when it's really just a case of Fake Nationality.
Yellowface is a similar practice involving Asian characters, while Brownface is for characters of various "brown" races.
Tropes associated with Blackface
- Ash Face: Older animated works would often segue from an Ash Face incident to a Blackface gag.
- Big Lipped Alligator Moment: (A rather unfortunately named trope for this page.) A common trope of early cinema was to find some excuse for the main character to intentionally or accidentally take on the appearance of blackface, then pause for a minstrel show-style musical number.
- Black Like Me: A white person makes himself look black and experiences everyday life as a black person and learning An Aesop about tolerance.
- Dead Horse Trope: Blackface never appears in modern mainstream media played straight. If it shows up at all, it's for Black Comedy (No Pun Intended) or satirical purposes, or it comes from a country where it's not as taboo. Of course, racists still cling to it for warmth.
- Modern Minstrelsy
- Older Than They Think: The tradition of blackface extends hundreds of years before its rise in popularity in America.
- Old Shame: In its heyday, blackface was a major part of America's distinct artistic culture. Today it's treated as an embarrassing episode in American history. Many beloved film and cartoon characters appeared in blackface in the early days of film and animation, when the trope was still mainstream. The companies who now own these intellectual properties are understandably reluctant to air them out.
- Paper-Thin Disguise: Often when blackface cropped up in old cartoons and slapstick it was being used as a disguise. This was seen as no more wrong than having the character cross-dress at the time.
- Pretty Fly For A White Guy: The modern-day successor to blackface, in which non-black people will behave in ways that are stereotypical for black people. However, the intention in this case is usually to be cool rather than to mock black culture.
- Values Dissonance: Blackface was once considered as harmless good fun, but appearing in blackface today is about as acceptable as burning a cross.
- Zeroth Law Of Trope Examples: The eponymous role in Othello was traditionally played by a white actor in blackface, and this remained the case long after blackface had become unacceptable in most media. Shakespeare himself at least had the excuse of black actors being in rather short supply in England at the time.
Works in which Blackface appears
Anime and Manga
- Dragon Ball: Mister Popo and other black characters have the appearance of blackface characters.
- The Galoot Sect assassins from Flag invoke this aesthetic with their creepy, golliwog-like masks, possibly meant to represent the black goddess Kali.
- The Black Looks, an anti-robot hate group, hide their identities by wearing blackface in the classic Astro Boy story Capetown Lullaby (aside from their leader who wears a weird mask that looks like a black Gonzo the Muppet).
- An unknown character in an omake from The World God Only Knows appears to don this (bottom left panel). The character was going to perform a ritual to curse people.
- Spike Lee's Bamboozled: A modern African-American filmmaker creates a television minstrel show in which black actors perform in classic blackface. He's trying to make a point, but to his horror, the show becomes successful. Real-life audiences didn't respond well to the use of blackface in making a heavy-handed point about modern portrayals of black people.
- Al Jolson's The Jazz Singer features the main character performing in blackface in a minstrel show as part of his journey to self-expression. Ironically, Jolson's character can only express himself by putting on the mask of a black man. This was also semi-biographical, as Al Jolson really did perform in blackface and felt a special kinship with African-Americans. He actually helped a lot of blacks break into the music business, demanded that they receive equal treatment, and was famously the only white man allowed in the all-black nightclubs in Harlem.
- Gangs of New York features a propogandized performance of Uncle Tom's Cabin, in which actors playing parts of slaves wear blackface.
- Fred Astaire does a blackface number in Swing Time (1936). Many fans regard this as more of an "Othello" than a "minstrel show" situation, as it was an homage to a specific black performer (Bill "Bojangles" Robinson).
- The otherwise squeaky-clean classic Holiday Inn shows for Lincoln's Birthday a full minstrel show featuring dancers in blackface.
- In the 1936 film version of Jerome Kern's Show Boat, Magnolia and the show boat troupe don blackface for the "Gallivantin' Around" number. Since one of the themes of this musical is the destructive nature of race prejudice, this may be deliberate irony — or it may just be a lamentable lapse of taste.
- In Whoopee!, Eddie Cantor tries to pass himself off as a black man, performing a pretty racist shuffling darky routine, then belts out a performance of "My Baby Just Cares for Me" in his classic singing style. Eddie Cantor was the last major vaudeville performer to use blackface in his act, and his character was Fair for Its Day - an intelligent, in fact nerdy character, as opposed to the "Carry Me Back to Ol' Virginee" standard.
- The Three Stooges disguised themselves as slaves using blackface in the Civil War-themed short "Uncivil Warbirds".
- Laurel and Hardy disguised themselves in blackface after breaking out of prison in Pardon Us.
- The Marx Brothers, evading the law, momentarily don blackface to hide among a bunch of stable hands in A Day at the Races. Harpo only paints half his face.
- The Birth of a Nation used blackface not as a comedic device, but as a means to allow white actors to portray black and "mullato" characters in an overtly racist film.
- The Eighties comedy Soul Man features a Harvard Law student who darkens his skin to get a scholarship for black students. The film caused some controversy during its release.
- In the Polish film Vabank, set in 1930s' Poland, one of the protagonists, Moks, at one point sings in blackface.
- In Bob Dylan's film Masked And Anonymous, Ed Harris appears as the ghost of a murdered minstrel named Oscar Vogel, very much in the Al Jolson mode. Dylan has invoked minstrelsy on other occasions, notably naming his 2001 album Love and Theft after Eric Lott's academic book Love and Theft: Blackface Minstrelsy and the American Working Class.
- Lampshaded/parodied in Tropic Thunder: Robert Downey, Jr.'s white Australian character Kirk Lazarus portrays black Sergeant Lincoln Osiris by getting his face and body surgically altered to look like a fairly realistic black man. His character's personality, however, is embarrassingly over-the-top, and he stays in character at all times, much to the chagrin of the actually-black Alpa Chino. The fact that the whole thing is meant to be a parody of Oscar Bait and extreme Method Acting went over the heads of some critics and viewers, who claimed it was tantamount to blackface.
- Pops up, of course, in C.S.A.: The Confederate States of America, where in the Alternate Universe in which the South won the Civil War it never becomes taboo.
- The Paper-Thin Disguise variant shows up in Silver Streak. Con man Richard Pryor helps to disguise traveler Gene Wilder, who's been framed for murder. Presumably it helps that Wilder has naturally curly hair. Hearing Wilder's clueless Soul Brother patter, Pryor says, "I sure hope we don't run into any brothers."
- Trading Places also uses it as a Paper-Thin Disguise for Dan Aykroyd in the scene on the train. The others with him are also in disguise/costume, but their target for a theft has met Aykroyd's character before, necessitating something more drastic: brown shoe polish.
- The white voice actors of Amos N Andy appear in blackface in their sole feature film Check and Double Check. The comedy duo was at the height of their fame, but fans were apparently disappointed to see their favorite radio characters looking like white guys in blackface. The film was not a success.
- An interesting example is found with Tommy Chong as the blues singer Blind Melon Chitlin' in Still Smokin: the humor is not based around the character being black, but being blind.
- This is how the villain of The Zebra Killer disguised himself while committing murders.
- Inverted in White Chicks in which Shawn and Marlon Wayans play two detectives who disguise themselves as white women.
- In South Park: Bigger, Longer and Uncut, Cartman briefly appears wearing blackface during his performance of "Kyle's Mom is a Bitch."
- In The Last Emperor, the deposed Emperor Pu-yi performs a concert backed up by Chinese musicians in blackface.
- A character seen briefly at the start (and in a throwaway gag near the middle) of Forbidden Zone is a slumlord and crack dealer played by a man in blackface; there are several others in bit parts throughout, done for comedic shock value.
- Rochester the butler in Gross Out is not only done up in blackface, but is a grotesque stereotype.
- In Django Unchained, Samuel L. Jackson put on darker makeup to play Boomerang Bigot house slave Stephen. Jackson conceived of the look when deciding that Stephen had no white ancestry at all. Funnily enough, he actually wore more makeup than needed because he thought he didn't look dark enough yet, only to see the film later and realize the photography was making him even darker.
- In one of the Little House on the Prairie books, the town has a contest where different townsfolk each put on a little show. Laura's father and a few his friends win it with a Blackface routine. Don't bother looking for this scene on the television series, obviously.
- Part of the immense, immense controversy involved in the self-published Revealing Eden book is that, in its future where blacks have all the cultural cache and whites are a minority, a number of whites use "Midnight Luster" cream that not only protects against UV radiation but allows them to "pass" as black. And yes, there were trailers for the book featuring white actresses in blackface.
- The 1960 German children's novel Jim Button features the main character with a clear blackface design on the cover. Even the 1986 TV puppet adaptation follows the design very closely, as it did not hold the same negative connotations in Germany as it did in the United States.
- The Mad Men season 3 episode "My Old Kentucky Home" features Roger Sterling in blackface, singing the title song to his new, twenty-something wife. Some of the characters are horrified, but more about a respectable businessman making an ass of himself than moral indignation over the racial insensitivity.
- Lampshaded in It's Always Sunny in Philadelphia, when the gang decide to do a sequel to Lethal Weapon with both Mac and Dennis playing Murtaugh. Dennis refuses to be in blackface, but has no problem doing a "black voice." Mac dons full shoe polish and tries to retroactively use Laurence Olivier as justification. Nobody remarks on the fact that Frank spends the entire movie playing a villainous Native American stereotype.
- In Australia, The Footy Show memorably did this one time when Indigenous player Nicky Winmar was unable to appear, and the show was speculating about what had happened to him. In the final segment of the show, host Eddie McGuire is about to review the teams for Fremantle and West Coast when the audience starts laughing. He twigs that Sam Newman has done something and is almost scared to look at Sam impersonating Winmar in black facepaint. Despite the cries of outrage over the incident, Winmar had the last laugh. One year later, there is a knock at the guest door just after Sam laments that they never have that anymore. He gets up to answer it to reveal Winmar, all smiles, apologizing for being a year late.
- In another Australian incident, an act called "The Jackson Jive" appeared on Hey Hey, It's Saturday, a variety show, shortly after Michael Jackson's death. Five men, doctors in everyday life, danced in blackface and afro wigs, while a sixth, dressed as Michael, replete with ghost-white makeup, sang "Can You Feel It." American guest judge Harry Connick Jr. was understandably offended. The host apologized to Connick on air. Chalk it up to cultural differences - Australian media has historically never used blackface in the same way the US did. Also the Jackson Jive are all decidedly not white.
- In the Polish comedy series Alternatywy 4 (1983), one character was a black American exchange student named Abraham Lincoln, played by a white man wearing blackface. Poles attempted to justify the portrayal due to the relative lack of black people in Poland at the time.
- In a Halloween episode of the Irish video-diary sitcom Dan and Becs, the main characters plan to go to a party in fancy dress as a couple. Due to a miscommunication, Bec thinks they're going as Richard Gere and Julia Roberts from Pretty Woman and dresses accordingly. Dan thinks they're going as... Ike and Tina Turner. They give their diary entries on the night, still in costume, Dan still covered in blackface. Their black taxi driver is not impressed with Dan's get-up until, that is, he remarks, "Oh, you're meant to be Ike Turner. Why didn't your girlfriend go as Tina?"
- According to The Goodies, their ancestors were cruelly kidnapped by the BBC and forced into blackface. They eventually fought for equal rights, no matter what colour paint, be it black, white, green, polka-dot. The episode was actually labelled: DO NOT BROADCAST - RACIST in the BBC archive. The Goodies also appear in blackface in the South Africa and Eckythump episodes. The show also mocked The Apartheid Era racists by showing how horrified they are of blackface performers: white people imitating black people.
- In Jeeves and Wooster, Bertie Wooster and Jeeves both disguise themselves in blackface as part of a troupe of minstrels in order to escape J. Washburn Stoker, father of Pauline, the girl in one of Bertie's ill-fated engagements. The minstrels were there as part of Stoker's son's birthday party. Wooster ends up having to perform "Lady of Spain" in blackface with the minstrels before being able to escape. Later, the Harley St. doctor Sir Roderick Glossop has to dress up in blackface to entertain a young boy who was promised that he could see the minstrels at the party but was unable to go in the confusion.
- The Sarah Silverman Show played with this. Sarah argued with a black man that being Jewish is harder than being black, and the two agreed to spend a day as the other ethnicity to test it. Sarah dressed up in a horribly stereotypical and offensive way, receiving very unpleasant remarks and believing that people genuinely took her for black and were being racist. When she met the man in the usual spot where she and the gang get their coffee and said that she agreed being black was harder, the black man said he had realised being Jewish was actually harder. He was wearing a yarmulke, peot, a long false nose, and a shirt saying ‘I <3 Money'. The man left the place as the two exchanged suspicious looks.
- The Python crew occasionally donned blackface to play Indian or black roles for Monty Python's Flying Circus sketches.
- In an episode of The Mighty Boosh, Julian Barratt plays "Rudy", a partially two-dimensional guitarist/sage with an appearance resembling Jimi Hendrix. Barratt has darkened skin as well as fake teeth to make him appear gap-toothed. He also has a large fake afro with a door to another dimension. In another episode, Rudy is fully three-dimensional and no longer in blackface, though he does retain his magic afro.
- Are You Being Served?.
- Mr. Grainger does himself up in blackface to perform "Mammy" in the B-plot of an episode. Ultimately, this rolls back into the main plot: In order to replace a malfunctioning animatronic Santa, the Men's and Lady's Wear staffers are auditioning for the role (and its extra pay). Grainger doesn't have time to remove the blackface before the audition... which makes him more attractive to the child brought in to select who'll get the role. The kid is black.
- Another episode has the staff performing a minstrel number, in blackface, to celebrate Old Mr Grace's supposed African heritage. As you can see on his face at the end of the show as his staff is strutting about in blackface, he is horrified at this spectacle.
- Seen in the pilot episode of Boardwalk Empire (which takes place in the 1920s) during the New Year's celebration.
- Parodied by Spitting Image in "The White & White Minstrel Show", which features the polar opposite of this trope: black people wearing whiteface. The sketch itself is a biting satire of apartheid in South Africa, where they think "that blackfaces don't belong with black".
- The Black and White Minstrel Show performed musical numbers in blackface on a primetime BBC slot from 1958 to 1978. The show scaled back the blackface numbers toward the end of its run. Its cancellation was not due to the blackface, however, but to cutbacks on variety shows. The stage show version continued until 1987!
- Scrubs has a flashback to an incident where Turk convinced JD to wear blackface (where Turk himself would be wearing whiteface) while they met with some friends of Turk's. Turk ends up being distracted at an inopportune moment, meaning that JD seems to be alone when the guys see him. It does not end well.
- In an episode of Community, Chang dresses up as his Dungeons & Dragons Drow character, which includes jet black skin. Both Shirley and Pierce think he's in blackface.
- In an episode of Gimme A Break, Samantha dresses Joey up in blackface to perform at Nell's church. Hilarity does not ensue.
- In the All in the Family episode "Birth of the Baby", Archie is forced by his lodge to appear in blackface in a minstrel show. Right before he's supposed to go onstage, he's informed that his daughter has gone into labor, so he ends up in the hospital in blackface.
- Played for Laughs in an episode of Soap when the Major, Chester, and Donahue are in blackface in preparation for a night raid on the Sunnies' church/bunker along with Benson in order to rescue Billy. (The blackface in this case is an ordinary stealth technique so as not to be easily seen.) As they're planning, Jessica comes in and apologizes for interrupting Benson's reunion with his family. After they explain that it's them, Jessica asks why they're dressed as 'Negroes'. During the raid they get caught, and Benson covers for them by saying they're the Step Brothers, and leads the others in a dance "audition."
- 30 Rock
- Jenna Maroney has appeared in blackface twice. The first time, not unlike the The Sarah Silverman Show example, arose from an argument with Tracy Jordan about whether it is harder to be black or a woman. The second occurred when Jenna dressed as Pittsburgh Steeler Lynn Swann while her crossdressing boyfriend dressed as Natalie Portman in Black Swan making them ...two black swans.
- A live episode, where Kenneth defends live television, has a flashback to an Amos And Andy expy show - Tracy Jordan plays one half of the team, and Jon Hamm plays the other, in poorly applied blackface and horribly over-the-top mannerisms that finally got on Jordan's last nerve. Kenneth explains that the network thought two black people on the same show would make the audience nervous - "...a rule NBC still uses today!"
- Saturday Night Live
- When Fred Armisen started playing Barack Obama, a minor stink was raised about whether it constituted blackface. The issue died out after it was argued that Fred Armisen and Barack Obama are both mixed race and the fact that they're not the same mix just makes it a standard case of Fake Nationality. Plus the darkening of Armisen's skin looks like the spray tan it probably is, and most people aren't offended by spray tans.
- Darrell Hammond played Jesse Jackson with fewer complaints.
- In the pilot episode of Life's Too Short, Warwick watches a performance of "Ebony and Ivory" performed by two dwarfs, one of whom is a woman in blackface. Warwick says that he's pretty sure you can't "black up" these days, but "maybe in the North."
- Billy Crystal sparked some very minor controversy when he appeared as Sammy Davis Jr. during his intro to the 84th Academy Award presentation. Crystal used realistic makeup to resemble Davis, not stylized blackface. Critics were apparently unaware that Crystal had been doing Davis impressions for years, including his time on Saturday Night Live, with Davis's personal blessing.
- It's implied Barney once used this on How I Met Your Mother. He mentions that the worst lie he ever told to get a woman into bed (and that is a very competitive category) was when he used a seduction technique called "The Soul Man". We're not told the details of what it involved, but he used it to hook up with a woman who would only date black guys, and he did it while going by the alias "Barnell".
- Inverted in Chappelle's Show in sketches where the black host Dave Chappelle lightens his skin and plays a stereotypical white man.
- In The Office US, Dwight has a warehouse employee dress up as Black Peter, complete with blackface. Luckily he figures out that the office staff will find it offensive, so he texts Black Peter to not show up. He appears later with most of the makeup wiped off.
- In the episode "Korzenie" ("Roots") of the Polish sitcom Swiat wedlug Kiepskich, the main protagonist wakes up to discover that not only he but everyone around him is suddenly black, including people on TV. Nobody understands his surprised reaction: all the pictures in the family album suggest that he has always been black, and a church service with gospel music is being held in his apartment. The episode ends with him finding out that his asshole neighbour is still white; when he points this out, the neighbour responds with "So what? Just because I'm white doesn't mean you can bully me! I'm a human being just like the rest of you!".
- The Man Show had two recurring skits where Jimmy Kimmel browned his skin. One skit had Jimmy dressing up like Utah Jazz forward Karl Malone and dispensing Cloudcuckoolander ramblings about nothing. Kimmel had previously developed his Malone impression for the radio, where costuming was not an issue. The other skit had him dressed up like Oprah Winfrey as a parody of Oprah's feminine lifestyle segments on her own show.
- An entire genre of music, the "coon song" was dedicated to mocking black people, sung by performers in blackface. Paradoxically, such songs were often written by African American composers such as Ernest Hogan, Sam Lucas, and Bob Cole. The genre was a precursor to ragtime and was eventually replaced by it. Note that in popular usage, "coon song" was often applied to music sung, originating from, or merely in the style of, Negro music, without regard to content. One who sang Negro songs was a "coon shouter."
- Appears rather shockingly in the video for Culture Club's "Do You Really Want To Hurt Me?" as Boy George is convicted by a jury of jazz-handing minstrels.
- Florence + the Machine's music video for "No Light, No Light" has two savage people in blackface menacing the white singer, who is saved by white choirboys and seen at the end with a white lover. Supposedly, the men in blackface are meant to be demons.
- In 1998 D-Generation X mocked the all-black Power Stable Nation of Domination by dressing as them, complete with blackface. This somehow received little to no complaints, and is still considered one of Raw's funniest moments.
- At WrestleMania VI, Roddy Piper fought Bad News Brown with half his body in blackface, after Bad News Brown called him racist. Apparently he did it to show that color doesn't matter.
- Most black vaudevillians wore blackface. Some were light enough that they needed to put on burnt cork to make it clear to the audience; others just bowed to vaudeville standards.
- Popular vaudeville actor Bert Williams often performed in blackface. As he gained more success, his works phased out the extreme racial humor. His popularity among white and black audiences ultimately made him a force for increased racial tolerance.
- Bill Robinson was among the first black performers to make it big without blackface.
- Many opera roles, such as Otello in Verdi's opera, and Monostatos in The Magic Flute have been portrayed in blackface. There is still a shortage of black opera singers, but white singers playing these roles no longer black up.
- Similarly, the title character in Shakespeare's Othello was traditionally played by a white actor in makeup, though the original King's Men might not have used it. It wasn't until 1943 that a black actor played the role in a major stage production, and even the success of that production didn't stop the common practice of using blackface, which lasted well into the '60s.
- Referenced in the play No Sugar, which revolves around a family of Australian aborigines in the 1930s. In one scene, they recall a recent trip to the cinema, where they saw an American film with a blackface performer, who they joke must have been having a really rough time as a whitefella if he saw becoming black as a step up.
- "Golliwogg" dolls are dolls made in the style of a person in blackface. They can still be purchased in some areas.
- Jynx was originally designed after this aesthetic before its coloring was changed to purple due to complaints. It is sometimes claimed to have been designed after the "ganguro" style, though it was only just beginning to come to prominence in Japan at the time. A more likely candidate is the European Zwarte Piet holiday character mentioned below, considering its Ice typing and appearance as Santa's helpers in the anime.
- Whiscash, the catfish pokemon, also has a similar face, though he's actually more dark blue. This may be unintentional, as many catfish do have rather pronounced "lips" and dark skin in real life. It is probably worth mentioning, however, that catfish is a popular food in many black communities in the southern US...
- The turtle shell in Doki Doki Panic has a blackface appearance.
- MadWorld has the Black Baron, revealed to be white by the announcers. He has the mannerisms of a stereotypical pimp. Given the entire game is all about shock value, it's not surprising to see this trope in action.
- Kingdom of Loathing features a status effect called Black Face, which raises your Muscle stat and damage, but lowers your combat initiative. The effect description simply reads: "Yeah, we went there."
- Unintentionally occurs with Passionate Patti in Leisure Suit Larry 5 due to a malfunctioning copy machine.
- An extremely unfortunate example occurs in Square's NES adaptation of The Adventures of Tom Sawyer, released only in Japan. The caricature used for Jim would almost be cause for Torches and Pitchforks in the U.S.
- One of Xenoblade's main antagonists is actually named Blackface, though he's arguably an inversion, being a black and gold, baroque-looking robot with a sinister white mask. Nonetheless his name was changed to Metalface in the English version for fear of the name's nasty associations.
- Oil Man from Mega Man Powered Up. He was recolored blue and yellow in the English version, but his voice actor still plays him as sounding black-ish and worse yet he's portrayed as being somewhat lecherous, shiftless and not overly bright.
- Looney Tunes characters such as Bugs Bunny would often get soot blown in their faces, causing them to spontaneously parody The Jazz Singer or Eddie Rochester.
- Appeared in quite a few of the early Tom And Jerry shorts as well. When re-aired these scenes tend to be edited out, and a couple shorts which can't be easily edited have been all-out banned and don't even appear on the DVDs.
- Really, nearly every studio during The Golden Age of Animation did this as a gag at some point. At least one Terry Toons short has done this, as seen on Jerry Beck's site.
- South Park riffs on this trope in "Summer Sucks." The town gets covered in ashes, causing all the ash-covered residents to look like they're in blackface. Needless to say, Chef, the resident black man who'd just returned from his vacation, isn't pleased.
- Parodied on Family Guy in season 9, when Chris wanted to dress up as Bill Cosby for Halloween, using blackface as well as his trademark sweater. His mother tried telling him it’s wrong, but Chris just said, ‘Why, don’t I look like him?’ His mother agreed that he did, but then said, ‘You can’t just go out on the street in blackface, it’s racist! Now go put on that Indian head gear I bought you!’
- The book Kaboom! Explosive Animation from America and Japan mentions that "Even today, the question can be legitimately asked: How much of Mickey Mouse is mouse, and how much is blackface clown?"
- The Little Mermaid contains a one-off blink-and-you'll-miss-it blackface gag with the blackfish in "Under The Sea". This wouldn't be racist if the blackfish was actually black, but as it isn't...
- The Smiths in American Dad show up to a black organization's party in blackface after misreading the invitation.
- This happens in The Philippines, particularly in the province of Aklan in the Visayas every January, when the native brown people smear their bodies with black coal to celebrate a feast with the indigenous black minority called the Aetas, with the help of a miracle from a memento left by the Spaniards that helped develop the culture. It's called the Ati-atihan, which literally means "festival of acting like Aetas".
- A Japanese earthquake safety pamphlet passed out as late as 2004 featured a cartoon "sambo" character with a blackface appearance. After some complaints, the pamphlet was redrawn.
- The helpers of Sinterklaas (the Dutch Santa Claus) are usually white people in blackface and colourful costumes called Zwarte Piet ("Black Pete").
- In the late '90s, Japanese girls would do a style called Ganguro. It was said to be started by a clubber by the name of Buriteri, who had darker skin and more extreme makeup than most ganguro. Over the years there were other styles called Manba, Banba, etc., but today 'gals' tend to just have bronze skin or even paper white skin.
- There is a picture of Eva Braun dressed as Al Jolson, in suit and full blackface. Yes, you read that right.
- New York Assemblyman Dov Hikind landed in hot water when a picture surfaced of him costumed as a black basketball player for Purim, complete with darkened skin and an afro wig. Critics labeled the costume blackface, while Hikind insisted that no offence was intended. | 1 | 3 |
What is Myalgic Encephalomyelitis? A historical, medical and political overview
This paper provides a historical, medical and political overview of Myalgic Encephalomyelitis. A must-read paper for anyone with an interest in M.E.
Copyright © Jodi Bassett 2004. This version updated March 2009. From www.hfme.org
Myalgic Encephalomyelitis (M.E.) is a debilitating acquired neurological disease which has been recognised by the World Health Organisation (WHO) since 1969 as a distinct organic neurological disorder with the code G.93.3. M.E. can occur in both epidemic and sporadic forms, over 60 outbreaks of M.E. have been recorded worldwide since 1934. M.E. is similar in a number of significant ways to multiple sclerosis, Lupus and poliomyelitis (polio). M.E. can be extremely severe and disabling and in some cases the disease is fatal.
Is Myalgic Encephalomyelitis a new illness? What does the name M.E. mean?
The illness we now know as Myalgic Encephalomyelitis is not a new illness. M.E. is thought to have existed for centuries. (Hyde 1998, [Online]) (Dowsett 1999a, [Online])
In 1956 the name Myalgic Encephalomyelitis was created. The term was invented jointly by Dr A Melvin Ramsay who coined this name in relation to the Royal Free Hospital epidemics that occurred in London in 1955 - 1957 and by Dr John Richardson who observed the same type of illness in his rural practice in Newcastle-upon-Tyne area during the same period. It was obvious to these physicians that they were dealing with the consequences of an epidemic and endemic infectious neurological disease (Hyde 1998, [Online]) (Hyde 2006, [Online]). The term Myalgic Encephalomyelitis means: My = muscle, Algic = pain, Encephalo = brain, Mye = spinal cord, Itis = inflammation (Hyde 2006, [Online]). As M.E. expert Dr Byron Hyde writes:
The reason why these physicians were so sure that they were dealing with an inflammatory illness of the brain is that they examined patients in both epidemic and endemic situations with this curious diffuse brain injury. In the epidemic situation with patients falling acutely ill and in some cases dying, autopsies were performed and the diffuse inflammatory brain changes are on record (2006, [Online]).
In 1957, the Wallis description of M.E. was created. In 1959 Sir Donald Acheson (a former UK Chief Medical Officer) conducted a major review of M.E. In 1962 the distinguished neurologist Lord Brain included M.E. in the standard textbook of neurology. In recognition of the large body of compelling research that was available, M.E. was formally classified as an organic disease of the central nervous system in the World Health Organisation’s International Classification of Diseases in 1969 with the code G.93.3 In 1978 the Royal Society of Medicine held a symposium on Myalgic Encephalomyelitis at which M.E. was accepted as a distinct entity. The symposium proceedings were published in The Postgraduate Medical Journal later that same year. The Ramsay case description of M.E. was published in 1981 (Hooper et al. 2001, [Online]).
Since 1956 the term Myalgic Encephalomyelitis has been used to describe the illness in the UK, Europe, Canada and Australasia. This term has stood the test of time for more than 50 years. The recorded medical history of M.E. as a debilitating organic neurological illness affecting children and adults is substantial; it spans over 70 years and has been published in prestigious peer-reviewed journals all over the world (Hyde 1998, [Online]) (Hooper 2003a, [Online]) (Dowsett 2001b, [Online]). As microbiologist and M.E. expert Dr Elizabeth Dowsett explains: 'There is ample evidence that M.E. is primarily a neurological illness, although non-neurological complications affecting the liver, cardiac and skeletal muscle, endocrine and lymphoid tissues are also recognised' (n.d.b, [Online]).
Myalgic Encephalomyelitis is not defined by mere ‘fatigue’
Myalgic Encephalomyelitis is not synonymous with being tired all the time. If a person is very fatigued for an extended period of time this does not mean they are having a ‘bout’ of M.E. To suggest such a thing is no less absurd than to say that prolonged fatigue means a person is having a ‘bout’ of multiple sclerosis, Parkinson’s disease or Lupus. If a person is constantly fatigued this should not be taken to mean that they have M.E. no matter how severe or prolonged their fatigue is. Fatigue is a symptom of many different illnesses as well as a feature of normal everyday life – but it is not a defining symptom of M.E., nor even an essential symptom of M.E.
The terms ‘fatigue’ and ‘chronic fatigue’ were not associated with defining this illness at all until the new name (and definition) of ‘Chronic Fatigue Syndrome’ was created in 1988 in the USA (Hyde 2006, [online]). But M.E. and CFS are not synonymous terms.
'Fatigue' and feeling 'tired all the time' are not at all the same thing as the very specific type of paralytic muscle weakness or muscle fatigue which is characteristic of M.E. (and is caused by mitochondrial dysfunction) and which affects every organ and cell in the body; including the brain and the heart. This causes – or significantly contributes to – such problems in M.E. as: cardiac insufficiency (a type of heart failure), orthostatic intolerance (inability to maintain an upright posture), blackouts, reduced circulating blood volume (and pooling of the blood in the extremities), seizures (and other neurological phenomena), memory loss, problems chewing/swallowing, episodes of partial or total paralysis, muscle spasms/twitching, extreme pain, problems with digestion, vision disturbances, breathing difficulties, and so on. These problems are exacerbated by even trivial levels of physical and cognitive activity, sensory input and orthostatic stress beyond a patient's individual limits. People with M.E. are made very ill and disabled by this problem with their cells; it affects virtually every bodily system and has also led to death in some cases. Many patients are housebound and bedbound and often are so ill that they feel they are about to die. People with M.E. would give anything to instead only be severely 'fatigued' or tired all the time (Bassett 2009, [Online]).
Fatigue or post-exertional fatigue (or malaise) may occur in many different illnesses such as various post-viral fatigue states or syndromes, Fibromyalgia, Lyme disease, and many others – but what is happening with M.E. patients is an entirely different (and unique) problem of a much greater magnitude. These terms are not accurate or specific enough to describe what is happening in M.E. M.E. is a neurological illness of extraordinarily incapacitating dimensions that affects virtually every bodily system – not a problem of ‘chronic fatigue’ (Hyde 2006, [Online]) (Hooper 2006, [Online]) (Hooper & Marshall 2005a, [Online]) (Hyde 2003, [Online]) (Dowsett 2001, [Online]) (Hooper et al. 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Dowsett 1996, p. 167) (Dowsett et al. 1990, pp. 285-291) (Dowsett n.d., [Online]).
If Myalgic Encephalomyelitis and ‘Chronic Fatigue Syndrome’ are not synonymous terms, why do some groups claim that they are? What is CFS?
The disease category of CFS was created in a response to an outbreak of what was unmistakably M.E., but this new name and definition did not describe the known signs, symptoms, history and pathology of M.E. It described a disease process that did not, and could not exist.
Why were the renaming and redefining of the distinct neurological disease Myalgic Encephalomyelitis allowed – indeed intended – to become so muddied? Indeed why did Myalgic Encephalomyelitis suddenly need to be renamed or redefined at all? Money. There was an enormous rise in the reported incidence of Myalgic Encephalomyelitis in the late 1970s and 1980s, alarming medical insurance companies in the US. So it was at this time that certain psychiatrists and others involved in the medical insurance industry (on both sides of the Atlantic) began their campaign to reclassify the severely incapacitating and discrete neurological disorder known as Myalgic Encephalomyelitis as a psychological or ‘personality’ disorder, in order to side-step the financial responsibility of so many new claims (Marshall & Williams 2005a, [Online]). As Professor Malcolm Hooper explains:
In the 1980s in the US (where there is no NHS and most of the costs of health care are borne by insurance companies), the incidence of ME escalated rapidly, so a political decision was taken to rename M.E. as “chronic fatigue syndrome”, the cardinal feature of which was to be chronic or on going “fatigue”, a symptom so universal that any insurance claim based on “tiredness” could be expediently denied. The new case definition bore little relation to M.E.: objections were raised by experienced international clinicians and medical scientists, but all objections were ignored… To the serious disadvantage of patients, these psychiatrists have propagated untruths and falsehoods about the disorder to the medical, legal, insurance and media communities, as well as to government Ministers and to Members of Parliament, resulting in the withdrawal and erosion of both social and financial support [for M.E. patients]. Influenced by these psychiatrists, government bodies around the world have continued to propagate the same falsehoods with the result that patients are left without any hope of understanding or of health service provision or delivery. As a consequence, government funding into the biomedical aspects of the disorder is non-existent. (2003a, [Online]) (2001, [Online])
The psychiatrist Simon Wessely – arguably the most powerful and prolific author of papers which claim that M.E. is merely a psychological problem of 'fatigue' – began his rise to prominence in the UK at the same time the first CFS definition was being created in the USA (1988). Wessely and his like-minded colleagues – a small group made up mostly but not exclusively of psychiatrists (colloquially known as the 'Wessely School') – have gained dominance in the field of M.E. in the UK (and increasingly around the world) by producing vast numbers of papers which purport to be about M.E.
Wessely claims to specialise in M.E. but uses the term interchangeably with chronic fatigue, fatigue or tiredness plus terms such as neurasthenia, CFS and ‘CFS/ME’ (a confusing and misleading term he created himself). He claims that psychiatric states of ongoing fatigue and the distinct neurological disorder M.E. are synonymous. Despite all the existing contradictory evidence, Wessely (and members of the Wessely School) assert that M.E. is a behavioural disorder (with no physical signs of illness or abnormalities on testing) that is perpetuated by ‘aberrant illness beliefs’ and by ‘the misattribution of normal bodily sensations’ and that patients ‘seek and obtain secondary gain by adopting the sick role’ (Hooper & Marshall 2005a, [Online]).
The Wessely School and collaborators have assiduously attempted to obliterate the recorded medical history of Myalgic Encephalomyelitis even though the existing evidence and studies were published in prestigious peer-reviewed journals and span over 70 years. Wessely's claims (and those of his colleagues around the world) have flooded the UK (and worldwide) literature to the extent that medical journals rarely contain any factual and unbiased information on M.E. Thus most clinicians are effectively being deprived of the opportunity to obtain even the most basic facts about the illness.
For at least a decade, serious questions have been raised in international medical journals about possible scientific misconduct and flawed methodology in the work of Wessely and his colleagues. It is only relatively recently, however, that his long-term involvement as medical adviser to – and board member of – a number of commercial bodies with a vested interest in how M.E. is managed has been exposed.
This is the sole reason why the charade that M.E. could be a psychiatric or behavioural 'fatiguing' disorder or even an 'aberrant belief system' continues: not because there is good scientific evidence – or any evidence – for the theory, or because the evidence proving organic causes and effects is lacking – but because such a 'theory' is so financially and politically convenient and profitable on such a large scale to a number of extremely powerful corporations (Hooper et al 2001, [Online]). As Dr Elizabeth Dowsett comments, these ridiculous financially motivated theories bear as much relation to legitimate science 'as Astrology does to Astronomy' (1999b [Online]).
Professor Malcolm Hooper goes on to explain:
Increasingly, it is now "policy-makers" and Government advisers, not experienced clinicians, who determine how a disorder is classified and managed in the NHS: the determination of an illness classification and the provision of policy-driven "management" is a very profitable business. To the detriment of the sick, the deciding factor governing policies on medical research and on the management and treatment of patients is increasingly determined not by medical need but by economic considerations. There is a gross mismatch between the severity and complexity of M.E. and the medical and public perception of the disorder (2003a, [Online]).
Members of the ‘Wessely school’ in the UK including Wessely, Sharpe, Cleare and White, their US counterparts Reeves, Straus etc of the CDC, in Australia Lloyd, Hickie etc and the clinicians of the Nijmegen group in the Netherlands each support a bogus psychiatric or behavioural paradigm of ‘CFS’ and recommend rehabilitation-based approaches such as cognitive behavioural therapy (CBT) and graded exercise therapy (GET) as the most useful interventions for ‘CFS’ patients. It is important to be aware that none of these groups is studying patients with M.E. Each of these groups uses a definition of ‘CFS,’ or has created their own, which does not select those with M.E. but instead selects those with various types of psychiatric and non-psychiatric fatigue. (These inappropriate interventions are at best useless and at worst extremely harmful or fatal for M.E. patients.)
The creation of the bogus disease category ‘CFS’ has undoubtedly been used to impose a false psychiatric paradigm of M.E. by allying it with various unrelated psychiatric fatigue states and post-viral fatigue syndromes (etc) for the benefit of various (proven) financial and political interests. The resulting ‘confusion’ between the distinct neurological disease M.E. and the man-made bogus disease category of ‘CFS’ has caused an overwhelming additional burden of suffering for those who suffer from neurological M.E. and their families. It's a big huge mess, that is for certain - but it is not an accidental mess - that is for certain too (Hyde 2006a, [Online]) (Hooper 2006, [Online]) (Hyde 2003, [Online]) (Hooper 2003a, [Online]) (Dowsett 2001a, [Online]) (Hooper et al. 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]).
What does a diagnosis of ‘Chronic Fatigue Syndrome’ actually mean?
There are now more than nine different definitions of 'CFS.' All that each of these flawed CFS definitions 'defines' is a heterogeneous (mixed) population of people with various misdiagnosed psychiatric and miscellaneous non-psychiatric states which have little in common but the symptom of fatigue. The fact that a person qualifies for a diagnosis of CFS, based on any of the CFS definitions (a) does not mean that the patient has Myalgic Encephalomyelitis, and (b) does not mean that the patient has any other distinct and specific illness named 'CFS.' A diagnosis of CFS – based on any of the CFS definitions – can only ever be a misdiagnosis. All a diagnosis of 'CFS' actually means is that the patient has a gradual onset fatigue syndrome which is usually due to a missed major disease. As Dr Byron Hyde explains, the patient has:
a. Missed cardiac disease, b. Missed malignancy, c. Missed vascular disease, d. Missed brain lesion either of a vascular or space occupying lesion, e. Missed test positive rheumatologic disease, f. Missed test negative rheumatologic disease, g. Missed endocrine disease, h. Missed physiological disease, i. Missed genetic disease, j. Missed chronic infectious disease, k. Missed pharmacological or immunization induced disease, l. Missed social disease, m. Missed drug use disease or habituation, n. Missed dietary dysfunction diseases, o. Missed psychiatric disease (2006, [Online]).
Under the cover of ‘CFS’ certain vested interest groups have assiduously attempted to obliterate recorded medical history of Myalgic Encephalomyelitis; even though the existing evidence has been published in prestigious peer-reviewed journals around the world and spans over 70 years. As M.E. expert Dr Byron Hyde explains:
Do not for one minute believe that CFS is simply another name for Myalgic Encephalomyelitis. It is not. The CDC 1988 definition of CFS describes a non-existing chimera based upon inexperienced individuals who lack any historical knowledge of this disease process. The CDC definition is not a disease process. It is (a) a partial mix of infectious mononucleosis /glandular fever, (b) a mix of some of the least important aspects of M.E. and (c) what amounts to a possibly unintended psychiatric slant to an epidemic and endemic disease process of major importance. Any disease process that has major criteria, of excluding all other disease processes, is simply not a disease at all; it doesn't exist. The CFS definitions were written in such a manner that CFS becomes like a desert mirage: The closer you approach, the faster it disappears (2006, [Online]).
The only way forward for M.E. patients and all of the diverse patient groups commonly misdiagnosed with ‘CFS’ (both of which are denied appropriate support, diagnosis and treatment, and may also be subject to serious medical abuse) is that the bogus disease category of ‘CFS’ must be abandoned. Every patient deserves the best possible opportunity for appropriate treatment for their illness, and for recovery and this process must begin with a correct diagnosis if at all possible. A correct diagnosis is half the battle won (Hyde 2006a, 2006b, [Online]) (Hooper 2006, [Online]) (Hyde 2003, [Online]) (Hooper 2003a, [Online]) (Dowsett 2001a, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Dowsett n.d., [Online]).
What do the terms CFIDS, ME/CFS, CFS/ME, Myalgic Encephalopathy and ME-CFS mean?
When the terms CFS, CFIDS, ME/CFS, CFS/ME, Myalgic Encephalopathy or ME-CFS are used, what is being referred to may be patients with/facts relating to any combination of:
1. Miscellaneous psychological and non-psychological fatigue states (including somatisation disorder)
2. A self-limiting post-viral fatigue state or syndrome (eg. following glandular fever)
3. A mixed bag of unrelated, misdiagnosed illnesses (each of which feature fatigue as well as a number of other common symptoms; poor sleep, headaches, muscle pain etc.) including Lyme disease, multiple sclerosis, Fibromyalgia, athletes' over-training syndrome, depression, burnout, systemic fungal infections (candida) and even various cancers
4. Myalgic Encephalomyelitis patients.
The terminology is often used interchangeably, incorrectly and confusingly. However, the DEFINITIONS of M.E. and CFS are very different and distinct, and it is the definitions of each of these terms which are of primary importance. The distinction must be made between terminology and definitions.
Chronic Fatigue Syndrome is an artificial construct created in the US in 1988 for the benefit of various political and financial vested interest groups. It is a mere diagnosis of exclusion (or wastebasket diagnosis) based on the presence of gradual or acute onset fatigue lasting 6 months. If tests show serious abnormalities, a person no longer qualifies for the diagnosis, as ‘CFS’ is ‘medically unexplained.’ A diagnosis of ‘CFS’ does not mean that a person has any distinct disease (including M.E.). The patient population diagnosed with ‘CFS’ is made up of people with a vast array of unrelated illnesses, or with no detectable illness. According to the latest CDC estimates, 2.54% of the population qualify for a ‘CFS’ (mis)diagnosis. Every diagnosis of ‘CFS’ can only ever be a misdiagnosis.
Myalgic Encephalomyelitis is a systemic neurological disease initiated by a viral infection. M.E. is characterised by (scientifically measurable) damage to the brain, and particularly to the brain stem which results in dysfunctions and damage to almost all vital bodily systems and a loss of normal internal homeostasis. Substantial evidence indicates that M.E. is caused by an enterovirus. The onset of M.E. is always acute and M.E. can be diagnosed within just a few weeks. M.E. is an easily recognisable distinct organic neurological disease which can be verified by objective testing. If all tests are normal, then a diagnosis of M.E. cannot be correct.
M.E. can occur in both epidemic and sporadic forms and can be extremely disabling, or sometimes fatal. M.E. is a chronic/lifelong disease that has existed for centuries. It shares similarities with MS, Lupus and Polio. There are more than 60 different neurological, cognitive, cardiac, metabolic, immunological, and other M.E. symptoms. Fatigue is not a defining nor even essential symptom of M.E. People with M.E. would give anything to be only severely ‘fatigued’ instead of having M.E. Far fewer than 0.5% of the population has the distinct neurological disease known since 1956 as Myalgic Encephalomyelitis.
The only thing that makes any sense is for patients with Myalgic Encephalomyelitis to be studied ONLY under the name Myalgic Encephalomyelitis – and for this term ONLY to be used to refer to a 100% M.E. patient group. The only correct name for this illness – M.E. as per Ramsay/Richardson/Dowsett and Hyde – is Myalgic Encephalomyelitis. M.E. is not synonymous with CFS, nor is it a subgroup of CFS. (There is no such disease/s as 'CFS.') It is also important that the only terms which are used are those which do have an official and correct World Health Organization classification.
There is no such disease/s as ‘CFS’ – the name CFS and the bogus disease category of CFS must be abandoned (along with the use of other vague and misleading umbrella terms such as ‘ME/CFS’ ‘CFS/ME’ 'CFIDS' and 'Myalgic Encephalopathy' and others), for the benefit of all the patient groups involved.
What does the term ICD-CFS mean?
The various definitions of ‘CFS’ do not define M.E. Myalgic Encephalomyelitis is an organic neurological disorder as defined at G.93.3 in the World Health Organization’s International Classification of Diseases (ICD). The definitions of ‘CFS’ do not reflect this. The ‘CFS’ definitions are not ‘watered down’ M.E. definitions, as some claim. They are not definitions of M.E. at all.
However, ever since an outbreak of M.E. in the US was given the label ‘CFS,’ the name/definition ‘CFS’ has prevailed for political reasons. ‘CFS’ is widely though wrongly applied to M.E. as well as to other diseases.
The overwhelming majority of ‘CFS’ research does not involve M.E. patients and is not relevant in any way to M.E. patients. However, a very small amount (a minuscule percentage) of research published under the name ‘CFS’ clearly does involve a significant number of M.E. patients as it details those abnormalities which are unique to M.E. Sometimes the term ‘ICD-CFS’ is used in those studies and articles which, while they use the term ‘CFS,’ do relate to some extent to authentic M.E.
Problems with ‘CFS’ or so-called ‘ICD-CFS’ research
The overwhelming majority of 'CFS' research does not involve M.E. patients and is not relevant in any way to M.E. patients. A small number of 'CFS' studies refer in part to people with M.E. but it may not always be clear which parts refer to M.E. Unless studies are based on an exclusively M.E. patient group, results cannot be interpreted and are meaningless for M.E. While it is important to be aware of the small amount of research findings that do hold some value for M.E. patients, using the term 'ICD-CFS' to refer to this research is misleading and in many ways just as damaging as using terms and concepts like 'ME/CFS' or 'CFS/ME.'
What does define Myalgic Encephalomyelitis? What is its symptomatology?
Myalgic encephalomyelitis is a systemic acutely acquired illness initiated by a virus infection which is characterised by post encephalitic damage to the brain stem; a nerve centre through which many spinal nerve tracts connect with higher centres in the brain in order to control all vital bodily functions – this is always damaged in M.E. (Hence the name Myalgic Encephalomyelitis.) The CNS is diffusely injured at several levels, these include the cortex, the limbic system, the basal ganglia, the hypothalamus and areas of the spinal cord and its appendages. This persisting multilevel central nervous system (CNS) dysfunction is undoubtedly both the chief cause of disability in M.E. and the most critical in the definition of the entire disease process.
Myalgic Encephalomyelitis represents an acute change in the balance of neuropeptide messengers, and due to this, a resulting loss of the ability of the CNS (the brain) to adequately receive, interpret, store and recover information which enables it to control vital body functions (cognitive, hormonal, cardiovascular, autonomic and sensory nerve communication, digestive, visual, auditory, balance etc). It is a loss of normal internal homeostasis. The individual can no longer function systemically within normal limits.
M.E. is primarily neurological, but because the brain controls all vital bodily functions virtually every bodily system can be affected by M.E. Again, although M.E. is primarily neurological it is also known that the vascular and cardiac dysfunctions seen in M.E. are also the cause of many of the symptoms and much of the disability associated with M.E. – and that the well-documented mitochondrial abnormalities present in M.E. significantly contribute to both of these pathologies. There is also multi-system involvement of cardiac and skeletal muscle, liver, lymphoid and endocrine organs in M.E. Some individuals also have damage to skeletal and heart muscle. Thus Myalgic Encephalomyelitis symptoms are manifested by virtually all bodily systems including: cognitive, cardiac, cardiovascular, immunological, endocrinological, respiratory, hormonal, gastrointestinal and musculo-skeletal dysfunctions and damage.
M.E. is an infectious neurological disease and represents a major attack on the central nervous system (CNS) – and an associated injury of the immune system – by the chronic effects of a viral infection. There is also transient and/or permanent damage to many other organs and bodily systems (and so on) in M.E. M.E. affects the body systemically. Even minor levels of physical and cognitive activity, sensory input and orthostatic stress beyond a M.E. patient’s individual post-illness limits causes a worsening of the severity of the illness (and of symptoms) which can persist for days, weeks or months or longer. In addition to the risk of relapse, repeated or severe overexertion can also cause permanent damage (eg. to the heart), disease progression and/or death in M.E.
M.E. is not stable from one hour, day, week or month to the next. It is the combination of the chronicity, the dysfunctions, and the instability, the lack of dependability of these functions, that creates the high level of disability in M.E. It is also worth noting that of the CNS dysfunctions, cognitive dysfunction is one of the most disabling characteristics of M.E. All of this is not simply theory, but is based upon an enormous body of mutually supportive clinical information. These are well-documented, scientifically sound explanations for why patients are bedridden, profoundly intellectually impaired, unable to maintain an upright posture and so on (Chabursky et al. 1992 p. 20) (Hyde 2007, [Online]) (Hyde 2006, [Online]) (Hyde 2003, [Online]) (Dowsett 2001a, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Hyde 1992 pp. x-xxi) (Hyde & Jain 1992 pp. 38 - 43) (Hyde et al. 1992, pp. 25-37) (Dowsett et al. 1990, pp. 285-291) (Ramsay 1986, [Online]) (Dowsett & Ramsay n.d., pp. 81-84) (Richardson n.d., pp. 85-92).
What are some of the symptoms of Myalgic Encephalomyelitis?
More than 64 distinct symptoms have been authentically documented in M.E. At first glance it may seem that every symptom possible is mentioned, but although people with M.E. have a lot of different minor symptoms because of the way the central nervous system (which controls virtually every bodily system) is affected, the major symptoms of M.E. really are quite distinct and almost identical from one patient to the next. (Hooper & Montague 2001a, [Online]) (Hyde 2006, [Online]) Individual symptoms of Myalgic Encephalomyelitis include:
Sore throat, chills, sweats, low body temperature, low grade fever, lymphadenopathy, muscle weakness (or paralysis), muscle pain, muscle twitches or spasms, gelling of the joints, hypoglycaemia, hair loss, nausea, vomiting, vertigo, chest pain, cardiac arrhythmia, resting tachycardia, orthostatic tachycardia, orthostatic fainting or faintness, circulatory problems, ophthalmoplegia, eye pain, photophobia, blurred vision, wavy visual field, and other visual and neurological disturbances, hyperacusis, tinnitus, alcohol intolerance, gastrointestinal and digestive disturbances, allergies and sensitivities to many previously well-tolerated foods, drug sensitivities, stroke-like episodes, nystagmus, difficulty swallowing, weight changes, paresthesias, polyneuropathy, proprioception difficulties, myoclonus, temporal lobe and other types of seizures, an inability to maintain consciousness for more than short periods at a time, confusion, disorientation, spatial disorientation, disequilibrium, breathing difficulties, emotional lability, sleep disorders; sleep paralysis, fragmented sleep, difficulty initiating sleep, lack of deep-stage sleep and/or a disrupted circadian rhythm.
Neurocognitive dysfunction may include cognitive, motor and perceptual disturbances. Cognitive dysfunction may be pronounced and may include; difficulty or an inability to speak (or understand speech), difficulty or an inability to read or write or to do basic mathematics, difficulty with simultaneous processing, poor concentration, difficulty with sequencing and problems with memory including; difficulty making new memories, difficulty recalling formed memories and difficulties with visual and verbal recall (eg. facial agnosia). There is often a marked loss in verbal and performance intelligence quotient (IQ) in M.E. (Bassett 2009, [Online]).
What other features define or characterise Myalgic Encephalomyelitis?
What characterises M.E. every bit as much as the individual neurological, cognitive, cardiac, cardiovascular, immunological, endocrinological, respiratory, hormonal, muscular, gastrointestinal and other symptoms is the way in which people with M.E. respond to physical and cognitive activity, sensory input and orthostatic stress, and so on. In other words, the pattern of symptom exacerbations, relapses and of disease progression.
The way the bodies of people with M.E. react to these activities/stimuli post-illness is unique in a number of ways. Along with a specific type of damage to the brain (the central nervous system) this characteristic is one of the defining features of the illness which must be present for a correct diagnosis of M.E. to be made. The main characteristics of the pattern of symptom exacerbations, relapses and disease progression (and so on) in Myalgic Encephalomyelitis are set out in the full-length version of this text; for that version, and for a full list of references, see: The Ultra-comprehensive Myalgic Encephalomyelitis Symptom List.
What causes Myalgic Encephalomyelitis?
M.E. expert Dr Byron Hyde explains that:
[The] prodromal phase is associated with a short onset or triggering illness. This onset illness usually takes the form of either, or any combination, of the following, (a) an upper respiratory illness, (b) a gastrointestinal upset, (c) vertigo and (d) a moderate to severe meningitic type headache. The usual incubation period of the triggering illness is 4-7 days. The second and third phases of the illness are usually always different in nature from the onset illness and usually become apparent within 1-4 weeks after the onset of the infectious triggering illness (1998 [Online]).
Despite popular opinion (and the vast amount of 'CFS' government and media propaganda), there is no link between contracting M.E. and being a 'perfectionist' or having a 'type A' or over-achiever personality. M.E. also cannot be caused by a period of long-term or intense stress, trauma or abuse in childhood, becoming run-down, working too hard or not eating healthily. Myalgic Encephalomyelitis is not a form of 'burnout' or nervous exhaustion, or the natural result of a body no longer able to cope with long-term stress.
Research also shows that it is simply not possible that M.E. could be caused by the Epstein-Barr virus, any of the herpes viruses (including HHV6), glandular fever/mononucleosis, Cytomegalovirus (CMV), Ross River virus, Q fever, hepatitis, chicken pox, influenza or any of the bacteria which can result in Lyme disease (or other tick-borne bacterial infections). M.E. is also not a form of chemical poisoning.
M.E. is undoubtedly caused by a virus, a virus with an incubation period of 4-7 days. There is also ample evidence that M.E. is caused by the same type of virus that causes polio; an enterovirus (Hyde 2006, [Online]) (Hyde 2007, [Online]) (Hooper 2006, [Online]) (Hooper & Marshall 2005a, [Online]) (Hyde 2003a, [Online]) (Dowsett 2001a, [Online]) (Hooper et al. 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Ryll 1994, [Online]).
What does cause Myalgic Encephalomyelitis? Are there outbreaks of M.E.?
One of the most fundamental facts about M.E. throughout its history is that it occurs in epidemics. There is a history of over sixty recorded outbreaks of the illness going back to 1934 when an epidemic of what seemed at first to be poliomyelitis was reported in Los Angeles. As with many of the other M.E. outbreaks the Los Angeles outbreak occurred during a local polio epidemic.
The presenting illness resembled polio and so for some years the illness was considered to be a variant of polio and classified as ‘Atypical poliomyelitis’ or ‘Non-paralytic polio’ (TCJRME 2007, [Online]) (Hyde 1998, [Online]) (Hyde 2006, [Online]). Many early outbreaks of M.E. were also individually named for their locations and so we also have outbreaks known as Tapanui flu in New Zealand, Akureyri or Icelandic disease in Iceland, Royal Free Disease in the UK, and so on (TCJRME 2007, [Online]) (Hyde 1998, [Online]).
A review of early M.E. outbreaks found that clinical symptoms were consistent in over sixty recorded epidemics spread all over the world (Hyde 1998, [Online]). Despite the different names being used, these were repeated outbreaks of the same illness. It was also confirmed that the epidemic cases of M.E., and the sporadic cases of M.E. each represented the same illness (Hyde 2006, [Online]) (Dowsett 1999a, [Online]).
M.E. is an infectious neurological disease and represents a major attack on the central nervous system (CNS) by the chronic effects of a viral infection. The world’s leading M.E. experts, namely Ramsay, Richardson, Dowsett and Hyde, (and others) have all indicated that M.E. is caused by an enterovirus. The evidence which exists to support the concept of M.E. as an enteroviral disease is compelling (Hyde 2007, [Online]) (Hyde 2006, [Online]). An enterovirus explains the; age variation, sex variation, obvious resistance of some family members to the infection and the effect of physical activity (particularly in the early stages of the illness) in creating more long-term/severe M.E. illness in the host (Hyde & Jain 1992a, p. 40). There is also the evidence that; M.E. epidemics very often followed polio epidemics, M.E. resembles polio at onset, serological studies have shown that communities affected by an outbreak of M.E. were effectively blocked (or immune) from the effects of a subsequent polio outbreak, evidence of enteroviral infection has been found in the brain tissue of M.E. patients at autopsy, and so on (Hyde 2007, [Online]) (Hyde 2006, [Online]) (Hyde 2003, [Online]) (Dowsett 2001a, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Hyde 1992 p. xi) (Hyde & Jain 1992 pp. 38 - 43) (Hyde et al. 1992, pp. 25-37) (Dowsett et al. 1990, pp. 285-291) (Ramsay 1986, [Online]).
The US Centers for Disease Control (CDC) placed 'CFS' on its "Priority One; New and Emerging" list of infectious diseases some years ago; a list that also includes Lyme disease, hepatitis C, and malaria (Gellman & Verillo 1997, p. 19). But no real research into transmissibility (or more importantly on reducing infection rates) has been done by any government on patients with M.E. (or 'CFS') despite ample evidence that this is an infectious disease. There have been many well-documented clusters or outbreaks of the illness, reports of as many as 4.5% of M.E. sufferers contracting the illness immediately after blood transfusions (or after needle-stick injuries involving the blood of M.E. patients), evidence of the disease spreading through casual contact amongst family members and so on (Johnson, 1996) (Carruthers et al. 2003, p.79).
As Dr Elizabeth Dowsett explains: 'The problem we face is that, in spite of overwhelming epidemiological and technical evidence of an infectious cause, the truth is being suppressed by the government and the 'official' M.E. charities as 'too scary' for the general public.' (n.d.a, [Online]) This pretence of ignorance on behalf of governments worldwide has had enormous consequences; only in the UK are people with M.E. specifically banned from donating blood, for example. So it is that the number of people infected with M.E. continues to rise unabated and largely unnoticed by the public (Johnson, 1996).
Is Myalgic Encephalomyelitis difficult to diagnose? What tests can be used to diagnose M.E.?
M.E. is a distinct, recognisable disease entity that is not difficult to diagnose and can in fact be diagnosed relatively early in the course of the disease (within just a few weeks) – providing that the physician has some experience with the illness. There is just no other illness that is even remotely like M.E.
Although there is as yet no single test which can be used to diagnose M.E. there are (as with Lupus and multiple sclerosis and ovarian cancer and many other illnesses) a series of tests which can confirm a suspected M.E. diagnosis. Virtually every M.E. patient will also have various abnormalities visible on physical exam. If all tests are normal, if specific abnormalities are not seen on certain of these tests (eg. brain scans), then a diagnosis of M.E. cannot be correct (Hyde 2007, [Online]) (Hyde 2006, [Online]) (Hooper et al. 2001, [Online]) (Chabursky et al. 1992, p.22). As M.E. expert Dr Byron Hyde explains:
The one essential characteristic of M.E. is acquired CNS dysfunction. A patient with M.E. is a patient whose primary disease is CNS change, and this is measurable. We have excellent tools for measuring these physiological and neuropsychological changes: SPECT, xenon SPECT, PET, and neuropsychological testing (2003, [Online]).
Thus it is these tests which are therefore most critical in the diagnosis of M.E., although various other types of tests are also useful. New TESTABLE definitions such as The Nightingale Definition of M.E. now also make diagnosis easier than ever before; even for those with no experience with the illness (Hyde 2007, [Online]) (Hyde 2006, [Online]) (Hooper & Marshall 2005a, [Online]) (Hyde 2003, [Online]) (Dowsett 2001a, [Online]) (Dowsett 2000, [Online]) (Hyde 1992 p. xi) (Hyde & Jain 1992 pp. 38 - 43) (Hyde et al. 1992, pp. 25-37) (Dowsett et al. 1990, pp. 285-291) (Ramsay 1986, [Online]) (Dowsett n.d., [Online]) (Dowsett & Ramsay n.d., pp. 81-84) (Richardson n.d., pp. 85-92).
How common is Myalgic Encephalomyelitis? Who get M.E. and how?
Although the illness we now know as Myalgic Encephalomyelitis has existed for centuries, for much of that time it was a relatively uncommon disease. Following the mass polio vaccination programs of the 1960s cases of polio were greatly reduced and outbreaks of M.E. seemed to be similarly affected. It wasn’t until the late 1970s that M.E. began its dramatic increase in incidence worldwide. Over 20 years later, M.E. is a worldwide epidemic of devastating proportions. Many people have died from M.E. and there are now many hundreds of thousands of people severely disabled by this epidemic (TCJRME 2007, [Online]) (Hyde 1992, p. xi).
The main period of infectivity of M.E. peaks at the time just before symptoms appear through to the initial acute phase of the illness (which lasts for several months or in some cases years). M.E. appears to be highly infective but also highly selective. The major mode of infectivity is by airborne or respiratory route. Modes of transmission are thought to include: casual contact (respiratory), salivary transmission (eg. kissing), sexual transmission and transmission through blood products (Hyde et al. 1992, pp. 25 - 37). (A recent study of 752 patients found that 4.5% of them – almost one in twenty – had had a blood transfusion days or a week before experiencing acute onset of M.E., for example) (Carruthers et al. 2003, [Online]).
M.E. has a similar strike rate to multiple sclerosis (or possibly somewhat higher), and is estimated to affect roughly 0.2% of the population. Children and teenagers are also susceptible to the illness and children as young as five have been diagnosed with M.E. (M.E. can occur in children younger than five, but this is thought to be rare.) All ages are affected but most commonly sufferers are under 45 at onset. Women are affected around three times as often as men, a ratio common in autoimmune disorders, although in children the sexes seem to be afflicted equally. M.E. affects all races and socio-economic groups and has been diagnosed all over the world. There are more than a million M.E. sufferers worldwide (Hooper et al. 2001 [Online]) (Hyde 1992, pp. x - xxi).
Are there any treatments for Myalgic Encephalomyelitis?
Whilst there is no cure as yet – nor treatments which can dramatically influence the course of the illness, due to the appalling lack of funding for research – intelligent nutritional, pharmaceutical and other interventions can make a significant difference to a patient's life. Appropriate biomedical diagnostic testing should be done as a matter of course (and repeated regularly) to ensure that the aspects of the illness which are able to be treated can be diagnosed, monitored and then treated as appropriate. Testing is also important so that dangerous deficiencies and dysfunctions (which may place the patient at significant risk) are not overlooked (Hooper et al. 2001 [Online]). For information on treatment see: Treating M.E. - The Basics.
What is known about Myalgic Encephalomyelitis so far?
There is an abundance of research which shows that M.E. is an organic illness which can have profound effects on many bodily systems. These are well-documented, scientifically sound explanations for why patients are bedridden, profoundly intellectually impaired, unable to maintain an upright posture and so on. More than a thousand good articles now support the basic premises of M.E. Autopsies have also confirmed such reports of bodily damage and infection (Hooper & Williams 2005a, [Online]).
Many different organic abnormalities have been found in M.E. patients in peer-reviewed research, as patient advocates Margaret Williams and Eileen Marshall have documented.
(Note that this is only a sample of some of the research available, not an exhaustive list.) It is known that Myalgic Encephalomyelitis is:
1. An acute onset (biphasic) epidemic or endemic infectious disease process
2. An autoimmune disease (with similarities to Lupus)
3. An infectious neurological disease, affecting adults and children
4. A disease which involves significant (and at times profound) cognitive impairment/dysfunction
5. A persistent viral infection (most likely due to an enterovirus; the same type of virus which causes poliomyelitis and post-polio syndrome)
6. A diffuse and measurable injury to the vascular system of the central nervous system (the brain)
7. A central nervous system (CNS) disease (with similarities to MS)
8. A variable (but always, serious) diffuse (acquired) brain injury
9. A systemic illness (associated with organ pathology; particularly cardiac)
10. A vascular disease
11. A cardiovascular disease
12. A type of cardiac insufficiency
13. A mitochondrial disease
14. A metabolic disorder
15. A musculo-skeletal disorder
16. A neuroendocrine disease
17. A seizure disorder
18. A sleep disorder
19. A gastrointestinal disorder
20. A respiratory disorder
21. An allergic disorder
22. A pain disorder
23. A life-altering disease
24. A chronic or lifelong disease associated with a high level of disability
25. An unstable disease; from one hour/day/week or month to the next
26. A potentially progressive or fatal disease (Hyde 2007, [Online]) (Hooper et al. 2001, [Online]) (Cheney 2007, [video recording]) (Ramsay 1986, [Online])
Is there a legitimate scientific debate about whether or not M.E. is a ‘real’ medical condition?
Despite popular opinion, there simply is no legitimate scientifically motivated debate about whether M.E. is a 'real' illness or has a biological basis. The psychological or behavioural theories of M.E. are no more scientifically viable than are the theories of a 'flat earth.' They are pure fiction.
Similar Medical Conditions?
There are a number of post-viral fatigue states or syndromes which may follow common infections such as mononucleosis/glandular fever, hepatitis, Q fever, Ross river virus and so on. M.E. is an entirely different condition to these self-limiting fatigue syndromes however (and is not caused by the Epstein Barr virus or any of the herpes or hepatitis viruses). People suffering with any of these post-viral fatigue syndromes do not have M.E.
Myalgic Encephalomyelitis does have some limited similarities – to varying degrees – to illnesses such as multiple sclerosis, Lupus, post-polio syndrome, Gulf War Syndrome and chronic Lyme disease, and others. But this does not mean that they represent the same etiological or pathobiological process. They do not. M.E. is a distinct neurological illness with a distinct; onset, symptoms, aetiology, pathology, response to treatment, long and short term prognosis – and World Health Organization classification (G.93.3) (Hyde 2006, [Online]) (Hyde 2007, [Online]) (Hooper 2006, [Online]) (Hooper & Marshall 2005a, [Online]) (Hyde 2003a, [Online]) (Dowsett 2001a, [Online]) (Hooper et al. 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online])
How well is research into Myalgic Encephalomyelitis research funded by government?
Governments around the world are currently spending $0 a year on M.E. research. Considering the brutal severity of the illness and the vast numbers of patients involved, this is a worldwide disgrace.
Abuse and Myalgic Encephalomyelitis
Two of the most common interventions people with M.E. are recommended to participate in are cognitive behavioural therapy (CBT) and graded exercise therapy (GET).
However, despite the misleading claims to the contrary made by various vested interest groups, no evidence exists which shows that CBT and GET are appropriate, useful or safe treatments for Myalgic Encephalomyelitis patients. Studies by these groups (and others) involving miscellaneous psychiatric and non-psychiatric ‘fatigue’ sufferers, and their positive response to these treatments, have no more relevance to M.E. sufferers than they do to diabetes patients, patients with multiple sclerosis or any other illness. Thus, patients with M.E. are routinely being prescribed these treatments on what amounts to a ‘random’ basis medically.
As (bad) luck would have it, graded exercise programs are probably the single most inappropriate ‘treatment’ that a M.E. sufferer could be recommended to undertake. Permanent damage may be caused, as well as disease progression. Patient accounts of leaving exercise programs much more severely ill than when they began them; wheelchair-bound or bed-bound or needing intensive care or cardiac care units, are common. The damage caused is often severe and either long-term or permanent; thus some patients are still dealing with the effects of inappropriate advice to exercise five or ten or more YEARS afterward and for some patients this damage is permanent. Sudden deaths have also been reported in a small percentage of M.E. patients following exercise. CBT and GET are at best useless and at worst extremely harmful for Myalgic Encephalomyelitis patients. Despite this, people with M.E. are routinely being recommended these ‘treatments’ while also being assured that they are completely safe. These interventions are also not just being offered to M.E. patients solely on a voluntary basis; many have been treated as psychiatric patients against their will (or against the will of the parents of children with M.E.). In some cases it is a condition of receiving medical insurance or government welfare entitlements that M.E. patients first undergo ‘rehabilitation’ such as CBT and GET programs, particularly in the UK.
If a prescription drug had anything like the appalling track record exercise has with people with M.E. (or even a small fraction of it; even 2%) it would be an enormous worldwide scandal. The drug would be immediately banned, there would be some form of inquiry and serious criminal charges may well be laid. Yet the rate of people with M.E. recommended or even forced to exercise continues to rise, and with the full support of government etc. This is despite the fact that legitimate research clearly shows that along with the huge risk involved, it has a ZERO percent chance of providing any benefit to people with authentic M.E. That this can be allowed to go on in such a supposedly enlightened day and age as ours defies belief.
It is also of great concern that so many M.E. patients are ONLY offered ‘treatments’ such as CBT and GET while access to even basic appropriate medical care is withheld. Of the 25% of patients who are severely affected by the illness (and are bed-bound and housebound), the majority have no contact with the health service at all, as they are seldom able to obtain housecalls, for example (Dunn 2005, [Online]). Many sufferers are also refused the basic welfare support to which they are entitled. Thus a significant percentage of very physically ill and vulnerable M.E. patients are simply left to suffer and die at home without any medical care, welfare or social support (Hooper 2003a, [Online]).
Is it only Myalgic Encephalomyelitis patients who are negatively affected by the bogus creation of ‘CFS’?
Other patient groups misdiagnosed as CFS are also denied appropriate diagnosis and treatment and they may also routinely be subjected to inappropriate psychological interventions such as CBT and GET. There are also a variety of negative impacts on doctors and the public (and others) caused by the ‘CFS’ insurance scam. Truly the only groups which gain from the ‘CFS’ confusion are insurance companies and various other organisations and corporations which have a vested financial interest in how these patients are treated, including the government.
How severe is Myalgic Encephalomyelitis?
Although some people do have more moderate versions of the illness, symptoms are extremely severe for at least 25-30% of the people who have M.E., significant numbers of whom are housebound and bedbound.
Dr. Paul Cheney stated before a US FDA Scientific Advisory Committee:
I have evaluated over 2,500 cases. At worst, it is a nightmare of increasing disability with both physical and neurocognitive components. The worst cases have both an MS-like and an AIDS-like clinical appearance. We have lost five cases in the last six months. 80% of cases are unable to work or attend school. We admit regularly to hospital with an inability to care for self. (Hooper et al. 2001 [Online])
Dr Dan Peterson found that: ‘M.E. patients experienced greater "functional severity" than the studied patients with heart disease, virtually all types of cancer, and all other chronic illnesses.’ An unrelated study compared the quality of life of people with various illnesses, including patients undergoing chemotherapy or haemodialysis, as well as those with HIV, liver transplants, coronary artery disease, and other ailments, and again found that M.E. patients scored the lowest. "In other words", said one M.E. expert in a radio interview, “this disease is actually more debilitating than just about any other medical problem in the world” (Munson 2000, p. 4).
In the 1980s Mark Loveless, an infectious disease specialist and head of the AIDS and M.E. Clinic at Oregon Health Sciences University, found that M.E. patients whom he saw had far lower scores on the Karnofsky performance scale than his HIV patients even in the last week of their life. He testified that a M.E. patient, ‘feels effectively the same every day as an AIDS patient feels two weeks before death’ (Hooper & Marshall 2005a, [Online]).
But in M.E., this extremely high level of illness is not short-term – it does not always lead to death – it can instead continue uninterrupted for decades.
Recovery from Myalgic Encephalomyelitis
Myalgic Encephalomyelitis patients who are given advice to rest in the early stages of the illness (and who avoid overexertion thereafter) have repeatedly been shown to have the most positive long-term prognosis. As M.E. expert Dr Melvin Ramsay explains; ‘The degree of physical incapacity varies greatly, but the [level of severity] is directly related to the length of time the patient persists in physical effort after its onset; put in another way, those patients who are given a period of enforced rest from the onset have the best prognosis. Since the limitations which the disease imposes vary considerably from case to case, the responsibility for determining these rests upon the patient. Once these are ascertained the patient is advised to fashion a pattern of living that comes well within them’ (1986, [Online]).
M.E. can be progressive, degenerative (change of tissue to a lower or less functioning form, as in heart failure), chronic, or relapsing and remitting. Some patients experience spontaneous remissions albeit most often at a greatly reduced level of functioning compared to pre-illness and such patients remain susceptible to relapses for the remainder of their lives – M.E. is a chronic/life-long disability where relapse is always possible. Cycles of severe relapse are common, as are further symptoms developing over time. Around 30% of cases are progressive and degenerative and sometimes M.E. is fatal. As Dr Elizabeth Dowsett explains:
After a variable interval, a multi-system syndrome may develop, involving permanent damage to skeletal or cardiac muscle and to other "end organs" such as the liver, pancreas, endocrine glands and lymphoid tissues, signifying the further development of a lengthy chronic, mainly neurological condition with evidence of metabolic dysfunction in the brain stem. Yet, stabilisation, albeit at a low level, can still be achieved by appropriate management and support. The death rate of 10% occurs almost entirely from end-organ damage within this group (mainly from cardiac or pancreatic failure) (2001a, [Online]).
Clearly, many people with M.E. are significantly or severely disabled. But what is so tragic about this high level of suffering is that so much of it is needless. The correct type of support (financial, medical and practical) can do much to prevent the physical, occupational and other deterioration in the quality of life for M.E. patients and can stabilise the illness (Dowsett 2002b, [Online]). Many deaths from M.E. could also have been prevented if only those patients had been given a basic level of support and care made available to patients with illnesses with comparable care needs such as multiple sclerosis and motor neurone disease.
Certain groups and individuals are benefiting enormously from this fraudulent artificial ‘CFS’ construct.
To say that these groups and individuals actually believe what they are saying and that it is based on science or reality is ridiculous. To say that it is merely a misunderstanding or a mistake is also ridiculous. The ‘CFS’ construct is complete fiction, and exists purely because it is so financially and politically beneficial to a number of powerful groups.
The artificial ‘CFS’ construct is no more a scientifically accurate description of M.E. than it is a scientifically accurate description of MS, Lupus or polio. This pretence of ignorance about M.E. and about the reality of ‘CFS’ (particularly by government) has had devastating consequences for people with M.E. – and all those with non-M.E. illnesses who are misdiagnosed as having ‘CFS’ – and has also meant that the number of M.E. sufferers continues to rise unabated and largely unrecognised. The general public worldwide – including sufferers themselves – have been lied to repeatedly about the reality of Myalgic Encephalomyelitis.
The decades of systemic abuse and neglect of the million or more people with M.E. worldwide has to stop. M.E. and CFS are not the same. Concepts such as ‘ME/CFS,’ ‘CFS/ME,’ Myalgic ‘Encephalopathy’ and ‘CFIDS’ are also unhelpful and unscientific and only add to the obfuscation.
‘CFS’ is merely a scam invented by insurance companies motivated by profit without regard for truth or ethics. These groups are acting without any regard for the (extreme) suffering and the additional avoidable deaths they are causing. These groups are acting criminally. This scam is tissue thin and very easily discovered if one merely takes a small amount of time to look at all of the evidence.
Why is almost nobody doing this? Why is the world letting these groups get away with such a heinous scam and such appalling abuse on a massive scale? Why isn’t the world caring enough or smart enough or gutsy enough to see through these slick and well-funded misinformation campaigns, and to act? How can this be, when the lies are so flimsy and scientifically laughable? Have we learned nothing from the devastating corporate cover-ups of the truth about tobacco and asbestos in our recent past? Where is the World Health Organisation? Where are our human rights groups? Where is our media? Where are our uncompromising investigative journalists?
Will it take another 20 years? How much more extreme do the suffering and abuse have to be? How many more hundreds of thousands of children and adults worldwide have to be affected? How many more patients will have to die needlessly before something is finally done? How much longer will we leave the fox in charge of the hen house? It’s beyond sick.
Where do we go from here?
Sub-grouping different types of ’CFS,’ refining the bogus ‘CFS’ definitions further or renaming ‘CFS’ with some variation on the term M.E. would achieve nothing and only create yet more confusion and mistreatment. The problem is not that ‘CFS’ patients are being mistreated as psychiatric patients; some of those patients misdiagnosed with CFS actually do have psychological illnesses. There is no such distinct disease as ‘CFS’ – that is the entire issue, and the vast majority of patients misdiagnosed with CFS do not have M.E. and so have no more right to that term than to ‘cancer’ or ‘diabetes.’ The only way forward, for the benefit of society and every patient group involved, is that:
1. The bogus disease category of ‘CFS’ must be abandoned completely. Patients with fatigue (and other symptoms) caused by a variety of different illnesses need to be diagnosed correctly with these illnesses if they are to have any chance of recovery; not given a meaningless Oxford or Fukuda ‘CFS’ misdiagnosis. Patients with M.E. need this same opportunity. Each of the patient groups involved must again be correctly diagnosed and then treated as appropriate based on legitimate and unbiased science involving the SAME patient group.
2. The name Myalgic Encephalomyelitis must be fully restored (to the exclusion of all others) and the World Health Organization classification of M.E. (as a distinct neurological disease) must be accepted and adhered to in all official documentations and government policy. As Professor Malcolm Hooper explains:
The term myalgic encephalomyelitis was first coined by Ramsay and Richardson and has been included by the World Health Organisation (WHO) in their International Classification of Diseases (ICD) since 1969. The current version, ICD-10, lists M.E. under G.93.3 - neurological conditions. It cannot be emphasised too strongly that this recognition emerged from meticulous clinical observation and examination. (2006, [Online])
3. People with M.E. must immediately stop being treated as if they are mentally ill, or suffer with a behavioural illness, or as if their physical symptoms do not exist or can be improved with ‘positive thinking’ and exercise – or mixed in with various ‘fatigue’ sufferers in any way or patients with any other illness than authentic Myalgic Encephalomyelitis. People with M.E. must also be given access to basic medical care, financial support and other appropriate services (including funding for legitimate M.E. research) on an equal level to what is available for those with comparable illnesses (eg. multiple sclerosis or Lupus). The facts about M.E. must again be taught to medical students, and included in mainstream medical journals, and so on.
What can you do to help?
Unlike people with HIV/AIDS, people with M.E. do not have an initial period of their illness where they are only mildly affected. M.E. is severely disabling even in the first week of illness. People with M.E. are almost all far too ill to stage huge protests, rallies and marches. Many with M.E. cannot even read enough to be able to understand what is happening, or they aren’t even aware that high quality scientific information on M.E. exists. Almost all so-called patient advocacy groups worldwide have sold patients out to the highest bidder and are now actively collaborating with our abusers. These groups are no longer advocates for patients with M.E. – indeed they are working directly AGAINST the interest of people with M.E. (These groups also do not help all those misdiagnosed with ‘CFS’ who do not have M.E.) The media has also sold-out and betrayed M.E. patients. The list goes on.
People with M.E. have only a tiny minority of the medical, scientific, legal and other potentially supporting professions – or the public – on their side. As the Committee for Justice and Recognition of Myalgic Encephalomyelitis explain:
There is no immunity to M.E. The next victim of this horrible disease could be your sister, your friend, your brother, your grandchildren, your neighbour [or] your co-worker. M.E. is an infectious disease that has become a widespread epidemic that is not going away. We must join together, alert the public and demand action (2007, [Online]).
That is what is needed: people from all over the world standing up for Myalgic Encephalomyelitis. We must all stand up for the truth: individual physicians, journalists, politicians, human rights campaigners, patients, families and friends of patients and the public, whether they are affected yet by M.E. or not. That is the only way change will occur: through education and through people simply refusing to accept what is happening any more.
Yes, there are powerful and immensely wealthy vested interest groups out there which will fight the truth every step of the way, but we have science, reality and ethics completely on our side and that is also very powerful. However, for this to be of any use to us, we must first make ourselves aware of the facts and then use them.
So what you can do to help is to PLEASE help to spread the truth about Myalgic Encephalomyelitis and try to expose the lie of ‘CFS.’ You can also help by NOT supporting the bogus concepts of ‘CFS,’ ‘ME/CFS,’ ‘subgroups of ME/CFS,’ ‘CFS/ME,’ ‘CFIDS’ and Myalgic ‘Encephalopathy.’ Do not support groups which promote these concepts. Do not give public or financial help to our abusers.
This appalling abuse and neglect of so many severely ill people on such an industrial scale is truly inhuman and has already gone on for far too long.
People with M.E. desperately need your help.
For more information about the medical and political facts of M.E. see: What is Myalgic Encephalomyelitis? Extra extended version, Who benefits from 'CFS' and 'ME/CFS'?, The misdiagnosis of CFS, Why the bogus disease category of ‘CFS’ must be abandoned, Smoke and mirrors, M.E. The Medical Facts - Extended, The Ultra-comprehensive Myalgic Encephalomyelitis Symptom List, Testing for Myalgic Encephalomyelitis and Putting research and articles into context.
To read a list of all the articles on this site suitable for different groups such as M.E. patients, carers, friends and family, the ‘CFS’ misdiagnosed, doctors or severe M.E. patients and so on, see the Information Guides page.
All of the information concerning Myalgic Encephalomyelitis on this website is fully referenced and has been compiled using the highest quality resources available, produced by the world's leading M.E. experts. More experienced and more knowledgeable M.E. experts than these – Dr Byron Hyde and Dr. Elizabeth Dowsett in particular – do not exist. Between Dr Byron Hyde and Dr. Elizabeth Dowsett, and their mentors the late Dr John Richardson and Dr Melvin Ramsay (respectively), these four doctors have been involved with M.E. research and M.E. patients for well over 100 years collectively, from the 1950s to the present day. Between them they have examined more than 15 000 individual (sporadic and epidemic) M.E. patients, as well as each authoring numerous studies and articles on M.E., and books (or chapters in books) on M.E. Again, more experienced, more knowledgeable and more credible M.E. experts than these simply do not exist.
This paper is merely intended to provide a brief summary of some of the most important facts of M.E. It has been created for the benefit of those people without the time, inclination or ability to read each of these far more detailed and lengthy references created by the world’s leading M.E. experts. The original documents used to create this paper are essential additional reading however for any physician (or anyone else) with a real interest in Myalgic Encephalomyelitis. For more information see the References page.
Before reading this research/advocacy information, please be aware of the following facts:
1. Myalgic Encephalomyelitis and ‘Chronic Fatigue Syndrome’ are not synonymous terms. The overwhelming majority of research on ‘CFS’ or ‘CFIDS’ or ‘ME/CFS’ or ‘CFS/ME’ or ‘ICD-CFS’ does not involve M.E. patients and is not relevant in any way to M.E. patients. If the M.E. community were to reject all ‘CFS’ labelled research as ‘only relating to ‘CFS’ patients’ (including research which describes those abnormalities/characteristics unique to M.E. patients), however, this would seem to support the myth that ‘CFS’ is just a ‘watered down’ definition of M.E. and that M.E. and ‘CFS’ are virtually the same thing and share many characteristics.
A very small number of ‘CFS’ studies/articles and books refer in part to people with M.E., but it may not always be clear which parts refer to M.E. The paper A warning on ‘CFS’ and ‘ME/CFS’ research and advocacy is recommended reading; it includes a checklist to help readers assess the relevance (if any) of individual ‘CFS’ studies to M.E. and explains some of the problems with this heterogeneous and skewed research.
In future, it is essential that M.E. research again be conducted using only M.E. defined patients and using only the term M.E. The bogus, financially-motivated disease category of ‘CFS’ must be abandoned.
The research referred to on this website varies considerably in quality. Some is of a high scientific standard and relates wholly to M.E. and uses the correct terminology. Other studies are included which may only have partial or minor possible relevance to M.E., use unscientific terms/concepts such as ‘CFS,’ ‘ME/CFS,’ ‘CFS/ME,’ ‘CFIDS’ or Myalgic ‘Encephalopathy’ and also include a significant amount of misinformation. Before reading this research it is also essential that the reader be aware of the most commonly used ‘CFS’ propaganda, as explained in A warning on ‘CFS’ and ‘ME/CFS’ research and advocacy and in more detail in Putting Research and Articles on Myalgic Encephalomyelitis into Context.
‘People in positions of power are misusing that power against sick people and are using it to further their own vested interests. No-one in authority is listening, at least not until they themselves or their own family join the ranks of the persecuted, when they too come up against a wall of utter indifference.’ Professor Hooper 2003
‘Do not for one minute believe that CFS is simply another name for Myalgic Encephalomyelitis (M.E.). It is not. The CDC definition is not a disease process. It is (a) a partial mix of infectious mononucleosis /glandular fever, (b) a mix of some of the least important aspects of M.E. and (c) what amounts to a possibly unintended psychiatric slant to an epidemic and endemic disease process of major importance’ Dr Byron Hyde 2006
The term myalgic encephalomyelitis (means muscle pain, my-algic, with inflammation of the brain and spinal cord, encephalo-myel-itis, brain spinal cord inflammation) was first coined by Ramsay and Richardson and has been included by the World Health Organisation (WHO) in their International Classification of Diseases (ICD), since 1969. It cannot be emphasised too strongly that this recognition emerged from meticulous clinical observation and examination. Professor Malcolm Hooper 2006
M.E. is a systemic disease (initiated by a virus infection) with multi-system involvement characterised by central nervous system dysfunction which causes a breakdown in bodily homoeostasis. It has a UNIQUE neuro-hormonal profile. Dr Elizabeth Dowsett
M.E. appears to be in this same family of diseases as paralytic polio and MS. M.E. is less fulminant than MS but more generalized. M.E. is less fulminant but more generalized than poliomyelitis. This relationship of M.E.-like illness to poliomyelitis is not new and is of course the reason that Alexander Gilliam, in his analysis of the Los Angeles County General Hospital M.E. epidemic in 1934, called M.E. atypical poliomyelitis. Dr Byron Hyde 2006
The vested interests of the Insurance companies and their advisers must be totally removed from all aspects of benefit assessments. There must be a proper recognition that these subverted processes have worked greatly to the disadvantage of people suffering from a major organic illness that requires essential support of which the easiest to provide is financial. The poverty and isolation to which many people have been reduced by ME is a scandal and obscenity. Professor Malcolm Hooper 2006
‘Thirty years ago when a patient presented to a hospital clinic with unexplained fatigue, any medical school physician would search for an occult malignancy, cardiac or other organ disease, or chronic infection. The concept that there is an entity called chronic fatigue syndrome has totally altered that essential medical guideline. Patients are now being diagnosed with CFS as though it were a disease. It is not. It is a patchwork of symptoms that could mean anything’ Dr Byron Hyde 2003
Disclaimer: The HFME does not dispense medical advice or recommend treatment, and assumes no responsibility for treatments undertaken by visitors to the site. It is a resource providing information for education, research and advocacy only. Please consult your own health-care provider regarding any medical issues relating to the diagnosis or treatment of any medical condition.
Copyright © by Jodi Bassett January 2009 on http://www.hfme.org/
This version updated May 2009
For more information, and to read a fully-referenced version of this text compiled using information from the world’s leading M.E. experts, please see: What is M.E.? Extra extended version.
Permission is given for this unedited document to be freely redistributed. Please redistribute this text widely.
To download a PDF copy of the super summarised version of the text (and all other super-summaries) in standard format (plus large type format), click here.
To download other papers from this site, see the Document Downloads page.
Permission is given for this document to be freely redistributed by e-mail or in print for any not-for-profit purpose provided that the entire text (including this notice and the author’s attribution) is reproduced in full and without alteration. Please redistribute this text widely | 1 | 55 |
After reading this module, have a look at Module 2.3, Exploiting World Wide Web resources online and offline.
This Web page is designed to be read from the printed page. Use File / Print in your browser to produce a printed copy. After you have digested the contents of the printed copy, come back to the onscreen version to follow up the hyperlinks.
Graham Davies, Editor-in-Chief, ICT4LT Website.
Ros Walker, Freelance Educational Consultant, UK.
Sue Hewer, Freelance Educational Consultant, UK.
In order to keep pace with the rapid developments of Internet technology this module has undergone regular editing and revision by Graham Davies since it was first published, especially Section 12 on Discussion lists, blogs, wikis, social networking and Section 14 on Computer Mediated Communication (CMC).
Graham Davies, Editor-in-Chief of the ICT4LT website, put this question to a group of postgraduate students back in the late 1990s:
"If you were asked to name one single recent development in ICT that has had the most significant impact on your work, what would it be?"
Most of the students answered, as anticipated, "The Internet". Significantly, none of the students was aware that the Internet and the World Wide Web are not synonymous terms. The Web is a subset of the Internet, and none of the students was aware just how recently it came into being, namely 1993. The Internet dates back much further, its forerunner being ARPANET, a US military communications network which was set up in 1969. ARPANET was extended (i.e. as the Internet) in the 1970s to include libraries, educational institutions and businesses, and email began to become used as a means of communication. The first publicly accessible Web browser, known as Mosaic, appeared in 1993, followed by Netscape in 1994. See Section 3, headed Using a browser: navigating the Web.
Graham Davies describes the Internet as follows:
The Internet is a computer network connecting millions of computers all over the world. It provides communications to governments, businesses, universities, schools and homes. Any modern computer can be connected to the Internet using existing communications systems. Schools and universities normally access the Internet via their own educational networks, but private individuals usually have to take out a subscription with an Internet Service Provider (ISP). They can then connect their computer to the Internet via a modem and their local telephone system. Davies (1999)
Nowadays there are many different ways of obtaining a connection to the Internet. If you work in an educational institution you are probably already connected and you should talk to your ICT manager if you require advice and information. If you work from home you should be able to obtain access to broadband, which is a fast connection to the Internet via a standard telephone line. See the Glossary for a definition of broadband, and see Section 1.3.2, Module 1.2 for more information on broadband.
History of the Internet: See A brief history of the Internet by Walt Howe. Karenne Sylvester has produced a useful blog (Kalinago English) on The history and the future of the Internet, which includes an embedded video on the early developments of the Internet and a slide show on the future directions it may take, with suggested ways of exploiting these resources in teaching business English.
Davies (1999) described the World Wide Web as follows:
This is the most powerful and fastest growing Internet service, now known simply as the Web. The Web is accessed by means of a computer program known as a browser. Two popular browsers are Internet Explorer and Netscape, both of which work more or less the same way. Using a browser you can access websites all over the world and download pages of information. Most Web pages include pictures, and many include audio, animated graphics, video and links - known as hyperlinks - to other websites.
The inventor of the Web, Tim Berners-Lee, has a more visionary view:
The dream behind the Web is of a common information space in which we communicate by sharing information. Its universality is essential: the fact that a hypertext link [Editor's Note: My italics - now usually abbreviated to hyperlink] can point to anything, be it personal, local or global, be it draft or highly polished. There was a second part of the dream, too, dependent on the Web being so generally used that it became a realistic mirror (or in fact the primary embodiment) of the ways in which we work and play and socialise. That was that once the state of our interactions was on line, we could then use computers to help us analyse it, make sense of what we are doing, where we individually fit in, and how we can better work together. (Berners-Lee 1998)
The concept of hypertext predates the Web by many years.Vannevar Bush is credited with inventing the concept of hypertext: see his article "As we may think", written as early as 1945, in which he describes an imaginary machine called "Memex" - essentially a hypertext device that takes account of the way the human mind associates ideas and follows a variety of different paths rather than moving on sequentially (Bush 1945). Bush wrote:
[The human mind] operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.
The term hypertext did not, however, appear until the 1960s, when it was coined by Ted Nelson. Hypertext was implemented in HyperCard, a program developed for the Apple Mac in 1987, which is acknowledged as the first successful (offline) hypertext system before the advent of the World Wide Web. Essentially, the Web is hypertext running across the Internet.
2.1.1 Definition of Web 2.0
Contrary to what many people think, Web 2.0 is not a new version of the World Wide Web, which is what the appendage of a version number to a product's name normally implies. The term dates back to 1999 but only gained popularity following the first of a series of Web 2.0 Summit conferences initiated by Tim O'Reilly in 2004 (O'Reilly 2005). Web 2.0 suggested a revival of the Web following the dot-com crash in the early 2000s, which had damaged people's confidence in the Web.
Essentially, the term Web 2.0 is an attempt to redefine what the Web is all about and how it is used. In recent years we have experienced a breathtaking increase in the number of Web-based communities that make use of typical Web 2.0 tools such as discussion lists, blogs, wikis and podcasts, as well as dedicated social networking websites and virtual worlds or MUVEs that promote sharing, collaboration and interaction. In other words, Web 2.0 signifies a more democratic approach to the use of the Web, in which traffic is less likely to be one-way, i.e. from the website to the end-user. Thus more and more websites are emerging that are the result of sharing and collaboration between closed groups of users, e.g. students in a university or college, or by the public at large. Wikipedia is a typical example of collaborative publishing by the public at large. To most newcomers to the Web, Web 2.0 is the Web.
Interestingly, Tim Berners-Lee's concept of the Web as described in 1998 (see citation above) is broadly in line with what many people now associate with Web 2.0. Tim Berners-Lee reiterated this view in an interview conducted in August 2006, when he dismissed Web 2.0 as a "piece of jargon":
Web 1.0 was all about connecting people. It was an interactive space, and I think Web 2.0 is, of course, a piece of jargon, nobody even knows what it means. If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along. And in fact, you know, this Web 2.0 means using the standards which have been produced by all these people working on Web 1.0.
Source: developerWorks Interviews: Tim Berners-Lee, 22 August 2006
Many Web 2.0 applications work rather like the software installed on the hard disc of your desktop computer, like the software that you use for word-processing and other routine tasks. When you click on an icon in your word-processor you expect something to happen without a time delay and you also assume that you can save the documents that you create with your word-processor onto your hard disc and send copies to your friends using email software (see Section 14). You can now do this sort of thing via your Web browser (see Section 3), regardless of where you are located. In the early days of the Web this would not have been possible. Firstly, the software tools were not available. Secondly, long delays were a feature of the early Web. When you clicked on a button on a Web page you could go away and make yourself a cup of coffee before anything happened. Time delays still occur on the Web, of course, but the advent of new Web programming tools such as AJAX (see Glossary) and plug-ins have made it possible to create Web pages that respond more quickly to your requests and incorporate more interactivity and functionality. Google Maps is a typical example of a Web application incorporating AJAX. Scroll around the map and watch it update itself with relatively little time delay.
With the advent of new, so-called Web 2.0 software tools and faster connections to the Internet, you no longer have to rely exclusively on software being installed on your desktop computer. Web 2.0 provides you with a variety of online tools that enable you to produce documents, communicate via email, set up lists of your favourite websites, and organise and store your digital photographs, thus making it possible for you to work away from home and also share what you create with other people, anywhere in the world. Web 2.0 certainly offers a wealth of exciting new developments, but the question arises regarding how and to what extent these developments can contribute to education, especially the teaching and learning of foreign languages. Web 2.0 tools cover a wide variety of applications, some of which are intended for serious work and some of which are just for fun.
See this excellent PowerPoint presentation, The best of CALICO for K 12 teachers, by Lara Lomicka, Gillian Lord, Nike Arnold and Lara Ducate. It looks at a range of Web 2.0 tools, with links to where they may be found, the pros and cons of using them and some imaginative ideas for projects.
2.1.2 Links to further information on Web 2.0
2.1.3 Examples of Web 2.0 applications
The following sub-sections contain examples of and links to Web 2.0 applications that have been found useful by language teachers:
You might also consider looking at DOTS (Developing Online Teaching Skills), a free online course in ICT for language teachers, the result of a project funded by the European Centre for Modern Languages (ECML). The course is delivered in English and in German via Moodle and covers Audacity, Audioconferencing, Blogs, Forums, Moodle, Podcasting, Quizzes, SurveyMonkey, Wikis, and YouTube.
i. Image sharing
If you wish to use Web 2.0 tools for image storage and sharing you also need to know how to use a digital camera, how to store the images on your computer's hard disc and how to edit the images: see the section of Module 2.2 headed Image editing software.
Compfight and Behold are useful tools for finding images on the Web. See also MorgueFile, which offers "Free images for your inspiration, reference and use in your creative work, be it commercial or not!"
ii. Social bookmarking: see Section 5 (below).
iii. Discussion lists, blogs, wikis, social networking: see Section 12 (below).
iv. Chat rooms, MUDs, MOOs and MUVEs (virtual worlds): see Section 14.2 (below).
v. Podcasting: See Section 3.5.2, Module 2.3, headed Podcasting. See also (vi.) Audio tools (below).
If you wish to use Web 2.0 tools for creating podcasts you also need to know how to use digital recording devices and software, how to store the recordings on your computer's hard disc and how to edit the recordings. See the section of Module 2.2 headed Sound recording and editing software. See also Section 3.5, Module 2.3, headed Audio and video.
vi. Audio tools
There is an increasing choice of tools that enable audio recordings to be downloaded from and uploaded to the Web, combined with other media, for example:
VoiceThread: VoiceThread allows you to place collections of media such as images, videos, documents, and presentations at the centre of an asynchronous conversation. A VoiceThread allows people to have conversations and to make comments using any mix of text, a microphone, a webcam, a telephone, or an uploaded audio file. VoiceThread runs inside your Web browser, so there is no software to download, install, or update. See Russell Stannard's Teacher Training Videos website, where you will find his VoiceThread tutorial screencasts.
If you wish to use Web 2.0 audio tools you also need to know how to use digital audio recording devices and software, how to store audio recordings on your computer's hard disc and how to edit the recordings. See:
Section 3.5.2, Module 2.3, headed Podcasting.
vii. Video sharing
Many language teachers make regular use of video sharing websites, which enable them to play and download existing video recordings to the Web or upload their own recordings, for example:
If you wish to use Web 2.0 video tools you also need to know how to use a camcorder or webcam, how to store video recordings on your computer's hard disc and how to edit the recordings. See the section of Module 2.2 headed Video editing software. See also Section 3.5, Module 2.3, headed Audio and video.
viii. Screen capture tools
Snagit: Captures any image from your computer screen and pulls it into the image editor, where you can add text, arrows and effects. The completed Snagit image can then be pasted into emails, documents and presentations, or uploaded to a website.
ix. Animation tools - comic strips, movies, etc
x. Mashups
Mashups are typical manifestations of Web 2.0. The term mashup derives from the practice in music of mixing two or more songs in order to produce a new song, particularly in musical genres such as hip-hop. In the context of Web 2.0, a mashup can be described as a Web page, often assembled by an amateur enthusiast, that brings together data from two or more Web services and combines the data into a new application with added functionality. O'Reilly (2005:4) describes this phenomenon as "innovation in assembly". Flickrvision and Earthalbum are examples of mashups in which Flickr and Google Maps have been combined into new hybrid Web pages.
Essentially, then, a mashup is a way of repurposing existing Web services and requires relatively little Web programming expertise. A directory of mashups can be found on the Programmable Web site.
A mashup could be useful in language teaching and learning. A mashup for students studying a foreign language might consist, for example, of audio or video clips from an online broadcasting service, with transcriptions and annotations, grammar explanations and activities and exercises. Mashups could also be used in constructivist ways. For example, students could demonstrate their understanding of concepts by creating their own mashups.
xi. Document sharing
To what extent is Web 2.0 a break with the past? Web 2.0 is broadly in line with the concept of the Web as defined by its inventor, Tim Berners-Lee, back in 1998 (see citation above), so is it more accurate to say that Web 2.0 is just an example of the continuous development of established technologies - a transition rather than a break with the past?
It has been argued that Web 2.0 is essentially a meaningless term invented by a group of businessmen as a way of convincing the media and investors that something fundamentally new had been created following the crash of the so-called Dot-com bubble. See O'Reilly (2005). What do you think?
See also the related discussion topics in the ICT4LT blog.
When you want to view pages on the World Wide Web, you need a computer program to do it, namely a browser. A browser is a software application that carries your messages to computers all over the world and returns messages to your computer. The most common browser is Internet Explorer, which is bundled with Microsoft Windows, but there are many others, e.g. Firefox, Safari and Google Chrome: see the Wikipedia article List of Web browsers.
Essentially, a browser works as follows:
Some browsers, particularly later versions, have additional features, but the ones listed above are the most important.
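If you are curious about what happens behind the scenes when a browser requests a page, the short Python sketch below performs the same basic step: it sends a request for a Web address and receives the HTML that a browser would then turn into a formatted page. It uses only Python's standard library; the address shown is just an example and any publicly accessible page would do.

```python
# A minimal sketch of the request-and-retrieve step a browser performs.
import urllib.request

url = "http://www.example.com/"   # any publicly accessible page would do

with urllib.request.urlopen(url) as response:
    print(response.status)        # e.g. 200 means "OK, page found"
    html = response.read().decode("utf-8", errors="replace")

print(html[:300])                 # the raw HTML that a browser turns into a formatted page
```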
It is assumed that if you are reading this module you are already familiar with using a browser. There are many useful tutorials on using the Internet, e.g.
Web Literacy: Written by Bernard Moro and located at the website of the Council of Europe's European Centre for Modern Languages.
Virtual Training Suite for Modern Languages: Free online materials to help university students develop their Internet research skills.
Walt Howe's Internet Learning Center, a mine of information about the Internet.
See our "can do" list under the heading Browsers to check your progress: ICT_Can_Do_Lists.
The Web is truly an enormous collection of information: texts, images, audio and video recordings, etc, many of which can be exploited in language teaching. The problem is that this information is somewhat chaotically organised. Bush (1996) summed it up:
As someone once said, the Web is like one great big, wonderful library. You enter the front door, and there are all the books... piled in the middle of the floor!
But there are many tools available that will help you to find what you want. When you need to locate a Web page you may already have its Web address (http://... etc), but if you want to search for something completely new you will need to use a search engine. Google is currently the most popular search engine on the Web: see Section 4.2. And there are many other search engines in a variety of different languages: see Section 4.3.
First and foremost, don't waste time looking for materials that are unlikely to be found on the Web. Living professional authors are usually unwilling to give information away for free. This is why the texts of most modern books cannot be found on the Web, especially those that are still subject to copyright, i.e. where the author has been dead for less than 70 years. Similarly, don't expect to find huge collections of freely downloadable audio and video materials for use with language learners, as copyright on audio and video materials is jealously guarded: see Section 3.5, Module 2.3, headed Audio and video. However, the situation regarding copyright on materials in electronic format has changed considerably in recent years: see our General guidelines on copyright. Sharing materials has become common practice since the advent of Web 2.0, and there are now many sites where you can find materials offered free of charge or buy them at a very low cost: see Section 2.1, headed What is Web 2.0? where you will find references to some of these sites.
When searching, the most important thing is to hit on the keyword or combination of keywords that will bring up the information you are looking for. For example, you may be looking for lyrics of French songs. The keywords are lyrics french songs (note that you do not need to use upper case letters). These three keywords will probably find all the sites that contain these keywords, but not necessarily in that order and french may not be juxtaposed with songs. If you place quotation marks round french and songs - thus "french songs" - then the search engine will try to find sites in which the two words are juxtaposed. If you are looking for something more specific, for example the words of a particular song that was recorded by a particular singer, you can try a search such as "edith piaf" lyrics milord. This should find a site where the complete lyrics of the song Milord, as recorded by Edith Piaf, are listed.
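Behind the scenes, a search like this is simply passed to the search engine as part of a Web address. The small Python sketch below shows how the keywords, including the quotation marks around "french songs", are encoded into a query string. The base address and the q parameter are those commonly used by Google's public search page and are given here purely for illustration.

```python
# Building a search URL by hand: urlencode() takes care of the spaces
# and the quotation marks that keep "french songs" together as a phrase.

from urllib.parse import urlencode

keywords = 'lyrics "french songs"'
query = urlencode({"q": keywords})

print("https://www.google.com/search?" + query)
# https://www.google.com/search?q=lyrics+%22french+songs%22
```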
The tutorial materials listed in Section 3.1 contain advice on searching and search activities. The following guides will also help you learn how to be more successful in your searching.
Nancy Blachman's Google Guide.
The Spider's Apprentice: Monash University's guide to search engines and search techniques, with links to sites that will help you learn to search effectively.
Search Engine Guide: Aimed mainly at small businesses, containing hundreds of articles and reviews and useful tips, such as Search Engine Optimisation (SEO), i.e. how to make your website show up more effectively in Web searches.
Google is a very efficient search engine, and currently the most popular on the Web. Google's UK homepage is at http://www.google.co.uk/, but http://www.google.com/ will also work. Google operates in a wide range of languages and also has a built-in translator, Google Translate.
Google is simple to use and very fast. Try entering your search terms and then clicking on the I'm Feeling Lucky button, which homes in on the site that is most likely to fulfil your needs. You can also search for images and news items in the world's press by clicking on the Images or the News tab above the search box and then entering your search terms. If you click on the Maps tab above the search box you can search for a map showing almost any location anywhere in the world. There are many other useful features of Google, for example:
Type define: immediately in front of a word (or a phrase in inverted commas) and Google will search for definitions of that word, e.g. define:pedagogy or define:"learning outcome" (NB the use of quotation marks when searching for two or more words that are normally linked together).
You can limit general searches as well as searches for news items to specific languages in Google by indicating in which language(s) you wish to search under Google Preferences.
Searching for authentic usage in foreign languages
Let us suppose that you wish to find examples of the phrase "il était une fois" ("once upon a time"). Enter the whole phrase in inverted commas in Google's search box and you will find hundreds of examples of how the phrase is used.
You can use a wildcard (* = the asterisk character) if you are not sure of the spelling of a word or wish to look for two words used together but separated by other letters or words, e.g. a search for ich * habe gesurft (no quotation marks round the phrase) will find ich habe gesurft and ich habe gestern mittag noch normal gesurft - very handy in German when different parts of the verb are separated. Enter the combination ich * habe * Internet * gesurft (no inverted commas round the phrase) and you should find examples such as dann habe ich im Internet nach Rezepten gesurft.
Searching for images in Google
If you are unsure that you have found the right word in a foreign language, try searching for the foreign word by clicking on the Images tab in Google. Seeing a picture of what you are looking for can often confirm that you are on the right track. For example, I was not sure that arboriste in French was the equivalent of tree surgeon, but the images I found clearly indicated that it was the right term, often combined with grimpeur to indicate that it refers to someone who climbs trees and lops off branches. A contributor to a discussion list recently asked if it was correct to say cheveux auburn in French (NB: no "s" on the end of auburn). Indeed it is: using Google's image search facility I found lots of pictures of people with auburn hair and descriptions in French of hair products designed for auburn hair.
Using Google as a concordancer
You can also use Google as a simple concordancer (see Module 2.4 for more information about concordancers) to search for collocations that you are unsure about. Is it possible, for example, to say "a metal wood"? Yes, indeed! Google cites numerous examples. See Robb (2003).
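If you would like to see what a concordancer actually does with a text, the short Python sketch below produces a simple keyword-in-context listing: every occurrence of a phrase is printed with a little context on either side. The sample sentences are invented; in practice you might paste in a story or article that your students are working with.

```python
# A minimal keyword-in-context (KWIC) listing: every occurrence of a
# phrase is shown with some surrounding context. The sample text is
# invented purely for illustration.

import re

text = ("Il était une fois une petite fille qui vivait près de la forêt. "
        "Il était une fois un roi très puissant. On raconte qu'il était une fois "
        "un village au bord de la mer.")

phrase = "il était une fois"

for match in re.finditer(re.escape(phrase), text, flags=re.IGNORECASE):
    start = max(match.start() - 25, 0)
    end = match.end() + 25
    print("...", text[start:end], "...")
```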
Wikipedia: searching for neologisms
Here's a useful trick using a combination of the online encyclopaedia Wikipedia and Google.
Let us suppose that you want to know how to translate and how to use a new word or one that is unlikely to appear in printed bilingual dictionaries, e.g. snowboard, zip wire, quad bike, podcast, wiki.
First, you look up the term in the English-language version of the online encyclopaedia, Wikipedia. For example: Snowboard.
When you find the Wikipedia entry in English click on one of the foreign languages in the languages list in the left-hand column of the screen, e.g. Deutsch. This will take you to the equivalent article in German: Snowboard. This shows that German simply borrows the English term, but the article also shows how the word is used in context and that the noun used in German to describe the sport, namely snowboarding, is (das) Snowboarden.
Let's take another example:
Wikipedia shows that the German for quad bike is (das) Quad. The German-language article on Quad will show you how the word is used in context, but you can go one step further. Set your Google Preferences to indicate that your preferred language is German. You can now search for specific words that might be used in combination with Quad, e.g. by entering Quad fahren or fahre Quad in the search box. A fruitful combination of keywords is likely to be bin * Quad gefahren - i.e. the asterisk being a wildcard standing for anything between bin and Quad gefahren. This should enable you to find Quad used in contexts such as "Ich bin Quad gefahren”, “Ich bin mit einem Quad gefahren”, “Ich bin auf meinem Quad gefahren” etc.
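The same trick can also be carried out programmatically. The sketch below assumes the standard MediaWiki api.php endpoint and its langlinks property (the interlanguage links shown in the left-hand column of an article); treat the parameter names as illustrative rather than definitive. It asks the English Wikipedia for the German equivalent of the article Snowboard.

```python
# A sketch of the Wikipedia trick done via the MediaWiki API, assuming the
# standard api.php endpoint and its "langlinks" property.

import json
import urllib.request
from urllib.parse import urlencode

params = urlencode({
    "action": "query",
    "titles": "Snowboard",     # the English article to start from
    "prop": "langlinks",
    "lllang": "de",            # ask only for the German interlanguage link
    "format": "json",
})
url = "https://en.wikipedia.org/w/api.php?" + params

with urllib.request.urlopen(url) as response:
    data = json.load(response)

for page in data["query"]["pages"].values():
    for link in page.get("langlinks", []):
        print(link["lang"], link["*"])   # expected output along the lines of: de Snowboard
```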
Here are some more search engines:
Most modern search engines can function in a range of languages and allow you to set your language preferences. Here are a few direct links to search engines in foreign languages:
A bookmark is a facility within a browser that enables you to keep a record of Web pages that you have visited and may wish to visit again. Bookmarks are stored in a special folder on your computer. In Internet Explorer bookmarks are known as Favorites (sic - spelt the American way), which is also the name of the folder in which they are stored on your computer.
If you find a useful website, click on Favorites in Internet Explorer on the main menu bar of your browser. This will enable you to add the website's address to your own personal list so that you can locate the website quickly if you want to visit it again. See Section 3, headed Using a browser: navigating the Web.
More ambitious Web users may wish to set up their own annotated set of Web links, also known as a webliography, portal or jump station. See Task 2 on Graham Davies's INSET training materials Web page which explains step-by-step how to do this.
You can also use Web 2.0 tools to store and share your bookmarks at so-called social bookmarking websites:
Delicious: A website where you can store your bookmarks, share your bookmarks with your friends and colleague and find out what other people are bookmarking.
Diigo: A website which allows you to bookmark and tag websites. Diigo also allows users to highlight any part of a Web page and attach post-it notes to the whole page or sections of a page. These notes can be kept private, shared with a group within Diigo or forwarded to an individual. Diigo is an acronym standing for Digest of Internet Information, Groups and Other stuff - pronounced "deego".
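The idea underlying these social bookmarking services is simply that each saved address carries one or more tags, and filtering by tag pulls out related sites. The minimal Python sketch below illustrates this with a handful of invented bookmarks and tags.

```python
# A minimal sketch of tagged bookmarks: each saved address carries a set
# of tags, and filtering by tag retrieves related sites. The addresses and
# tags below are just examples.

bookmarks = [
    {"url": "http://www.ict4lt.org/", "tags": {"ict", "languages", "training"}},
    {"url": "https://www.google.com/", "tags": {"search"}},
    {"url": "https://en.wikipedia.org/", "tags": {"reference", "languages"}},
]

def by_tag(tag):
    """Return every bookmarked address carrying the given tag."""
    return [b["url"] for b in bookmarks if tag in b["tags"]]

print(by_tag("languages"))   # the two language-related sites above
```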
For more information on Web 2.0 see Section 2.1 (above), headed What is Web 2.0?
Lists of useful Web links
There are many excellent collections of links from a variety of sources. As a starting point, see the list of links, headed Useful Web links, in the ICT4LT Resource Centre. See also:
Graham Davies's Favourite Websites: An annotated set of over 500 language-related websites.
This section addresses the key issues that need to be considered when evaluating a website. See also:
The Internet is totally unregulated and whilst this means that there are huge amounts of good materials, it also means that materials of poor and dubious quality also appear on websites. Before using materials with students, it is important to determine certain facts about the site. For example:
Who created the site? What is their background? What credentials do they have? For example, you locate what appears to be a great website, but on closer examination you find it's been created by a 14-year-old schoolboy as a Web design project. We list the names of the original members of the ICT4LT project team, together with their affiliations on the ICT4LT homepage, and at the beginning of each module we provide information on its authors. Remember that anyone can publish anything on the Web and that, unlike books and articles in printed format, Web materials are less likely to be subjected to editorial scrutiny. Accuracy cannot always be guaranteed. You can find out who owns a site by using the Whois Lookup facility.
Who is the site aimed at? The site may sound like it's aimed at schoolchildren but on closer examination it may prove to be suitable only for adult learners. We provide details under the heading Aims of the ICT4LT website on the ICT4LT homepage.
When were the contents written and how regularly is the site updated? Look for evidence of the most recent update. At the bottom of each page of this site we provide details of its revision date.
Is there a contact name or contact address at the site? We use a Feedback Form. If you find a mistake, wish to make a comment, or ask a question you can use the form to contact us. Our Feedback Form helps cut down spam as it makes our email address less obvious to spambots, i.e. programs designed to collect email addresses from the Internet in order to build mailing lists for sending spam. All email sent to us is filtered rigorously.
Is the site easy to access and quick to download? Is the server on which the site is located up to the job of delivering its content at any time? Some servers slow down when lots of people are trying to access the site at peak times, e.g. between 9am and 5pm. Some servers shut down at weekends and during holiday periods.
The site may be huge and labyrinthine and you get hopelessly lost trying to navigate it.
The contents page looks impressive, but most of the site is "under construction" and a lot of internal links don't work.
A plug-in is an extra piece of software that a Web browser needs to run certain elements of a Web page, e.g. animated sequences and audio or video clips. You will find that when you click on an icon that signifies the availability of streaming audio or video material, your browser will link with a plug-in. If the plug-in is not already installed on your computer then you will be able to download it free of charge. Web pages incorporating multimedia often need plug-ins such as Flash Player, QuickTime, Shockwave Player or RealPlayer. If you have problems running animated sequences or video clips check that the relevant plug-in has been downloaded and installed on the computer that you are using.
You find a site that appears to contain French legal texts, but when you access it, it turns out to be full of pornographic pictures. Does this sound far-fetched? No, this actually happened to us when we did one of our regular checks on links that we list at the ICT4LT site. The site's name had been transferred from an institution that provided information on French law to a pornography business. See Graham Davies's Dodgy Links Web page.
If audio materials are offered, are they of adequate quality? Can you play audio materials easily? Do you need a plug-in to play audio materials? See Section 3.5, Module 2.3, headed Audio and video.
If video materials are offered, are they of adequate quality? Can you play video materials easily? Do you need a plug-in to play video materials? See Section 3.5, Module 2.3, headed Audio and video.
If interactive exercises are offered, do they do the job better than paper-based exercises? Consider especially the kind of feedback that they incorporate. Feedback should go beyond the standard "Well done!" and "Sorry, wrong!" types of messages. Feedback should mimic a good teacher offering helpful advice and encouragement. See:
All language learners, especially in the early stages of learning a language, need to know what they sound like. If interactive exercises are offered, do they allow the learner to record and play back his/her own voice? This is not an unreasonable request, as teachers and learners have been making use of listen / respond / playback facilities ever since the advent of the tape recorder. Most multimedia CD-ROMs offer the possibility of recording one's own voice and some incorporate Automatic Speech Recognition (ASR). Very few websites offer this facility and when they do it doesn't work very well. For further information on ASR see:
Contents of this section
There are several ways in which the Web can assist with teaching languages:
There are advantages and disadvantages to using the Web in all the above situations, but most of those who have taken the plunge have not regretted it. Clare Bradin, in her article "The Dark Side of the Web" (Bradin 1997), lists the following advantages to using the Web with students:
See also the paper by Paul Bangs titled "Will the Web catch enough flies? Where Web-based learning cannot yet reach" (Bangs 2001).
However, as with any lesson, a lesson using Web-based material needs to be carefully planned.
Before using the Web live with students:
Do preview and evaluate material carefully. Always revisit websites shortly before each lesson to ensure that links are not broken or - which has happened in a few cases - have been transformed into something undesirable: see Graham Davies's article, Dodgy links.
Don't plan a whole lesson around a single site and make sure that you have a stand-by plan in case the connection is lost for any reason.
Do make sure that all students can access a computer comfortably or think of other ways of working.
There are numerous ways in which materials on the Web can be exploited in language teaching. See Felix (2001), Felix (2003), Gitsaki & Taylor (1999b and 2000), Windeatt et al. (2000).
When downloading or copying materials from another website, it is most important that you pay attention to copyright. Above all, don't assume that just because material is publicly available on the Web you can do whatever you like with it.
Copyright infringement is a growing problem, which we refer to in:
Email: There are a number of important copyright issues surrounding email correspondence. If you send an email to a private person or discussion list, for example, you automatically own the copyright in your email message and you retain your moral right to be identified as the author. Regarding other people's email messages, you should always seek permission (it's only polite, anyway) before passing them on to third parties or copying extracts for publication elsewhere.
See our General guidelines on copyright, which is a general introduction to copyright, drawing on a variety of sources.
Exploiting WWW resources online and offline: Module 2.3 at the ICT4LT site, which follows on from Module 1.5 and contains more information on finding resources on the Web, downloading pages, copying texts and images, Web-based CALL, etc.
Webquests and scavenger hunts are task-oriented activities in which the learner draws on material from different websites in order to achieve a specific goal, e.g. researching a topic and (i) answering a series of questions posed by the teacher, (ii) creating a presentation or (iii) writing an essay, etc. The skills that are required in a webquest or scavenger hunt mainly involve reading and listening, but there may also be communicative speaking exercises.
For further information on webquests see:
For the theoretical underpinnings of webquests see: Koenraad & Westhoff (2004).
A VLE may also be described as a:
Theoretically, there are differences in the way these systems operate, but this may mean little to the non-technical user. See the definitions for the above terms in the Glossary. Many people use Learning Platform as a catch-all term to describe software and systems designed to manage, deliver and provide access to e-learning materials in a distance-learning context.
A VLE is normally protected by passwords that enable teaching staff and enrolled students to access it. Typically, a VLE will contain:
This Wikipedia article, Virtual Learning Environment, goes into more detail about what you can expect from a VLE.
These VLEs are used in education in the UK:
Moodle: Probably the most widely used VLE in the language teaching community and the VLE that is favoured by The Open University, UK (see below). Moodle has its own Moodle for Language Teaching community - log in as a guest or register to join the community. See also Mary Cooch's Blog.
Blackboard: Blackboard incorporates WebCT, following a merger in 2005.
Kaleidos Learning Platform, a VLE produced by RM, UK.
Distance learning courses for language students that make use of the Web are now well established: for example in The Open University (OU) in the UK. See the OU's Web page on What is distance learning? Study materials include printed course books and audio materials that cover survival language for the traveller as well as the communication skills needed in a range of settings, at home, work or leisure. The OU makes use of both online tuition and face-to-face tuition. See:
The Open University has also made some of its language learning materials available via iTunes and is reporting a huge uptake. See Section 5, Module 2.3 on Mobile Assisted Language Learning (MALL).
The Open University has been developing and using conferencing tools within its extensive distance-learning programmes for a number of years. An early example of a conferencing tool used by The OU is FirstClass, which began life as a text-only conferencing system and bulletin board. Then, in 2002, The OU developed its own in-house tool, Lyceum, an audio-graphics tool which included a whiteboard facility combined with audio-conferencing: see Section 7.3, Module 1.4 for further references to Lyceum. More recently The Open University has chosen Moodle for the delivery of a wide range of its courses, making it the largest user of Moodle in the world. Moodle is open source software, which means you are free to download it, use it, modify it and even distribute it. Moodle has its own Moodle for Language Teaching Community - log in as a guest or register to join the community. Listen to the Callspot podcast in which OU lecturers Regine Hampel and Uschi Stickler are interviewed on the topic Distance Language Teaching Online.
Moodleflair is a site which is managed by Jeff Stanford and aimed at language teachers (and anyone else!) who want to play with Moodle. It is not a fully developed site but aims to give an impression of the way in which Moodle works in practice. See also Stanford (2009).
We have added a Moodle "can do" list, compiled by Seth Dickens and updated by Mary Cooch, to our general ICT_Can_Do_Lists.
For further information on VLEs see:
Distance learning of languages has only become feasible since audio and video quality has improved over the Web. Some sites are run for profit and will charge for the services, but others have been set up by enthusiasts keen to pass on their language and culture. The sites vary tremendously in quality and you would be well advised to spend quite some time reviewing materials from these sites before attempting to use them with students. However, there is some very innovative work going on and you may well find some gems: see Felix (1998a), Felix (2001) and Felix (2003), three works that contain a vast collection of information on virtual language learning, the latter two incorporating a number of case studies and articles on good practice. See also Graham Davies's annotated list of Favourite Websites, an extensive list of over 500 websites that is constantly being updated and expanded.
A good deal has already been written on distance learning of languages, e.g. in the form of articles based on conference papers presented at EUROCALL and CALICO conferences and published in ReCALL (published in printed form and online by CUP) and in the CALICO Journal (now published only online). There is also the Language Learning and Technology (LLT) journal (published only online).
Although Web-based language learning has expanded rapidly in recent years there are still limitations to the different kinds of interaction that work successfully on the Web, especially interaction involving prompted speaking activities, which is well established in CD-ROM-based learning. See Section 3.1, Module 2.3, headed Web-based CALL.
Although VLEs have a number of advantages, they are not without their critics. Professor Mark Stiles talks about the Death of the VLE (Stiles 2007). The abstract follows:
The VLE has become almost ubiquitous in both higher and further education, with the market becoming increasingly 'mature'. E-learning is a major plank in both national and institutional strategies. But, is the VLE delivering what is needed in a world where flexibility of learning is paramount, and the lifelong learner is becoming a reality? There are indications that rather than resulting in innovation, the use of VLEs has become fixed in an orthodoxy based on traditional educational approaches. The emergence of new services and tools on the web, developments in interoperability, and changing demands pose significant issues for institutions' e-learning strategy and policy. Whether the VLE can remain the core of e-learning activity needs to be considered.
What do you think? Have a look at the ICT4LT blog under these topic headings:
Death of the VLE? (August 2008) - where Mark Stiles's viewpoint is discussed.
The VLE is dead. Long live the PLE! (July 2009) - where we raise the issue of the Personal Learning Environment (PLE) replacing the VLE. A PLE may also be referred to as a Personal Learning Network (PLN). Such an environment, in contrast to a VLE, is not so much a package or system for delivering learning materials, rather it is an approach to using new technologies in order to enable learners to develop and control their own learning environment. This does not preclude the presence of teachers. Teachers are important for providing support for learners in setting their own learning goals and helping them manage the content and process of learning. The use of social networking tools (see Section 12) and Mobile Assisted Language Learning (MALL) for communication both with teachers and peers are key elements of a PLE. See Section 5, Module 2.3 for further information on MALL.
Do-it-yourself: For information on tools that are used to create distance learning materials see Module 2.5, Introduction to CALL authoring programs.
Copyright: If you upload third-party materials to a VLE make sure that they are not in breach of copyright. Contrary to popular opinion, copyright legislation still applies to password-protected VLEs. See our General guidelines on copyright, especially Section 4.1.
Whilst the Web can provide valuable opportunities and superb resources, there are some potential problems that teachers should be aware of:
When the World Wide Web first appeared in the 1990s it was dubbed the World Wide Wait. Big files took an eternity to download and the wait time could be maddening. Internet access speeds still vary according to the type of Internet connection that you have, how congested the Internet is in general at a particular time of day, how many other people in your neighbourhood are trying to access a website at the same time as you, your computer configuration, and even the weather. But, generally speaking, the speed of Web access has improved enormously. Older dial-up modems using standard telephone lines running at around 56Kbps are now technically obsolete, and faster connections via ADSL broadband or via a dedicated leased line are the norm. See Section 1.3.2, Module 1.2 for further information on modems and broadband, and see the Glossary for an explanation of the terms ADSL, broadband, dial-up modem and leased line. New Web programming techniques have also resulted in more spontaneity and better interaction on the Web: see Section 2.1, headed What is Web 2.0?
The ICT4LT site contains over 1000 links to other sites. Checking these links on a regular basis takes a good deal of time. Up to 5% of the links listed at the ICT4LT site move or disappear each month. This phenomenon is sometimes referred to as linkrot (see Glossary). Linkrot is a growing disease: see Jakob Nielsen, Fighting Linkrot, Alertbox, 14 June 1998. We regularly check the ICT4LT site using the excellent Xenu Link Sleuth program, which is available free of charge. We also mention the topic of linkrot in Section 6.3.3, Module 3.3.
After we have identified dead links with Xenu Link Sleuth, they have to be retraced manually - mainly by backtracking to homepages and using local or global search engines, combined with a bit of intuition. If you come across a dead link at the ICT4LT site please let us know.
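If you maintain a large collection of links yourself, a short script can take some of the drudgery out of this kind of checking. The following sketch, written in Python, illustrates the principle behind tools such as Xenu Link Sleuth: it requests each address in a hypothetical list of URLs and reports any that cannot be reached or that return an error code. It is offered only as an illustration of the idea, not as a replacement for a dedicated link checker.

    # link_check.py - a minimal sketch of a link checker (illustrative only)
    # The URLs listed below are hypothetical examples; substitute your own links.
    import requests

    urls = [
        "http://www.example.com/useful-resources.htm",
        "http://www.example.org/a-page-that-may-have-moved.htm",
    ]

    for url in urls:
        try:
            # A HEAD request keeps the check lightweight; some servers only accept GET
            response = requests.head(url, allow_redirects=True, timeout=10)
            if response.status_code >= 400:
                print(f"Possible dead link ({response.status_code}): {url}")
        except requests.RequestException as error:
            print(f"Could not reach {url}: {error}")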
You may be able to retrieve the contents of a dead link by entering its URL into the Internet Archive (aka the Wayback Machine). This enormous archive keeps records of revisions of websites at various stages in their lives. It is not 100% complete, but we have found it to be remarkably efficient at recovering old documents that we thought had been lost forever.
A further problem that we have identified is that domain names regularly change hands, especially when a site goes dead. Unfortunately, this can lead to so-called cybersquatters (see Glossary) grabbing the name and using it for other purposes, e.g. for a site containing offensive material. We have had two experiences of this, which Graham Davies documents on his Dodgy Links Web page. Our research indicates that this is a growing problem. We check all links when we add them to this site, but constantly checking what they contain is very time-consuming. We apologise for any oversights on our part. You can help by notifying us if you discover any links that contain anything you find offensive.
Felix (2001:353) makes the following important points regarding making use of other people's websites:
Regarding the first of Uschi Felix's points, we expected educational and government sites to be among the most stable. How wrong we were! In terms of stability, these are the worst offenders in our experience. Their webmasters simply cannot resist moving the furniture around every few months. Restructuring is a permanent process, it seems, and very few webmasters in educational institutions and government organisations leave clear indications of how their site has been restructured. We therefore make a special plea to these webmasters: Please leave redirection instructions at the old URLs for a period of at least six months.
Regarding the second of Uschi Felix's points, please make sure you pay attention to copyright. Just because the material is on the Web it doesn't mean that it can be distributed freely to all and sundry. See our General guidelines on copyright.
Regarding the third of Uschi Felix's points: This is where ICT4LT can help!
There is so much information that it may be too time-consuming to find the "good stuff". Even with search engines, it can be hard to find what you want, and you therefore have to select your search terms carefully (see Section 4). As Arthur C. Clarke put it: "Getting information from the Internet is like getting a glass of water from the Niagara Falls."
See Section 6 on Evaluating websites.
When you visit a website you need to know if the information it contains is reliable. This issue has already been raised above in Section 6 under the sub-heading Authorship. For example, consider Wikipedia, which is a free-content encyclopaedia on the Web that anyone can add to or edit - yes, anyone, which is both its strength and its weakness. While Wikipedia covers an enormous range of subjects in different languages there is no guarantee that what you read is accurate as the content can be added to or amended by any member of the public. Furthermore, there is often no indication of who the author is or the author's credentials. On the one hand this can be perceived as a wonderful example of collaborative publishing, but on the other hand it can be perceived as a golden opportunity for the propagation of oddball ideas and self-promotion. Graham Davies checked out the Wikipedia article on Computer Assisted Language Learning in early 2005. It was hopelessly out of date, sketchy and inaccurate, so he amended it. Many more additions and revisions were then made by other contributors, including a major rewrite (which was quite good) in 2007, but after that the article ended up as a complete mess as a result of too many people making amendments that destroyed its structure and presented a completely inaccurate picture of CALL. Graham Davies then decided to rewrite the article from scratch at the end of 2010, posting the final update in early 2011: see the Wikipedia article on CALL and join in the Discussion about the article on CALL.
In its early days Wikipedia was too open to unscrupulous editing by the public at large, but the editing process has since been tightened up and the content of articles meeting certain quality criteria can now be "fixed". While Wikipedia can be a remarkably useful and accurate resource it cannot be relied upon 100% - but nor can most other encyclopaedias. See the Wikipedia article, Reliability of Wikipedia.
To a large extent Wikipedia's reliability depends on the subject matter: for example, articles on history and politics are often subject to wildly varying opinions - and even deliberate vandalism. As a consequence many colleges and universities have banned students from citing Wikipedia as a source in their coursework. The founder of Wikipedia, Jimmy Wales, is on record as saying (in 2005) that this is going too far and that teachers who ban the use of Wikipedia as a source of information are "bad educators". He did, however, go on to say that the website lacked the authority to be used as a citeable source for university students and that students who copied information from Wikipedia "deserved to get an F grade" (Source: BBC News, 7 December 2007: 'should use Wikipedia'.)
Here's a useful tip: If you find an article on Wikipedia in English and then click on one of the language options (headed in other languages) in the left-hand column of the page, you go immediately to an article on the same subject in that language. See Section 12 for more information on wikis.
Make sure that you are adequately protected against invasions by viruses when you surf the Web, as there are new strains of viruses that are able to invade your computer while you are browsing. You should consider installing a firewall, which gives you additional protection against unwanted intruders. Watch out also for spam, adware and spyware. See the Appendix: Viruses.
While you are surfing the Web all kinds of information is being dumped onto your hard disc. For example, a cache area on your hard disc keeps a record of sites that you have recently visited. Cookies store little bits of information about yourself after you have visited a site for the first time, and this can be accessed by the site server when you visit the site again. Caches and cookies take up valuable space on your hard disc drive. A useful piece of software is Webroot's Window Washer, which enables you to remove caches, cookies and other clutter at regular intervals. You can also block cookies - along with those dreadful banner advertisements that slow down your browser - using firewall software. See the Appendix: Viruses.
Unfortunately, the Web is full of websites containing undesirable material, and it is all too easy for young people to access such material, by accident or by design. You should consider installing software that filters out undesirable material. See Graham Davies's Dodgy links Web page.
Web guru Jakob Nielsen writes:
Reading from computer screens is about 25% slower than reading from paper. Even users who don't know this human factors research usually say that they feel unpleasant when reading online text. As a result, people don't want to read a lot of text from computer screens: you should write 50% less text and not just 25% less since it's not only a matter of reading speed but also a matter of feeling good. We also know that users don't like to scroll: one more reason to keep pages short. [...] Because it is so painful to read text on computer screens and because the online experience seems to foster some amount of impatience, users tend not to read streams of text fully. Instead, users scan text and pick out keywords, sentences, and paragraphs of interest while skipping over those parts of the text they care less about. (Source: Be succinct! Writing for the Web, Alertbox, 15 March 1997.)
More recent research by Nielsen, in which the iPad and Kindle were examined, showed that
The iPad measured at 6.2% lower reading speed than the printed book, whereas the Kindle measured at 10.7% slower than print. However, the difference between the two devices was not statistically significant because of the data's fairly high variability. Thus, the only fair conclusion is that we can't say for sure which device offers the fastest reading speed. In any case, the difference would be so small that it wouldn't be a reason to buy one over the other. But we can say that tablets still haven't beaten the printed book: the difference between Kindle and the book was significant at the p<.01 level, and the difference between iPad and the book was marginally significant at p=.06. (Source: iPad and Kindle reading speeds, Alertbox, 2 July 2010.)
See Nielsen's other articles on Writing for the Web.
The Web is unlikely to replace the printed book as a means of presenting large amounts of text. This is not to say that text on the Web is a bad thing. The Web is superb as a means of delivering text that can then be printed. It is also quicker to search the Web for information than visiting your local library, and once you have found a text you want to read you can use your browser to search for keywords within it.
It was interesting to read the story in The Times (29 November 2000, p. 9), headed King leaves Internet readers in suspense. The article claims that Stephen King decided not to complete his online Internet novel The Plant because - according to King - "it failed to grab the attention of readers on the Web". King found that a surprisingly high proportion of the readers accessing his site (75%-80%) made the "honesty payment" for being allowed to download chapters: "But", he said, "there are a lot fewer of them coming. Online people have the attention span of a grasshopper." The article also claims "that digital publishing has a bleak future because it is an unattractive medium for reading long texts and it is difficult to stop breach of copyright". See also Messages from Stephen King.
You should therefore not feel guilty about printing out any of the pages at this site and sitting down in a comfortable armchair in order to read them. It's the sensible thing to do - and better for your eyes. To print a page, just use the File/Print facility in your browser.
Some of the above points were taken from Clare Bradin's FLEAT 97 paper, "The Dark Side of the Web" (Bradin 1997).
See also Section 6, headed Evaluating websites.
See the Glossary of Internet Terms, a comprehensive list of Internet terminology compiled by Matisse Enzer.
See also our own Glossary, which is regularly updated and includes links to sections of the ICT4LT website.
Reading foreign languages on the Web that use fonts other than the standard English-language fonts is no longer a problem. Most modern browsers support a range of fonts and alphabets, including those used in East Asian languages such as Chinese, Japanese and Korean. Microsoft Windows includes settings for a range of languages that just need to be activated by opening the Regional and Language Options in the Control Panel and making the required settings.
If you wish to type in different languages see Section 5, Module 1.3, headed Typing foreign characters.
Contents of this section
Discussion lists are essentially ways of sharing emails with the members of a group of people with a common interest. Many educational discussion lists in the UK are managed by Mailtalk or JISCMail.
Discussion lists are also referred to as forums (also fora, pl.), notice boards and bulletin boards. There may be subtle differences between them in the ways in which they are operated and the ways in which members can post messages to them, but essentially their main aim is to enable people with common interests to share information and to communicate with one another.
Forums may also be set up in the context of a dedicated social network (see also Section 12.4 below), an online community in which information can be exchanged between people having a common interest.
If you are seeking an answer to a specific question about the use of ICT in language learning and teaching you can contact us via our Feedback Form. Alternatively, you may wish to initiate a new topic in the following discussion lists and forums. You may find your question has already been answered in the archives of messages sent in by their members:
IATEFL: The UK-based International Association of Teachers of English as a Foreign Language. IATEFL embraces a Learning Technologies Special Interest Group (LT SIG).
Linguanet Forum: Used mainly by UK teachers in primary and secondary education.
MFL Resources Forum: A forum for teachers of Modern Foreign Languages. Used mainly by UK teachers in primary and secondary education.
In recent years there has been a veritable explosion in the development of weblogs - or blogs for short. The first blogs that appeared took the form of a log, a kind of online diary. Blogs behave in similar ways to discussion lists, except that they often take the form of a journal or a collection of an individual's or group's ideas and thoughts, and they offer an easy facility for uploading new material to the Web. The ICT4LT site has an associated blog, managed by Graham Davies at http://ictforlanguageteachers.blogspot.com. Educational uses of blogs include:
If you wish to create your own blog have a look at these sites. Most blogs enable you to post anything you like: texts, photos, and audio and video files:
Netiquette: If you set up or join a blog make sure that you read the service provider's guide to acceptable practice. See also Section 14.1.4 (below) on Netiquette.
See also the list of Top 25 world languages blogs.
Another way of sharing information on the Web or initiating discussions is to set up a wiki. A wiki is essentially a series of interlinked Web pages that can be edited and added to by a group of people, i.e. an online resource for which content can be created collectively. Its distinguishing feature is that it allows anyone who views the wiki to add to or edit the existing content, but it's possible to set up a closed wiki that is used simply to impart information to its readers. Photographs and video recordings can also be embedded in a wiki. Wiki derives from the Hawaiian "wiki-wiki", meaning "quick".
Wikipedia is the best known example of a wiki, a collaboratively written encyclopaedia. There is an article on Computer Assisted Language Learning in Wikipedia. Other examples of wikis include:
Social networking is a term applied to a type of website where people can seek other people who have similar interests, find out what's going on in their areas of interest, and share information and resources. Social networking is a controversial topic. Critics such as Sherry Turkle (2010) have expressed their misgivings about our reliance on technology for communicating with one another, but these two articles present a more positive view:
Online Education: study shows social networking a boon for education, by Johanna Sorrentino.
Many teachers make use of social networking sites to build up their Personal Learning Network (PLN) or Personal Learning Environment (PLE). A useful introduction to this topic can be found at Chris Smith's website. Chris Smith has also produced an amusing presentation titled Which social network? Graham Davies describes his experience in using social networks in My life online.
These are examples of popular social networking sites:
aPLaNet is a project funded by the European Commission. The project aims to create support and resources which will help foreign language teachers in Europe to understand and use social networks in order to build and expand their Personal Learning Network (PLN) by connecting with educators on Facebook, Twitter & Ning, to continue with their professional development in an autonomous way, and to acquire the skills and digital literacies required in order to use these mediums successfully and productively.
Bebo: Very popular with young people - a social media network that focuses mainly on entertainment.
Classroom 2.0 describes itself as a "social network for those interested in Web 2.0 and social media in education".
Cloudworks: A place to share, find and discuss learning and teaching ideas and experiences. Cloudworks is being developed by the Institute of Educational Technology at The Open University.
Facebook: Facebook is a huge social network with millions of members, and there are many sub-networks based around a workplace, a region, a school, a college, a charity, etc. EUROCALL has a group on Facebook.
IMVU: You download IMVU's software onto your PC and create your own avatars who chat in animated 3D scenes.
MySpace: A social network that focuses mainly on music and entertainment.
LinkedIn: A network that offers facilities for people wishing to stay in touch with their old friends from college or university, former colleagues at work, and people who share their professional interests. There is a substantial CALL community on LinkedIn, including a EUROCALL Group. See also Graham Davies's LinkedIn Profile.
Ning: A platform that enables you to create your own social network. A Ning enables anyone to create a network focusing on a particular topic or catering for a specific membership, for example a group of teachers working together on an educational project. Typically, a Ning includes blogs, announcements of events, a forum, live chat and facilities for uploading photographs and video clips. Examples of educational Nings include the EUROCALL/CALICO Virtual Worlds Special Interest Group, AVALON and NIFLAR. The word "Ning" derives from the Chinese word for "peace".
Wiggio: A new facility (2011) for organising groups.
See this Teachers TV video: Online Communities in the Classroom, in which secondary school French teacher Marie Guyomarc'h, investigates how to make use of online communities in her classes. Online communities and social networks are often shunned by teachers because of negative publicity and online safety surrounding certain websites. Marie meets with Lisa Stevens, a primary school Spanish teacher, who relishes using social media websites for teaching purposes. Lisa explains the benefits of using websites such as Twitter and VoiceThread, and demonstrates how you can use them in the classroom. Later, Marie faces her challenge of taking back what she's learned and using it in the classroom.
Section 14.2, headed Chat rooms, MUDs, MOOs and MUVEs
RSS stands for Really Simple Syndication. Essentially, RSS allows you to see when websites have added new content. RSS can feed you information on new contributions to blogs, wikis and other types of social networking sites as soon as they are published, hence the term RSS feed. Look for the RSS icon on a Web page. This indicates that an RSS feed is available:
If you click on an RSS icon on a Web page you will be given the option of subscribing to its feeds. Feeds can be added to your Favorites list in your browser by using the Add to Favorites option and they will then appear under the Favorites/Feeds tab. A more efficient way of subscribing to RSS feeds is to use Google Reader. Google Reader provides a summary of the sites to which you have subscribed, indicating which of them has added new content, thus saving you time if you subscribe to several different sites as then you don't have to go round each of them to check for new contributions. Google Reader includes a tutorial that explains how to set up and manage your feeds.
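For readers who are comfortable with a little scripting, the short Python sketch below shows just how simple an RSS feed is to read by machine: a feed is no more than a structured file listing a site's latest items, which is why feed readers such as Google Reader can gather new content from many sites so quickly. The feed address used here is a hypothetical example, and the sketch assumes that the freely available feedparser library has been installed.

    # rss_example.py - a minimal sketch of reading an RSS feed (illustrative only)
    # The feed address below is a hypothetical example; replace it with a real feed URL.
    import feedparser

    feed = feedparser.parse("http://www.example.com/blog/rss.xml")

    print("Feed title:", feed.feed.get("title", "unknown"))
    # Show the five most recent items with their links
    for entry in feed.entries[:5]:
        print("-", entry.get("title", "untitled"), "->", entry.get("link", ""))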
A degree of caution is advised when joining any kind of blog, wiki, chat room (See Section 14.2) or social networking site. See:
Creating your own Web pages is fairly straightforward nowadays. Whilst it is possible to develop high level programming skills, it is also now becoming much easier to type a document and convert it ready for the Web. Microsoft Word offers a Save as HTML option, which will create a simple Web page from a normal Word document. If this is an area that particularly interests you, see Module 3.3, Creating a World Wide Web site.
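To give an idea of what lies behind an option such as Save as HTML, the short Python sketch below writes a very simple Web page to a file; the file name and the page text are invented for the purposes of illustration. Opening the resulting file in a browser shows that a basic Web page is nothing more than plain text with a few HTML tags around it.

    # make_page.py - a minimal sketch that writes a very simple Web page (illustrative only)
    # The file name and page contents are hypothetical examples.
    page = """<html>
    <head><title>My first Web page</title></head>
    <body>
    <h1>Bonjour tout le monde!</h1>
    <p>This page was created as a plain HTML file.</p>
    </body>
    </html>
    """

    with open("my_first_page.htm", "w", encoding="utf-8") as output_file:
        output_file.write(page)

    print("Page saved as my_first_page.htm - open it in your browser to view it.")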
It is also possible to create your own interactive exercises on the Web, using a tool such as Hot Potatoes or Quia.
Contents of Section 14
There is no question that the Internet has had a tremendous impact on teaching and learning foreign languages. The term Computer Mediated Communication (CMC) dates back to the early days of computing but more recently it has been associated with the use of a range of tools enabling instant communication via email and Web-based teaching and learning to take place irrespective of time and place. Warschauer (1996a) mentions the following features of CMC:
Computer Mediated Communication allows users to share not only brief messages, but also lengthy (formatted or unformatted) documents - thus facilitating collaborative writing - and also graphics, sounds, and video. Using the World Wide Web (WWW), students can search through millions of files around the world within minutes to locate and access authentic materials (e.g. newspaper and magazine articles, radio broadcasts, short videos, movie reviews, book excerpts) exactly tailored to their own personal interests. They can also use the Web to publish their texts or multimedia materials to share with partner classes or with the general public.
EUROCALL manages a Computer Mediated Communications Special Interest Group (CMC SIG).
Now let's look at some CMC tools in detail and how they are used in teaching and learning foreign languages.
The most stable and long standing of Internet communications media is email. Email is essentially an asynchronous text-based medium which enables anybody with an Internet connection to send messages to one or more people similarly connected. The advantage of asynchronous communications is that the people communicating with one another do not have to be present at the same time - and this is the essential meaning of the term asynchronous.
Email has been widely used by the academic community since the early 1980s and has led more recently to the setting up of asynchronous discussion lists and blogs referred to earlier in this module: see Section 12, headed Discussion lists, blogs, wikis, social networking. It is also possible to send voice messages as email attachments: see the following section on audioconferencing: Section 14.1.2.
14.1.1.1 Email attachments
It is possible not only to exchange messages by email, but to send what are called attachments, which are files containing either text, graphics, audio or video clips, or any combination of these. It has to be remembered, however, that files involving graphics, audio and video are likely to be quite large and may take a comparatively long time to transmit and receive - although this is less of an issue than it used to be now that most schools have a broadband connection to the Internet. Attachments are also prone to contain viruses. Be very careful not to open an attachment that you receive from an unknown source, or with a strange name, as it might contain a virus: see the Appendix: Viruses. When sending an attachment it is common courtesy to accompany it with a plain text message so that the recipient can see that it is a bona fide, "clean" file, e.g.
I'm attaching a report on the conference we attended last week, together with a picture of the two of us that was taken at the conference banquet. The two attachments are named:
14.1.2 Audioconferencing: a synchronous communications medium
Audioconferencing is a typical example of a synchronous communications system, in which the people communicating with one another have to be present (in different locations, of course) at the same time - and this is the essential meaning of the term synchronous. Alongside videoconferencing (see Section 14.1.3 below), audioconferencing is progressing at an impressive rate. There are many software applications that enable audioconferencing via computers, e.g.
Gong offers facilities for voice communication on the Web. It allows groups of people to participate in discussion groups using their computers, using both synchronous (real-time) and asynchronous chat. It is widely used by schools and universities for providing a voice board for teaching purposes. There is also NanoGong, an applet that can be used by someone to record, playback and save their voice in a Web page. When the recording is played back the user can speed up or slow down the sound without changing it. The speeded up or slowed down version of the recorded sound can also be saved to the user's hard disc. In addition, the applet can be used as an integrated component in Moodle, a popular VLE (see Section 8 above).
Learning Times: An online audioconferencing tool.
Schoolshape: This website offers software for setting up an online Language Lab that includes the possibility of creating asynchronous audio and video assignments for students. Registration required.
Skype: This is a free Web telephone service that enables one-to-one audio communication via a computer with anyone in the world, as well as conferencing with more than one person at a time. Skype also offers a cheap pay-as-you-go service that enables you to call landline phones via your computer and also to make video calls.
Ventrilo: An online tool for voice communication.
Vocaroo: A quick online voice recording app where you can record voice messages and afterwards share them with others via email or personal Web page.
Wimba specialises in asynchronous voice technology which enables you, for example, to add voice messages to email and add audio to a website.
See also Godwin-Jones (2005).
14.1.3 Videoconferencing: a synchronous communications medium
Videoconferencing is another typical example of a synchronous communications system, essentially a system for connecting computers that are equipped with video transmission and reception facilities. Like audioconferencing, videoconferencing enables people to communicate in "real time", i.e. people communicating with these packages have to be present (in different locations, of course) at the same time. It is important to distinguish between room-based videoconferencing and desktop videoconferencing.
Room-based videoconferencing is generally organised on a group-to-group basis. In this case, a group sits in front of a large screen where they can view the participants at the other site as well as a smaller image of themselves. It is common to use this form of videoconferencing for distance-learning programmes. In this case the system may use an ISDN (Integrated Services Digital Network) connection or a dedicated leased line (see Glossary) connection to transmit information from one site to another. The quality of the video transmitted in this way is generally better than that offered by desktop videoconferencing systems (see below), although there may be a delay between the transmission of audio and picture with slower ISDN lines (64 Kbps to 128 Kbps), which means that lip movements may not be synchronised with the audio. The set-up and running costs of videoconferencing systems of this type can be quite expensive.
Desktop videoconferencing involves using a standard multimedia computer equipped with a microphone, loudspeakers and a webcam, a type of video camera that sits on top of your computer and links it to the Internet: see Section 1.2.6 Module 1.2 for a picture of a webcam. You also need an appropriate desktop videoconferencing software application (see next section) and a fast broadband connection to the Internet. This is especially suited to one-to-one communication or between small groups. Software applications may allow users to combine the videoconference with a shared whiteboard on their screens, where each participant can write, draw diagrams and make changes to what others have written. If the bandwidth of the Internet is too slow to support good quality interaction, users may opt to freeze the picture image of their partner on the screen and simply use the audio and whiteboard functions. Desktop videoconferencing systems are much cheaper than room-based systems.
Desktop videoconferencing software applications
FlashMeeting, a videoconferencing tool, based on Adobe Flash, developed at The Open University, UK.
SightSpeed: Videoconferencing, video chat and video email.
Communications packages like these are becoming increasingly reliable. They enable groups of people to talk to and even see each other over the Internet and to share text, graphics and audio documents in real time. The costs are therefore relatively modest.
iVisit: a range of tools for videoconferencing.
NetLearn Languages: a business that delivers language training courses live online.
Robert O'Dowd's website: Telecollaboration: developing intercultural language learning through online exchanges
Robert O'Dowd's website: Videoconferencing in foreign language education
It is important to abide by a code of behaviour if you intend to communicate by email via the Internet. Such a code of behaviour is known as netiquette, for example:
Identify the content of your message: Use the subject line of your communications software to indicate clearly what your message is about. Recipients can then choose to delete messages that appear to be irrelevant or uninteresting.
Identify yourself clearly at the end of your message, indicating your institution or business, affiliations and relevant URLs. This is known as your signature.
Be polite - as you would in normal communication.
Don't flame! Flame is a term used to describe language that is rude, sarcastic, condescending or inflammatory (hence "flame"). It is very immature and unprofessional. Bear in mind that even private emails can end up in the wrong hands - and it is possible for them to be intercepted by experts who have the know-how. If you post to a discussion list or blog, a large audience will see your messages, the recipients may keep a copy of your messages, and your messages may also be archived on the Web, e.g. as in the Linguanet Forum. So your words could be stored and be on view to the public for many, many months. There are documented cases of people having been sued for making libellous remarks in blogs. A troll is a person who deliberately starts a flame war in a discussion list or blog by posting provocative or derogatory messages.
Use plain text: Always send your messages as plain (unstyled) text as other people's email systems may not be able to read messages sent, for example, in HTML or RTF format. Make sure you know how to set up your email system to send messages as plain text.
Identify attachments: Don't send unidentified attachments (e.g. Word documents, pictures, etc) to anyone. Always indicate what the attachment contains.
Irony and humour do not always come across in written communication. If you make a remark that is intended to be ironic or humorous, add an emoticon, e.g. a wink or a smiley, to reinforce it, thus: ;-) :-)
Familiarise yourself with some of the common acronyms and abbreviations used in email communication, e.g. IMHO (In My Humble Opinion), BTW (By The Way), FYI (For Your Information), AFAIK (As Far As I Know), IIRC (If I Remember Correctly), LOL (Laughing Out Loud).
Don't type in CAPITALS. This is considered the equivalent to shouting.
Don't use the Out of Office automatic reply facility in your email system, especially when replying to public discussion lists, as this can signal to thieves that you are away from home and you may return to find your house burgled. It is fairly easy to match up a person's name in an Out of Office reply with a publicly accessible address list on the Web.
Make sure your antivirus software is kept up to date, i.e. daily. Email is the commonest way of spreading viruses. See the Appendix: Viruses.
Don't send people warnings about hoax viruses. As a general rule, don't send people warnings about viruses at all until you have checked that the virus is real. See the Appendix: Viruses.
Copyright: There are a number of important copyright issues surrounding email correspondence. If you send an email to a private person or discussion list, for example, you automatically own the copyright in your email message and you retain your moral right to be identified as the author. Regarding other people's email messages, you should always seek permission (it's only polite, anyway) before passing them on to third parties or copying extracts for publication elsewhere.
Discussion lists and blogs: Discussion lists such as those managed by mailing list services, e.g. Mailtalk and JISCMail, and blogs have their own rules and usually contain guides on acceptable practice. Don't send attachments or unsolicited commercial emails to discussion lists and blogs.
There are several useful publications relating to netiquette:
In terms of your own professional development, what kind of benefits do you think might accrue to you through a discussion list or blog which would not have been available to you before the advent of Computer Mediated Communication? See Section 12, headed Discussion lists, blogs, wikis, social networking.
Chat rooms are synchronous communication facilities, offering online environments where people either drop in or arrange to meet at specific times. Most are text-based, where you type in text online that is seen almost immediately by others who are online at the same time and who respond online in real time. Chat rooms involve extensive connect time and, when used for language learning, can put a great deal of pressure on students by requiring them to read fairly rapidly, and also to write fairly rapidly, with little time to reflect on the quality of the language used. Some chat rooms are asynchronous, which means that messages are stored and can be replied to at any time. There are also chat rooms that offer synchronous video chat. See:
TinyChat: live video chat.
Twitter can be considered as a type of asynchronous chat facility: see Section 12.4 (above).
Most VLEs (Virtual Learning Environments) and virtual worlds also offer a text chat facility. See Section 8 (above) and Section 14.2.1 (below).
E-Safety: A degree of caution is advised when joining a chat room or a social networking site. See Section 12.6 (above):
MUD stands for Multi User Domain or Multi User Dungeon. MUDs were originally developed as text-based, role-playing adventure games to be engaged in across computer networks, but they also offer facilities for collaboration and education, including language learning.
MOOs: MUDs were superseded by MOOs. MOO stands for Multi-User-Domain Object Oriented. A MOO is rather like an online computer game for players from all round the world. Players can log into a MOO to communicate with other MOO users either synchronously or asynchronously. MOOs can be described as text-based virtual worlds, some of which are specifically designed for language learning, such as:
MUVEs: MOOs were followed by more elaborate three-dimensional virtual environments, Multi-User Virtual Environments, which are also known as 3D virtual worlds. These are early examples of MUVEs:
Graham Davies has written a brief history of virtual worlds, which also appears in the preface of Molka-Danielsen & Deutschmann (2009) - click here Virtual worlds: a brief history.
14.2.1 Second Life
Second Life is the dominant 3D virtual world (MUVE) on the Web. There are competitors, many of which are listed by ArianeB and Chris Smith, including embedded videos that show how they look, but Second Life continues to flourish, especially among teachers of foreign languages. In Second Life there are thousands of simultaneous users who interact with one another in the guise of a chosen character or avatar. Second Life has parks, shops, schools, museums, islands and beaches, all designed and maintained by the virtual residents. It is also supported by an economy and a virtual currency, the Linden Dollar: L$. The exchange rate is US$1 = L$250. You can buy virtual land, build a virtual house and fill it with virtual furniture. Second Life is a remarkable virtual environment in which you can let your imagination run free. You can create an avatar of yourself in almost any shape or form, dress yourself in virtual clothes and explore the exciting Second Life mini-worlds (simulations or "sims" for short), where you will meet people speaking a variety of different languages. Second Life is ideally suited to the exploratory or constructivist styles of learning. Or you can just have fun: you can take a cable car to a mountain chalet (Figure 3), visit a club or pub and, if you want to spend a romantic evening, you can dance to beautiful music by a waterfall (Figure 4). This section is divided into the following sub-sections:
First, you need to register as a member of Second Life and download a piece of software known as a viewer. A viewer is to Second Life what a browser is to the Web, i.e. it enables you to explore this exciting 3D world in the same way as a browser enables you to explore the Web. Registering as a member of Second Life is free, quick and easy: click on Join Now at http://secondlife.com/
When you register as a member of Second Life you will find that the default choice is now Version 3. Learning how to use a Second Life Viewer will take some time, but it is worth the effort. Graham Davies aims to make your learning curve a little easier with this set of tutorial materials in Word format, Introduction to the Second Life Viewer, which take into account the new Version 3 interface. The materials take you step-by-step through the basics, including a tour of the CALICO/EUROCALL HQ in Second Life. They include many links to other resources on the Web, including YouTube videos. Feedback on the tutorial materials is welcomed.
Graham Davies's tutorial materials in Word format for the much earlier Viewer 1 are still available here: Introduction to Second Life Viewer 1.
There are a number of alternative viewers. See the list of viewers compiled by Chris Smith.
Getting started with Second Life, JISC: The first part of this PDF document briefly covers the basics of Second Life, and the second part focuses on the more advanced skills of building and scripting, designing courses in Second Life, as well as offering useful practical advice on setting up Second Life in an educational institution.
Stoerger S. (2010) Creating a virtual world mindset: a guide for first time Second Life teachers, The Journal of Distance Education 24, 3.
ii. Useful general references to Second Life
Educational uses of Second Life: A YouTube video, giving an overview of how Second Life may be used in education.
The Open University in Virtual Worlds: The Open University in Virtual Worlds Project aims to promote the philosophy, practices and curriculum of The Open University within virtual world environments, using innovative techniques, interdisciplinary strategies and varied pedagogical approaches to enhance lifelong learning through technology.
Real Life or Second Life? An amusing YouTube video showing real people behaving like their Second Life avatars.
Schome: An Open University project, which created Schome Park, an island in Teen Second Life, in order to collect evidence about approaches to supporting teenage learners. SchomeBase is the Schome HQ in Second Life for connecting with adults. See this Teachers TV video: ICT for the non-specialist, which focuses on the use of Second Life by John Hanson School, Hampshire, UK. See also the entry under The British Council (below).
Second Classroom is a project that explores ways in which educators can use immersive media such as 3D virtual worlds and online multiplayer games in learning:
Second Life Education wiki: A wiki edited by Randall Sadler.
SLOODLE is a free and open source project which integrates the multi-user virtual environments of Second Life and/or OpenSim with the Moodle learning-management system.
University of Western Australia: The University of Western Australian has an impressive presence in Second Life. Well worth a look.
Web 2.0 and Language Learning: A YouTube video with a section on Second Life, by Graham Stanley of The British Council.
iii. Language learning and teaching in Second Life
Association for Language Learning (ALL) London: See below under Language associations in Second Life.
AVALON (Access to Virtual and Action Learning live ONline) is a project that was initiated with EU funding in 2009-2010, aiming to explore 3D worlds for language learning. AVALON now embraces the SL Experiments group.
AVATAR (Added Value of teAching in a virTuAl woRld): A two-year project (December 2009 to November 2011), co-financed by the European Commission under the Lifelong Learning Sub-Programme (Comenius). Also has a Facebook Group. See also the YouTube video.
Avatar Languages: Online language courses in English and Spanish in Second Life.
The British Council: The British Council has been teaching English to 13-17 year-olds in its restricted Second Life for Teens location for several years, but since January 2010 it has been possible for 16-17 year-olds to join the main grid of Second Life. 13-15 year-olds are admitted to limited locations, with appropriate controls for administrators. Visit the British Council Isle in Second Life and try your hand at the challenging quests. See Introduction to Second Life and the British Council Isle by Graham Stanley.
CALICO: See below under Language associations in Second Life.
Edunation Islands: An area in Second Life that focuses on the potential for virtual worlds to enhance the language learning process. The EduNation Islands are maintained by a community of educators.
EUROCALL: See below under Language associations in Second Life.
ExamSpeak: A project based at NovaUCD, the Innovation and Technology Transfer Centre at University College Dublin (UCD), and managed by a company named RendezVu. The project uses bots in a tailor-made virtual world to give students practice in speaking skills for the Cambridge Key English Test (KET). See the UCD website for further information. A trial beta test version of ExamSpeak is available here: http://www.examspeak.com/KET
Stefanie Hundsberger's Report (2009): Foreign language learning in Second Life and the implications for resource provision in academic libraries.
LanguageLab: Learn English online in Second Life.
MFL Resources: See below under Language associations in Second Life.
NIFLAR (Networked Interaction in Foreign Language Acquisition and Research): an EU-funded project, which began in January 2009. See also the NIFLAR Ning.
Pegrum's wiki on Web 2.0 in Education - Virtual worlds: Includes references to spaces in Second Life where language students can practise the target language with natives and other learners.
RezEd, Language Learning in Virtual Worlds: a Ning for educators interested in sharing experiences and exploring language learning and teaching in virtual worlds.
Skoolaborate is a global initiative that uses a blend of technologies - including blogs, wikis and virtual worlds - to transform learning. These tools are used to provide engaging collaborative learning experiences for students aged between 13 and 18. The Skoolaborate virtual learning space is secure and only accessible via invitation. Students from schools around the world are invited to participate. Initiated and managed by Westley Field at MLC School Sydney, Skoolaborate now has over 40 schools and organisations from Australia, New Zealand, Taiwan, Japan, Singapore, Chile, Portugal, Canada, the UK and the USA. The South East Grid for Learning (SEGFL), UK, website includes an introductory video on Second Life and Skoolaborate and a video by Helen Myers on the Lingualand project, which integrates SL with SLOODLE.
SLanguages: A wiki on learning languages in virtual worlds.
SL Experiments: A wiki written by Nergiz Kern (Daffodil Fargis in SL) for collecting and sharing ideas on how to use Second Life for teaching foreign languages. See also Nergiz's Teaching in Second Life blog. SL Experiments now has a group within the AVALON project.
TalkAcademy: Part of the non-profit Open Learning Association, based in Vienna, Austria. TalkAcademy maintains an island in Second Life.
TESOL Electronic Village Online (EVO): A professional development project and virtual extension of the TESOL Convention. TESOL EVO offers workshops on teaching in Second Life.
Virtlantis: Formerly referred to as Second Life English, Virtlantis is a non-profit project of the Oxford School for English, a language school located in Göppingen, Germany, which has been in operation since 1965. It is also a collaborative effort which includes volunteer language teachers and language learners from all over the world. The Oxford School has been actively promoting language learning in Second Life via the Second Life English, Virtlantis, and other related projects, since 2006.
Language teachers are discovering a variety of different ways in which Second Life can be used in language learning and teaching, for example:
Scavenger hunts - also known as treasure hunts - are becoming increasingly popular in Second Life. For example, the teacher can ask learners to search for an object that reflects the culture of a specific country, take a snapshot of it and write an accompanying textual description either in their own language or in the language that they are studying: see Section 7.3.1 (above), headed Webquests and scavenger hunts.
Ma Routine: Eleanor Kettley-Tomlinson from Millthorpe School has uploaded this video clip describing "My daily routine" in French, beginning with a cartoon character of a young girl waking up ("Je me réveille"), carrying out a series of daily tasks, and ending with going to bed ("Je me couche"). I am not sure which animation tool was used to create this clip, but something similar could be implemented in Second Life.
The Princess and the Pea: This Machinima video on YouTube is a wonderful illustration of the power of Second Life as a story-telling medium. I hadn't thought about using SL in this way - but it obviously works. Graham Davies's grandchildren love it! More of the same, please - and in foreign languages too!
Task-based learning: It is possible to set up tasks in Second Life that simulate tasks that could be set up in a real classroom. For example, a class of students could be divided into groups, with each group given the task of setting a small dinner table for invited guests. The students pick up items of food from a large central table and transfer (or rather copy) them to each of the guests' dinner tables. In doing so they learn the names of the items of food, how to understand instructions, use of verbs and prepositions of location and placement, etc. At the same time they also learn SL basics such as how to copy and place an item, and use a great deal of language in the process of collaborating with one another.
Learning Spanish: Graham Davies writes: "During the SLanguages 2009 conference in Second Life I took part in an introductory class for learners of Spanish, conducted by Cristina Palomeque (Cristina Papp in Second Life). The class took place in a simulated Spanish city called Ciudad Bonita, where we first learned the names of different items of clothing and then went shopping to "buy" them - free of charge, of course! The class ended with a parade on a catwalk where we showed off our new clothes and other students were asked to describe them. I am using Second Life almost daily in order to brush up my Spanish. I have found some great simulations of Spanish cities, where I often meet Spanish native speakers. I cannot understand 100% of what they write in text chat, but I am using a text chat translator as an aid. It often produces nonsense, but it helps me get the gist of what is going on and I can usually match up the Spanish and English vocabulary items, which is great for reinforcement."
iv. Language conferences in Second Life
Second Life is used regularly for virtual conferences on a wide range of topics. Regular conferences include:
a. SLanguages
The first SLanguages colloquium, SLanguages 2007, took place on 23 June 2007. Figure 1 is a screenshot of the colloquium, which made use of the Ventrilo audioconferencing software as well as standard Second Life text chat. Graham Davies writes:
Speakers' and participants' voices came through very clearly at my end, and the speakers were able to put up PowerPoint slides on a large display at the conference venue, the Glass Pyramid, one of the three EduNation Islands. You couldn't see anyone "for real", of course. Text chat was active throughout the conference - and, because text chat is silent, participants could chat among themselves without disturbing the presenters. In the discussion sessions, participants could use text chat with the presenters or they could illuminate a light bulb on their head to indicate that they wished to speak, and then the chair would call upon them in turn. It worked amazingly well. This approach to conferencing was new at the time, but in the meantime it has become fairly commonplace now that Second Life has introduced its own voice chat facility. I use voice chat regularly in Second Life to run online courses and communicate with colleagues all over the world. I recently gave a talk to cancer sufferers and carers in the HQ of the American Cancer Society in Second Life.
SLanguages 2008 took place on 23-24 May 2008.
SLanguages 2009 took place on 8-9 May 2009. Graham Davies writes:
This was undoubtedly the best online conference that I have ever attended. I learned an enormous amount about teaching foreign languages in virtual worlds, and I even took part in a lesson for beginners in Spanish. The conference ran for 24 hours from Friday 8 May to Saturday 9 May, with many of the 39 presentations being repeated so that people in different time zones could attend them without having to stay up all night. A total of 359 participants took part in the conference, with a peak of 91 in attendance concurrently on Friday evening, 8 May.
SLanguages 2010 took place on 15-16 October 2010. Graham Davies writes:
This was an excellent conference. I was invited to play the role of Eckart in a performance (in German, of course) of Brecht's "Baal" in a simulation of a 1920s Berlin Theatre. I learned a lot about the Dogme approach to language teaching from Scott Thornbury, attended a beginners course in Modern Greek and I was the key speaker in the closing ceremony.
SLanguages 2011 took place on 16-18 September 2011.
b. Webheads in Action
The Webheads in Action Online Convergence (WIAOC) conferences also make use of Second Life. Webheads describes itself as "An online community of practice of teachers and educators, practising peace and professional development through Web 2.0 and computer mediated communication".
c. Virtual Round Table
The Virtual Round Table conference is a semi-annual live online conference on language learning technologies. A substantial part of the conference takes place in Second Life.
v. Using Second Life as an alternative to videoconferencing
Interestingly, many businesses are moving away from videoconferencing and are running their meetings in virtual worlds such as Second Life. This is a far cheaper option and apparently much liked by businessmen and businesswomen who don't have to dress smartly and worry about their appearance, i.e. they only have to dress their avatars, and if the meeting gets boring they can slip out for a coffee, leaving their avatar in place!
Graham Davies writes:
I attend virtual meetings regularly in Second Life in a variety of locations. The CALICO/EUROCALL HQ building is set up so that it can accommodate group meetings of up to a dozen people, complete with access to presentation screens on which I can project PowerPoint slides, photographs and other images, and motion video. I can also engage in text chat with the group or with individuals, send notecards containing textual information and call up Web pages, all within the virtual meeting rooms.
vi. Language associations in Second Life
These three professional associations have bases in Second Life:
a. Association for Language Learning (ALL London) and MFL Resources
The London branch of the Association for Language Learning (ALL London) and MFL Resources have a joint base in Second Life.
CALICO is the leading North American professional association dedicated to the promulgation of innovative research, development and practice relating to the use of technologies for language learning. CALICO and EUROCALL are affiliated associations that work together in a number of different ways. Their members share mutual benefits. See:
CALICO Virtual Worlds Special Interest Group (VW SIG): This is the website of the original CALICO VW SIG. CALICO and EUROCALL have now joined forces and set up a Joint Virtual Worlds Special Interest Group.
CALICO 2009 Workshop on Virtual Worlds and Language Teaching: Now a bit dated, but contains interesting information on the history of virtual worlds, going back to British Legends, SchMOOze University, Quantum Link, Habitat, Active Worlds and There. See also Davies (2009a) on the history of virtual worlds.
CALICO and EUROCALL have a Joint Headquarters on EduNation III Island in Second Life, which is maintained by Randall Sadler (Randall Renoir in SL) and Graham Davies (Groovy Winkler in SL). See Figure 2 and Figure 5.
There is a CALICO Group that you can join in Second Life. Use the SL search facility to find it.
EUROCALL is Europe's leading professional association dedicated to the promulgation of innovative research, development and practice relating to the use of technologies for language learning. EUROCALL and CALICO are affiliated associations that work together in a number of different ways. Their members share mutual benefits.
EUROCALL and CALICO have a Joint Headquarters on EduNation III Island in Second Life, which is maintained by Graham Davies (Groovy Winkler in SL) and Randall Sadler (Randall Renoir in SL). See Figure 2 and Figure 5.
EUROCALL and CALICO have also set up a Joint Virtual Worlds Special Interest Group.
There is also a EUROCALL Group that you can join in Second Life. Use the SL search facility to find it.
See Nergiz Kern's Interview with Graham Davies (July 2009) about EUROCALL's and CALICO's activities in Second Life.
vii. Further reading
viii. Second Life screenshots
Figure 1: The first SLanguages Colloquium, June 2007
Figure 2: The CALICO/EUROCALL HQ, interior view, upper floor
Figure 3: A mountain chalet
Figure 4: Dancing by a waterfall
Figure 5: The CALICO/EUROCALL HQ, exterior view
ix. Second Life videos
This YouTube video, Tour of the EUROCALL HQ Building in Second Life by Graham Davies, shows the old EUROCALL HQ. A video showing the new joint EUROCALL/CALICO HQ is in production.
Holodecks are a fascinating feature of Second Life. What is a holodeck? The term derives from the Star Trek TV series and feature films, in which the holodeck is depicted as an enclosed room where realistic simulations can be created both for training and for entertainment. Holodecks in Second Life fulfill more or less the same functions. Think of them as mini-simulations within the Second Life virtual world simulation as a whole. Holodecks offer exciting possibilities of calling up a range of instantly available simulations that can be used for entertainment, presentations, conferencing and, of course, teaching and learning. See this YouTube video, Holodecks at the CALICO/EUROCALL HQ in Second Life by Graham Davies, which shows how holodecks work. It was captured on the old CALICO plot in Second Life. A new holodeck platform can be found at this location on EduNation III Island: http://maps.secondlife.com/secondlife/EduNation%20III/128/129/3500
A new feature of Second Life is Shared Media. The latest Viewer includes a feature that teachers have long been waiting for, namely the ability to display a live Web page on any surface in Second Life, for example on a large screen, on the faces of a cube, or even on a sphere. The Web page then behaves as it would in a normal browser: links are clickable, pages can be scrolled, and it is possible to log on to Ning, Twitter, Flickr, etc. Collaborative writing tasks are possible, and YouTube videos can also be displayed. This is a powerful new feature which aims to make sharing standard Web-based media in Second Life easy and seamless. You can even conduct an Adobe Connect Pro meeting, for example, by placing the meeting page on a screen in SL. See this YouTube video in which Graham Davies demonstrates Shared Media on the old EUROCALL/CALICO plot in Second Life. A Shared Media screen can be found on the upper floor of the new EUROCALL/CALICO HQ: see Figure 2.
Have a look at the ICT4LT Blog thread headed Second Life videos (May 2010). Feedback is welcomed.
A popular classroom application of email involves a group of younger learners in the UK sending a message to groups of students in schools in a number of different French-speaking countries, asking them to price a virtual shopping basket. The replies provide the basis for practice in comparatives, while the initial request gives the learners an opportunity to make "real" use of previously learned vocabulary items and to formulate appropriate questions. If they already have established and reliable links and request a prompt reply, they will probably get an answer within 24 hours. If schools do not have existing links but want to get specific information in this way on an ad hoc basis, they can find email addresses of possible partner schools at the websites listed below in Section 14.8. In the case of the shopping basket, it would be particularly appropriate to select schools in countries which have different standards of living.
14.3.1 Discussion topic
Can you think of a small scale activity like this that you could do with a specific class that you teach? What kind of learning outcomes would you anticipate? Would they be worth the time involved in setting it up?
14.4.1 Get to know your email package
In order to use email, you need an Internet connection and an email package. Microsoft Outlook Express is an email package that is bundled with Microsoft Windows, but there are other packages such as Eudora. If you are considering using email as a teaching and learning tool, you will find that time spent on investigating all the facilities provided by the package to which you have access is time well spent.
14.4.2 Characteristics of email as a communications medium
The three most important characteristics of email for the language teacher are its one-to-many capability, the flexibility of the text right up until it is sent, and its asynchronous nature. Each is discussed in the sections that follow.
14.4.3 One-to-one and one-to-many
If you are already an email user you know that it is just as quick and easy to send a message to 50 people as it is to one. As you will realise from the example given above, this facility removes the need to establish and maintain a relationship with a single school, which can be difficult, whether through email alone or a combination of traditional and electronic communications. It also means that, with access to schools worldwide, it is possible to "visit" different countries according to the topic being studied. It is also possible to work with schools which share the same target language. One-to-one links tend to be between schools which teach each other's mother tongue. This can lead to difficulties about which language students should use when generating messages. Both schools can be guaranteed authentic incoming language and it may be decided that all writing should be done in the students' mother tongue. If you prefer your students to compose in the target language, the one-to-one facility is a useful alternative for one-off activities.
One of the difficulties of maintaining traditional school links lay in the need for students to copy out letters to send which had been drafted in rough before being copied up neatly. Text generated in a word-processor or in an email package is flexible until the Send button is pressed. This brings with it three important benefits for the language learner:
14.5.1 Discussion topic
Think of an activity based on this cycle with both a thematic and grammatical content that you could use profitably with one of your classes. In what ways would you expect their knowledge of language and reading and writing skills to have improved? How does email affect the way learners write in a foreign language? See Biesenbach-Lucas S. & Weasenforth D. (2001). How could you integrate the activity with other activities undertaken over the same period of time as the email activity to give practice in speaking skills?
As indicated in Section 14.1.1, email is an asynchronous communications medium. This means that messages can be read and responded to at a time convenient to the user. This is a huge benefit in terms of timetable management. Incoming text does not have to be read instantly, on-screen. It can be saved, printed off to provide single or multiple copies, mulled over and worked on to get at the meaning. If the content is a response to a request for information from a group of students, it is likely to contain things of interest to them, in the language of their contemporaries. You will be surprised at their willingness to tackle quite difficult language when they really want to know what it means!
You might have been put off the use of email with your students for a number of reasons including the following:
In the bad old days teachers and home users had to use a dial-up modem, which connected computers to the Internet via a standard telephone line. Typically a dial-up modem connects to the Internet at a very slow data transmission speed of only 56Kbps, and the quality of the connection is often poor. Now times have changed. Faster ADSL broadband connections are now widely available via standard telephone lines, reducing costs and improving connection speeds. See Section 1.3.2, Module 1.2 for more information on ADSL broadband.
You probably have at least one classroom in your school with one or more computers linked to your school's network and with a fast Internet connection. If that is the case, your students will be able to download incoming messages and upload outgoing ones for you, on the strict understanding that that is all they do! If you only have a stand-alone machine in your room, you can still engage in email activities. Your students can prepare messages, however small, in a word-processor and save them on a memory stick (see Glossary). You and/or they can then go to a machine which is on the Internet and has an email package installed.
When very few schools were on the Internet and even fewer had email addresses, it was very difficult to find anyone to exchange messages with, and it took a long time for messages to be exchanged. That has all changed now and many schools already have e-partners or are seeking partners with whom they can exchange messages via the Internet. For schools which have long standing links with partner schools, email is often used for the administration of exchange visits. Anyone who has arranged such a visit will know the frustration of never being free at the same time as the colleague in the other school and constantly missing phone calls. With email, both partners read messages and respond at times convenient to themselves. For most people there is an expectation that correspondents will reply rapidly to emails because of the speed of the medium itself.
If you are new to the use of email for curricular purposes it is worth considering alternative strategies before committing yourself to what might turn out to be a potential failure. The traditional model of establishing a link with an exchange school works for some schools and not for others - for a whole range of reasons. It was the only sensible model when letters had to be handwritten and sent by traditional mail, unless students were to be involved in excessive copy writing. When it works well, it brings great benefits to students and staff alike. Where such links exist, the added benefits of the use of email within the curriculum, as well as for the administration of exchanges, should be exploited. The fact that teachers in both of the schools concerned already know each other and are likely to have a shared understanding about the schemes of work followed and the levels of achievement of the two sets of students will facilitate the use of the medium and greatly enhance its potential to improve standards.
If you do not have an existing link, but would like to be in contact with schools in countries where the target language is spoken, have a look at the sites listed below.
It is also possible, as suggested earlier, to set up ad hoc links for specific purposes which might, for example, involve a one-off questionnaire sent to a number of schools, in order to gather data, say, for the compilation of a database about the leisure interests of the 13-18 age group in Europe. It would also be possible for a number of schools to agree to work together over a longer period of time to undertake a project such as the investigation of the views of the students on a wide range of topics.
14.8.1 Discussion topic
Is it better to:
Taking the process of an email link one stage further, it may be worth considering tandem learning, also known as buddy learning. This form of learning involves two people with different native languages working together as a pair in order to help one another to improve their language skills and to learn more about one another's character and culture. Each partner helps the other through explanations in the foreign language, through comparisons, etc. As this form of learning is based on communication between members of different language communities and cultures, it also facilitates intercultural learning. Tandem learning partners have the opportunity to give each other help through friendly corrections, advice, questions etc. Tandem learning is underpinned by principles of reciprocity - both partners benefit equally from the exchange, and each partner is responsible for their own language learning, establishing learning goals and deciding on methods and materials.
Tandem learning has been used successfully for many years. It was pioneered at the University of Sheffield, both in face-to-face mode and via the Internet.
A website is maintained at the Ruhr-Universität Bochum, where more information on tandem learning can be found, along with ways in which partners can be identified.
See also Tandem München.
Buddy systems for learning foreign languages are a growth area:
busuu: An online social network service where users can help each other to improve their skills in English, Spanish, French, German and Italian. The site has a large online community of native speakers and offers courses based on the Common European Framework of Reference for Languages. Users can also improve their conversational skills by connecting via video-chat directly with native speakers. Each user is both a student of a foreign language and a tutor of his/her own language.
italki: A network that connects people from around the world in a community to learn from each other. italki also helps students connect with teachers for paid online lessons. italki has many free language learning features, such as questions and answers, group discussions, and multimedia materials for self-study. italki is both a social network and a marketplace. The social network helps bring people together to communicate and learn. The marketplace gives students, teachers, and companies the ability to transact online.
Livemocha: A language learning social network that integrates instructional content with a global community of language learners. Members of the network can aid others in learning the languages that they are proficient in while learning other languages themselves. See Brick (2011b), who reports on his students' experiences in using Livemocha.
Palabea: A social network site that connects people who share interests in learning languages and in discovering different cultures. Members can improve their foreign language skills by communicating with native speakers from all over the world in audio or video conferences. Each member is both a student and a teacher. Palabea has created virtual classrooms where all members can upload content on which they can work together and correct one another.
Verbling: The Verbling site allows you to sign up and choose the language you want to learn. Once you join the site during a session time, you are automatically paired with a language speaker who is fluent in the language you wish to learn. The site encourages users to talk to a number of different speakers within each session. So if you speak French and want to learn English, you’ll be paired up with a native English speaker who wants to learn French. You start in one language and halfway through the video session, a timer tells you when to switch to the other.
There is also a Teach You Teach Me group in Second Life.
Language teachers have an extremely difficult task to perform daily. Unlike their colleagues in a subject area such as History, they are not only required to impart knowledge about the target culture, but also to enable students to acquire a knowledge of the structure of a language, to learn wide-ranging vocabulary and to apply their linguistic knowledge as they practise complex discrete and multiple skills. For the language teacher communication is content, not the means of delivery and checking the extent to which delivery has been successful. This suggests that email, the essence of which is communication, is an important tool.
Like any other tool, email will only result in improved standards of achievement if it is used in a planned and integrated way. Email itself gives students the opportunity to communicate in a way which they consider to be of their time and, therefore, important and interesting. Because it is an asynchronous medium, their input can be reflective. They can succeed in sending messages in which the language will be acceptable, if not perfect. They can receive replies swiftly which they can subsequently manipulate in various ways to improve their own linguistic performance, based on models provided by their peers.
The trick is, therefore, to identify points in your teaching programme where you either need information from one or more target language schools, or where your students are likely to create "products" which you would like them to share with others, or where both incoming and outgoing information plays its part.
Having identified the vital point in the programme and thought up an appropriate activity of which email is a component, work your way through the following:
Identify the contribution of the email component, based on the special characteristics which mark out email from other communications media and the way in which those characteristics can promote learning within the context of your programme of work.
Clarify the learning objectives that the use of email is designed to enable students to achieve and share the objectives with the students.
Plan assessment tasks which will enable you to measure the language learning outcomes against the objectives and evaluate the contribution of the email component.
Plan the entire activity, taking into account what students will do before and after the core email activity.
Ensure that language needed for the email activity is taught in advance.
Ensure that this language and language acquired as a result of the email activity is re-cycled, not only in text-based activities, but also in oral and mixed skill work.
Run the assessment tasks and reflect on what you can learn from them about the value of the email activity.
Make out a case to present to your headteacher for the planned use of email in your department. Remember to write the document bearing in mind the reader. S/he might not have too much time! So, begin with a bullet point list of no more than 10 points indicating how the use of email is likely to raise student achievement levels. Then go on to identify just what you need to get going in terms of hardware, classroom network access and access to a networked computer classroom as required by your projected activities.
If you surf the Web, use email or use memory sticks sent to you by other people, you need to be protected against virus invasions. A virus is a nasty program devised by a clever programmer, usually with malicious intent. Viruses can be highly contagious, finding their way on to your computer's hard drive without your being aware of it and causing considerable damage to the software and data stored on it. Viruses can be contracted from files attached to email messages, e.g. Microsoft Word files, or from a memory stick. Be very wary of opening an email attachment of unknown origin, as this is the commonest way of spreading viruses. See Graham Davies's Cautionary Tale, which includes references to viruses, spam, adware and spyware.
Atkinson T. (2002, 2nd Edition) WWW: the Internet, London: CILT.
Bangs P. (2001) EUROCALL 2001 paper titled "Will the Web catch enough flies? Where Web-based learning cannot yet reach".
Bel E. & Ingraham B. (1997) "Understanding the potential of the Internet for language teaching and learning". In Kohn J. et al. (eds.) New horizons in CALL: proceedings of EUROCALL 96, Szombathely, Hungary: Dániel Berzsenyi College.
Berners-Lee T. (1998) The World Wide Web: a very short personal history. Available at: http://www.w3.org/People/Berners-Lee/ShortHistory.html
Bertin J.-C., Gravé P. & Narcy-Combes J.-P. (2010) Second-language distance learning and teaching: theoretical perspectives and didactic ergonomics, Hershey, PA: IGI Global.
Bignell S. & Parson V. (2010) Best practice in virtual worlds teaching: a guide to using problem-based learning in Second Life. Available at: http://previewpsych.org/BPD2.0.pdf
Biesenbach-Lucas S. & Weasenforth D. (2001) "Email and word-processing in the ESL classroom: how the medium affects the message", Language Learning and Technology 5, 1: 135-165: http://llt.msu.edu/vol5num1/weasenforth/default.html
Bradin C. (1997) "The Dark Side of the Web", FLEAT 97 paper: http://edvista.com/claire/darkweb/index.html
Brandl K. (2005) "Are you ready to Moodle?" Language Learning and Technology 9, 2: http://llt.msu.edu/vol9num2/review1/default.html
Brick B. (2011a) "How effective are Web 2.0 language learning sites in facilitating language learning?" Compass: The Journal of Learning and Teaching at the University of Greenwich 3: 57-63.
Brick B. (2011b) "Social Networking Sites and Language Learning", International Journal of Virtual and Personal Learning Environments 2, 3: 18-31.
Britain S. & Liber O. (1999) A framework for pedagogical evaluation of Virtual Learning Environments, JISC Technology Applications (JTAP). Available (28 November 2005) from: http://www.jisc.ac.uk/uploaded_documents/jtap-041.doc
Bryant S. (2000) The story of the Internet, Edinburgh: Penguin Education.
Buckett J. & Stringer G. (2001) "ReLaTe: a case study in videoconferencing for language teaching". In Chambers A. & Davies G. (eds.) Information and Communications Technology: a European perspective, Lisse: Swets & Zeitlinger.
Burston J. (1998) "From CD-ROM to the WWW: coming full circle", CALICO Journal 15, 1-3: 67-74.
Bush M. (1996) "Internet mania. World Wide Web technology: what's hot and what's not!", Multimedia Monitor Newsletter, February 1996 Edition, Philips Business Information, Inc.
Bush V. (1945) "As we may think", The Atlantic Monthly, July 1945.
Davies G. (1998) Exploiting Internet resources offline. Paper presented at the Language Teaching online Conference University of Ghent, Belgium, 8 May 1998.
Davies G. (1999) The Internet: an introduction for language teachers, Camsoft.
Davies G. (2001) Doing it on the Web, Language Learning Journal 24: 34-35, Journal of the Association for Language Learning.
Davies G. (2009a) Virtual worlds: a brief history. This article originally appeared as the preface to Molka-Danielsen & Deutschmann (2009).
Davies G. (2009b) Using virtual worlds, Languages Today 4: 8, Magazine of the Association for Language Learning.
Donaldson R.P. & Kötter M. (1999) "Language learning in cyberspace: teleporting the classroom into the target culture", CALICO Journal 16, 4: 531-558.
Dudeney G. (2000) The Internet and the language classroom, Cambridge: Cambridge University Press.
Dudeney G. & Hockly N. (2006) "Talk to the avatar", The Guardian Weekly, 20 October 2006: http://www.guardian.co.uk/education/2006/oct/20/tefl.nickyhockly
Evans N., Mulvihill T.M. & Brooks N.J. (2008) Mediating the tensions of online learning with Second Life, Journal of Online Education 4, 6.
Felix U. (1998a) Virtual language learning: finding the gems amongst the pebbles, Melbourne: Language Australia.
Felix U. (1998b) "Web-based language learning: a window to the authentic world". In Debski R. & Levy M. (eds.) WorldCALL: Global perspectives on Computer Assisted Language Learning, Lisse: Swets & Zeitlinger.
Felix U. (1999) "Exploiting the Web for language teaching: selected approaches", ReCALL 11, 1: 30-37.
Felix U. (2001) Beyond Babel: language learning online, Melbourne: Language Australia.
Felix U. (ed.) (2003) Language learning online: towards best practice, Lisse: Swets & Zeitlinger.
Gitsaki C. & Taylor R. (1999a) "Internet-based activities for the ESL classroom", ReCALL 11, 1: 47-57.
Gitsaki C. & Taylor R. (1999b) Internet English: WWW-based communication activities, Oxford: Oxford University Press.
Gitsaki C. & Taylor R. (2000) Internet English: WWW-based communication activities. Teacher's book, Oxford: Oxford University Press.
Gläsmann S. (2004) Communicating online, London: CILT.
Godwin-Jones R. (2005) "Skype and podcasting: disruptive technologies for language learning", Language Learning & Technology 9, 3: 9-12: http://llt.msu.edu/vol9num3/emerging/default.html
Gourlay L. (2000) Chambers Guide to English for IT and the Internet, Edinburgh: Chambers. (A useful glossary of terminology.)
Hampel R. & Hauck M. (2003) "Using Lyceum, an audio-graphic conferencing system, to talk at a distance". In Goodfellow D., Fenner A.-B., Garrido C. & Tella S. (eds.) The educational use of ICT in teacher education and distance language learning. Graz: European Centre for Modern Languages of the Council of Europe.
Hampel R. & Hauck M. (2004) "Towards an effective use of audio conferencing in distance language courses", Language Learning and Technology 8, 1: 66-82: http://llt.msu.edu/vol8num1/hampel/default.html
Hanna B.E. & de Nooy J. (2003) "A funny thing happened on the way to the forum: electronic discussion and foreign language learning", Language Learning and Technology 7, 1: 71-85: http://llt.msu.edu/vol7num1/hanna/default.html
The HelpWeb: a guide to getting started on the Internet.
Howe W. (2001 - regularly revised) Walt Howe's Internet Learning Center, a mine of information about the Internet.
Hughes K. (1994) Entering the World Wide Web: a guide to cyberspace - a guide to the Internet, including explanations of the Internet, WWW, hyperlinking etc. Interesting from the historical point of view. Available at: http://www.maths.tcd.ie/local/JUNK/guide/guide.toc.html
Hundsberger S. (2009) Foreign language learning in Second Life and the implications for resource provision in academic libraries, Arcadia Fellowship Programme, Cambridge University Library. Available at: http://arcadiaproject.lib.cam.ac.uk/docs/second_life.pdf
Kern N. (2009) Starting a Second Life. Interesting article by a teacher of English as a Foreign Language on learning how to teach in Second Life: http://slexperiments.edublogs.org/2009/03/03/starting-a-second-life/
Koenraad A.L.M. & Westhoff G.J. (2004) Can you tell a LanguageQuest when you see one? Design criteria for TalenQuests. Paper presented at the EUROCALL 2003 Conference, University of Limerick, Ireland. In Meena Singhal (2004) Proceedings of the First International Online Conference on Second and Foreign Language Teaching and Research, 25-26 September 2004, The Reading Matrix Inc., USA, ISSN 1550-8501.
LeLoup J. & Ponterio R. (2003) "Interactive and multimedia techniques in online language lessons: a sampler", Language Learning and Technology 7, 3: http://llt.msu.edu/vol7num3/net/default.html
Lewis T. & Walker L. (eds.). (2003) Autonomous language learning in tandem, Sheffield: Academy Electronic Publications.
Little D. (2001) "Learner autonomy and the challenge of tandem language learning via the Internet". In Chambers A. & Davies G. (eds.) Information and Communications Technology: a European perspective, Lisse: Swets & Zeitlinger.
Little D. & Brammerts H. (eds.) (1996) A guide to language learning in tandem via the Internet, CLCS Occasional Paper No. 46, Dublin: Trinity College, Centre for Language and Communication Studies.
Little D. & Ushioda E. (1998) "Designing, implementing and evaluating a project in tandem language learning via email", ReCALL 10, 1: 95-101.
Little D., Ushioda E., Appel M.C., Moran J., O'Rourke B. & Schwienhorst K. (1999) Evaluating tandem language learning by email: report on a bilateral project, CLCS Occasional Paper No. 55, Dublin: Trinity College, Centre for Language and Communication Studies.
Molka-Danielsen J. & Deutschmann M. (eds.) (2009) Learning and teaching in the virtual world of Second Life, Tapir Academic Press, Trondheim, Norway, ISBN: 9788251923538.
Nielsen J. (1995) Multimedia and hypertext: the Internet and beyond, Academic Press: Boston.
Nielsen J (1997) Be succinct! Writing for the Web, Alertbox, 15 March 1997.
Nielsen J. (1998) Fighting Linkrot, Alertbox, 14 June 1998.
Nielsen J. (2010) iPad and Kindle reading speeds, Alertbox, 2 July 2010.
O'Dowd R. (2006) Telecollaboration and the development of intercultural communicative competence, Munich: Langenscheidt.
O'Dowd R. (2007) (ed.) Online intercultural exchange: an introduction for foreign language teachers, Clevedon: Multilingual Matters.
O'Reilly T. (2005) What is Web 2.0? Design patterns and business models for the next generation of software.
Peterson M. (2000) "SchMOOze University: A virtual learning environment", TESL-EJ 4, 4. Available at: http://tesl-ej.org/ej16/m2.html
Robb T. (2003) "Google as a Quick 'n Dirty Corpus Tool", TESL-EJ 7, 2. Available at: http://tesl-ej.org/ej26/int.html
Sherwood K. (1998) A beginner's guide to effective email: http://www.webfoot.com/advice/email.top.html. Also translated into German: v. Scheffner T. (1999).
Shield L. (2003) "MOO as a language learning tool". In Felix U. (ed.) Language learning online: towards best practice: Lisse: Swets & Zeitlinger.
Stanford J. (2009) Moodle 1.9 for second language teaching: engaging online language learning activities using the Moodle platform, Birmingham: Packt Publishing: http://www.packtpub.com/moodle-1-9-for-second-language-teaching/book
Stevens V. (2000) Writing for Webheads: an experiment in world friendship through online language learning.
Stevens V. (2007) Second Life and online collaboration through peer to peer distributed learning networks. Paper submitted to the Proceedings of the METSMaC Conference, Abu Dhabi, 17-19 March 2007.
Stevens V. (2008) Second life in education. A multimedia brainstorming wiki prepared in March 2008 in preparation for an article on Second Life for The Linguist. Available at: http://www.vancestevens.com/secondlife_edu.htm
Stickler U. & Hampel R. (2007) "What I think works well...": Learners' evaluation and actual usage of online tools. In Proceedings of the ICL2007 Conference, Villach, Austria, September 2007.
Stiles M. (2007) "Death of the VLE? A challenge to a new orthodoxy", Serials, The Journal for the International Serials Community 20, 1: 31-36. Abstract available at: http://uksg.metapress.com/link.asp?id=55k7732dthrq6gk1
Stoerger S. (2010) "Creating a virtual world mindset: a guide for first time Second Life teachers", The Journal of Distance Education 24, 3.
Svensson P. (2003) "Virtual worlds as arenas for language learning". In Felix U. (ed.) Language learning online: towards best practice: Lisse: Swets & Zeitlinger.
Teeler D. & Gray P. (2000) How to use the Internet in ELT, Harlow: Longman.
Thomas M. (ed.) (2008) Handbook of research on Web 2.0 and second language learning, Hershey, PA: IGI Global.
Townshend K. (1997) Email : using electronic communications in foreign language teaching, London: CILT.
Turkle S. (2010) Alone together: Why we expect more from technology and less from each other, New York: Basic Books.
Vilmi R. (1996) "Helsinki University of Technology email writing project". In Gimeno A.(ed.) Technology enhanced language learning: focus on integration: proceedings EUROCALL 95, Valencia: Universidad Politécnica de Valencia.
Vogel T. (2001) "Learning out of control: some thoughts on the World Wide Web in learning and teaching foreign languages". In Chambers A. & Davies G. (eds.) Information and Communications Technology: a European perspective, Lisse: Swets & Zeitlinger.
Warschauer M. (1995) Email for English teaching: bringing the Internet and computer learning networks into the language classroom, Alexandria VA: TESOL Publications.
Warschauer M. (1996a) "Computer-assisted language learning: an introduction". In Fotos S. (ed.) Multimedia language teaching, Tokyo: Logos International. A copy of this article is located at the ICT4LT site: Warschauer. We thank Mark Warschauer for granting us permission to make his article available at the ICT4LT site.
Warschauer M. (ed.) (1996b) Telecollaboration in foreign language learning, Honolulu, HI: University of Hawai'i Second Language Teaching and Curriculum Center.
Warschauer M. (1999) Electronic literacies: language, culture, and power in online education, Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Wikipedia: A huge online encyclopaedia which is created by the public at large, a typical example of collaborative publishing.
Windeatt S., Hardisty D. & Eastment D. (2000) The Internet, Oxford: Oxford University Press. Don't be misled by the very general sounding title. This is aimed at learners of English as Foreign Language. There is a good deal of useful material and activities which could be adapted for MFL too.
Woodin J. (1997) "Email tandem learning and the communicative curriculum", ReCALL 9, 1: 22-33.
Woodin J. & Ojanguren A. (1996) "Email tandem work for learning languages". In Gimeno A. (ed.) Technology enhanced language learning: focus on integration: proceedings of EUROCALL 95, Valencia: Universidad Politécnica de Valencia.
If you wish to send us feedback on any aspect of the ICT4LT website, use our online Feedback Form or visit the ICT4LT blog.
The Feedback Form and a link to the ICT4LT blog can be found at the bottom of every page at the ICT4LT site.
Document last updated 18 March 2012. This page is maintained by Graham Davies.
Please cite this
Web page as:
Walker R., Davies G. & Hewer S. (2012) Introduction to the Internet. Module 1.5 in Davies G. (ed.) Information and Communications Technology for Language Teachers (ICT4LT), Slough, Thames Valley University [Online]. Available at: http://www.ict4lt.org/en/en_mod1-5.htm [Accessed DD Month YYYY].
ICT4LT Project 2012. This work is licensed under a
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. | 1 | 22 |
<urn:uuid:c8103af1-8f90-44db-98b8-d1e27a271981> | |SUMMIT COUNTY CHAPTER
of the Ohio Genealogical Society
P O Box 2232 Akron OH 44309-2232
Using the Soundex Coding System
A soundex code is a four character representation based on the way a name sounds rather than the way it is spelled. Theoretically, using this system, you should be able to index a name so that it can be found no matter how it was spelled.
The WPA used the soundex coding system in the 1930s to do a partial indexing on 3x5 cards of the 1880 (all households with a child age 10 or younger) and 1900 censuses and a nearly full indexing of the censuses of 1910 (not all states completed) and 1920.
The soundex indexes of the 1880, 1900, 1910 and 1920 census records are available on microfilm at the National Archives (and its branches) and many libraries or other archives. These microfilms also can be purchased or rented from the National Archives or borrowed through Family History Centers. The names are arranged on the soundex indexes by first letter, then numerically within that letter, then alphabetically by the first name of the head of household within each
different soundex code. There is usually a separate card for each individual within the household whose surname is different from that of the head of household.
Besides telling where the original record can be found, the microfilmed soundex cards usually give basic information about each person in the household, such as place of residence, age, sex, relationship to head of household, state born, state where parents were born, etc. However,
all of the information that is contained in the original census records is not included.
Figuring the code
Every soundex code consists of a letter and three numbers, such as
B525. The letter is always the first letter of the surname. The numbers
are assigned this way:
1 = b, p, f, v
2 = c, s, k, g, j, q, x, z
3 = d, t
4 = l
5 = m, n
6 = r
Disregard: a, e, i, o, u, w, y, h
To figure out a surname's code, do this (using JOHNSON as an example):
- Eliminate any a, e, i, o, u, w, y, h: JOHNSON becomes JNSN
- Write the first letter, as is, followed by the codes found in the table above: JNSN = J525
No matter how long or short the surname is, the soundex code is always the first letter of the name followed by three numbers. If you have coded the first letter and three numbers but still have more letters in the name, ignore them. If you have run out of letters in the name before you have three numbers, then add zeroes to the code:
WASHINGTON = WSNGTN = W252 (ignore the ending TN)
KUHNE = KN = K500 (add zeroes to the end)
If you have a surname with a prefix like Van, Von, De, Di, or Le, code
it with and without the prefix because it may be listed under either
code. Van Hoesen could be coded as VanHoesen or as Hoesen. Mac and Mc
are NOT considered prefixes.
Any double letters side by side should be treated as one letter. For
example LLOYD is coded as if it were spelled LOYD. GUTIERREZ is coded
as if it were GUTIEREZ.
Side by side letters with the same value
You may have different letters side by side that have the same code
value. For example PFISTER (P & F are both 1), JACKSON (CKS are all 2).
These letters should be treated as one letter. PFISTER is coded as
PSTR (P236) and JACKSON is coded as JCN (J250).
Thus, variations in spellings or misspellings should produce the same soundex code:
SMITH = S530    SMITHE = S530
SMYTH = S530    SMYTHE = S530
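If you would rather let a computer do the coding, the short Python sketch below follows the rules exactly as described on this page: collapse side-by-side letters that are identical or share a code value, disregard a, e, i, o, u, w, y, h, then keep the first letter plus three digits. (The archival WPA soundex also treats letters separated by an H or W as if they were side by side, a refinement not covered here, so treat this as an illustration of the rules above rather than a definitive implementation.)

# Digit values copied from the table above.
CODE = {}
for letters, digit in (("bpfv", "1"), ("cskgjqxz", "2"), ("dt", "3"),
                       ("l", "4"), ("mn", "5"), ("r", "6")):
    for ch in letters:
        CODE[ch] = digit
# a, e, i, o, u, w, y, h carry no digit and are disregarded.

def soundex(name):
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    # Treat side-by-side letters that are identical or share a code value
    # as one letter (LLOYD -> LOYD, PFISTER -> PISTER, JACKSON -> JACON).
    collapsed = [name[0]]
    for ch in name[1:]:
        prev = collapsed[-1]
        if ch == prev or (ch in CODE and CODE.get(prev) == CODE[ch]):
            continue
        collapsed.append(ch)
    # First letter as is, then the digits of the remaining coded letters,
    # truncated to three digits and padded with zeroes if needed.
    digits = [CODE[ch] for ch in collapsed[1:] if ch in CODE]
    return collapsed[0].upper() + "".join(digits[:3]).ljust(3, "0")

for surname in ("Johnson", "Washington", "Kuhne", "Pfister", "Jackson",
                "Smith", "Smithe", "Smyth", "Smythe"):
    print(surname.upper(), "=", soundex(surname))

Running the sketch prints J525, W252, K500, P236, J250 and S530 for all four Smith spellings, matching the worked examples above.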
Note, however, that some names which are pronounced essentially the
same produce different codes. An example is the "tz" sound in German
names, which is normally pronounced the same as "ce" or "se." Also, the
German "B" is often pronounced as the English "P." Thus the German name
Bentz could be spelled that way or as Benz, Bens, Bents, Bennss, Bense,
Bennss, Bants and Banz, or as Penz, Pentz, Pence, Pens, Pense, Penz,
Pents, Penns, Pense, Penze, Pentze, etc. Indeed, it has been found in
census record indexes under all of these - and more. Remember: Those
making the index have as hard a time reading the handwriting of census
takers as we do. They will sometimes mistake a script "z" for a "y" and
record Penty instead of Pentz, or mistake a "c" for an "e" and record
Penee, for examples.
Therefore, to make sure you don't miss finding your ancestor, you may
have to look under a half dozen or more different soundex codes if you
are searching for the name PENCE (soundex code P520):
BENTZ (and equivalents) = B532 PENTZ (and equivalents) = P532
BENZ (and equivalents) = B520 PENZ (and equivalents) = P520
BENTY (and equivalents) = B530 PENTY (and equivalents) = P530
PENEE = P500
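Reusing the soundex function sketched earlier, a few lines of Python will group any list of candidate spellings by code, which shows at a glance how many different soundex codes (and therefore microfilm reels or index drawers) need to be checked. The spellings below are taken from the examples above:

variants = ["Bentz", "Benz", "Bens", "Bents", "Bense", "Banz", "Benty",
            "Pentz", "Pence", "Pens", "Pense", "Penz", "Penty", "Penee"]
by_code = {}
for spelling in variants:
    by_code.setdefault(soundex(spelling), []).append(spelling)
for code in sorted(by_code):
    print(code, "-", ", ".join(by_code[code]))

Even this short list of plausible spellings spreads across half a dozen codes (B520, B530, B532, P500, P520, P530, P532), which is exactly why it pays to check several.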
Think through the possible variant spellings (and misspellings and misreadings) of the surname you are searching before concluding that it can't be found in the soundex listings. Use your imagination. No mistake is beyond possibility! For instance, the name Pence has been
indexed as Peirce (the reader mistook the written letter "n" for an "i-r" combination) and vice versa.
Provided by: SUMMIT COUNTY CHAPTER, OGS
P O Box 2232
Akron OH 44309-2232
Back to the Summit County Genealogy Home Page
Last modified March 2, 2009
Copyright ©2000 Summit County Chapter OH Genealogical Society. All rights reserved. | 1 | 20 |
<urn:uuid:7277e3a7-d1ee-40a8-9eec-6e886615b50f> | Click the Study Aids tab at the bottom of the book to access your Study Aids (usually practice quizzes and flash cards).
Study Pass is our latest digital product that lets you take notes, highlight important sections of the text using different colors, create "tags" or labels to filter your notes and highlights, and print so you can study offline. Study Pass also includes interactive study aids, such as flash cards and quizzes.
Highlighting and Taking Notes:
If you've purchased the All Access Pass or Study Pass, in the online reader, click and drag your mouse to highlight text. When you do a small button appears – simply click on it! From there, you can select a highlight color, add notes, add tags, or any combination.
If you've purchased the All Access Pass, you can print each chapter by clicking on the Downloads tab. If you have Study Pass, click on the print icon within Study View to print out your notes and highlighted sections.
To search, use the text box at the bottom of the book. Click a search result to be taken to that chapter or section of the book (note you may need to scroll down to get to the result).
View Full Student FAQs
13.2 Financial Institutions
- Distinguish among different types of financial institutions.
- Discuss the services that financial institutions provide and explain their role in expanding the money supply.
For financial transactions to happen, money must change hands. How do such exchanges occur? At any given point in time, some individuals, businesses, and government agencies have more money than they need for current activities; some have less than they need. Thus, we need a mechanism to match up savers (those with surplus money that they’re willing to lend out) with borrowers (those with deficits who want to borrow money). We could just let borrowers search out savers and negotiate loans, but the system would be both inefficient and risky. Even if you had a few extra dollars, would you lend money to a total stranger? If you needed money, would you want to walk around town looking for someone with a little to spare?
Depository and Nondepository Institutions
Now you know why we have financial institutions: they act as intermediaries between savers and borrowers and they direct the flow of funds between them. With funds deposited by savers in checking, savings, and money market accounts, they make loans to individual and commercial borrowers. In the next section, we’ll discuss the most common types of depository institutions (banks that accept deposits), including commercial banks, savings banks, and credit unions. We’ll also discuss several nondepository institutions (which provide financial services but don’t accept deposits), including finance companies, insurance companies, brokerage firms, and pension funds.
Commercial banks are the most common financial institutions in the United States, with total financial assets of about $13.5 trillion (85 percent of the total assets of the banking institutions).Insurance Information Institute, Financial Services Fact Book 2010, Banking: Commercial Banks, http://www.fsround.org/publications/pdfs/Financial_Services_Factbook_2010.pdf (accessed November 7, 2011). They generate profit not only by charging borrowers higher interest rates than they pay to savers but also by providing such services as check processing, trust- and retirement-account management, and electronic banking. The country’s 7,000 commercial banks range in size from very large (Bank of America, J.P. Morgan Chase) to very small (local community banks). Because of mergers and financial problems, the number of banks has declined significantly in recent years, but, by the same token, surviving banks have grown quite large. If you’ve been with one bank over the past ten years or so, you’ve probably seen the name change at least once or twice.
Savings banks (also called thrift institutions and savings and loan associations, or S&Ls) were originally set up to encourage personal saving and provide mortgages to local home buyers. Today, however, they provide a range of services similar to those offered by commercial banks. Though not as dominant as commercial banks, they’re an important component of the industry, holding total financial assets of almost $1.5 trillion (10 percent of the total assets of the banking institutions).Insurance Information Institute, Financial Services Fact Book 2010, Banking: Commercial Banks, http://www.fsround.org/publications/pdfs/Financial_Services_Factbook_2010.pdf (accessed November 7, 2011). The largest S&L, Sovereign Bancorp, has close to 750 branches in nine Northeastern states.Todd Wallack, “Sovereign Making Hub its Home Base,” Boston.com, http://articles.boston.com/2011-08-16/business/29893051_1_sovereign-spokesman-sovereign-bank-deposits-and-branches (accessed November 7, 2011). Savings banks can be owned by their depositors (mutual ownership) or by shareholders (stock ownership).
To bank at a credit union, you must be linked to a particular group, such as employees of United Airlines, employees of the state of North Carolina, teachers in Pasadena, California, or current and former members of the U.S. Navy. Credit unions are owned by their members, who receive shares of their profits. They offer almost anything that a commercial bank or savings and loan does, including savings accounts, checking accounts, home and car loans, credit cards, and even some commercial loans.Pennsylvania Association of Community Bankers, “What’s the Difference?,” http://www.pacb.org/banks_and_banking/difference.html (accessed November 7, 2011). Collectively, they hold about $812 billion in financial assets (around 5 percent of the total assets of the financial institutions).
Figure 13.3 "Where Our Money Is Deposited" summarizes the distribution of assets among the nation’s depository institutions.
Figure 13.3 Where Our Money Is Deposited
Finance companies are nondeposit institutions because they don’t accept deposits from individuals or provide traditional banking services, such as checking accounts. They do, however, make loans to individuals and businesses, using funds acquired by selling securities or borrowed from commercial banks. They hold about $1.9 trillion in assets (Insurance Information Institute, Financial Services Fact Book 2010, Banking: Commercial Banks, http://www.fsround.org/publications/pdfs/Financial_Services_Factbook_2010.pdf, accessed November 7, 2011). Those that lend money to businesses, such as General Electric Capital Corporation, are commercial finance companies, and those that make loans to individuals or issue credit cards, such as Citigroup, are consumer finance companies. Some, such as General Motors Acceptance Corporation, provide loans to both consumers (car buyers) and businesses (GM dealers).
Insurance companies are nondeposit institutions that sell protection against losses incurred by illness, disability, death, and property damage. To finance claims payments, they collect premiums from policyholders, which they invest in stocks, bonds, and other assets. They also use a portion of their funds to make loans to individuals, businesses, and government agencies.
Companies like A.G. Edwards & Sons and T. Rowe Price, which buy and sell stocks, bonds, and other investments for clients, are brokerage firms (also called securities investment dealers). A mutual fund invests money from a pool of investors in stocks, bonds, and other securities. Investors become part owners of the fund. Mutual funds reduce risk by diversifying investment: because assets are invested in dozens of companies in a variety of industries, poor performance by some firms is usually offset by good performance by others. Mutual funds may be stock funds, bond funds, or money market funds, which invest in safe, highly liquid securities. (Recall our definition of liquidity in Chapter 12 “The Role of Accounting in Business” as the speed with which an asset can be converted into cash.)
Finally, pension funds, which collect and manage contributions made by participating employees and employers and provide members with retirement income, are also nondeposit institutions.
You can appreciate the diversity of the services offered by commercial banks, savings banks, and credit unions by visiting their Web sites. For example, Wells Fargo promotes services to four categories of customers: individuals, small businesses, corporate and institutional clients, and affluent clients seeking “wealth management.” In addition to traditional checking and savings accounts, the bank offers automated teller machine (ATM) services, credit cards, and debit cards. It lends money for homes, cars, college, and other personal and business needs. It provides financial advice and sells securities and other financial products, including individual retirement accounts (IRAs), personal retirement accounts by which investors can save money that’s tax free until they retire. Wells Fargo even offers life, auto, disability, and homeowners insurance. It also provides electronic banking for customers who want to check balances, transfer funds, and pay bills online (see Wells Fargo, https://www.wellsfargo.com/, accessed November 7, 2011).
How would you react if you put your life savings in a bank and then, when you went to withdraw it, learned that the bank had failed—that your money no longer existed? This is exactly what happened to many people during the Great Depression. In response to the crisis, the federal government established the Federal Deposit Insurance Corporation (FDIC), a government agency that regulates banks and insures deposits in its member banks up to $250,000, in 1933 to restore confidence in the banking system. The FDIC insures deposits in commercial banks and savings banks up to $250,000. So today if your bank failed, the government would give you back your money (up to $250,000). The money comes from fees charged to member banks.
To decrease the likelihood of failure, various government agencies conduct periodic examinations to ensure that institutions are in compliance with regulations. Commercial banks are regulated by the FDIC, savings banks by the Office of Thrift Supervision, and credit unions by the National Credit Union Administration. As we’ll see later in the chapter, the Federal Reserve System also has a strong influence on the banking industry.
Crisis in the Financial Industry (and the Economy)
What follows is an interesting, but scary, story about the current financial crisis in the banking industry and its effect on the economy. In the years between 2001 and 2005, lenders made billions of dollars in subprime adjustable-rate mortgages (ARMs) to American home buyers. Subprime loans are made to home buyers who don’t qualify for market-set interest rates because of one or more risk factors—income level, employment status, credit history, ability to make only a very low down payment. In 2006 and 2007, however, housing prices started to go down. Many homeowners with subprime loans, including those with ARMs whose rates had gone up, were able neither to refinance (to lower their interest rates) nor to borrow against their homes. Many of these homeowners got behind in mortgage payments, and foreclosures became commonplace—1.3 million in 2007 alone.Justin Lahart, “Egg Cracks Differ in Housing, Finance Shells,” Wall Street Journal, http://online.wsj.com/article/SB119845906460548071.html?mod=googlenews_wsj (accessed November 7, 2011). By April 2008, 1 in every 519 American households had received a foreclosure notice.RealtyTrac Inc., “Foreclosure Activity Increases 4 Percent in April,” realtytrac.com, http://www.realtytrac.com/content/press-releases/ (accessed November 7, 2011). By August, 9.2 percent of the $12 trillion in U.S. mortgage loans was delinquent or in foreclosure.Mortgage Bankers Association, “Delinquencies and Foreclosures Increase in Latest MBA National Delinquency Survey,” September 5, 2008, http://www.mbaa.org/NewsandMedia/PressCenter/64769.htm (accessed November 11, 2011); Charles Duhigg, “Loan-Agency Woes Swell from a Trickle to a Torrent,” nytimes.com http://www.nytimes.com/2008/07/11/business/11ripple.html?ex=1373515200&en= 8ad220403fcfdf6e&ei=5124&partner=permalink&exprod=permalink (accessed November 11, 2011).
The repercussions? Banks and other institutions that made mortgage loans were the first sector of the financial industry to be hit. Largely because of mortgage-loan defaults, profits at more than 8,500 U.S. banks dropped from $35 billion in the fourth quarter of 2006 to $650 million in the corresponding quarter of 2007 (a decrease of 89 percent). Bank earnings for the year 2007 declined 31 percent and dropped another 46 percent in the first quarter of 2008.Federal Deposit Insurance Corporation, Quarterly Banking Profile (Fourth Quarter 2007), http://www.2.fdic.gov/qbp/2007dec/qbp.pdf (accessed September 25, 2008); FDIC, Quarterly Banking Profile (First Quarter 2008), at http://www.2.fdic.gov/qbp/2008mar/qbp.pdf (accessed September 25, 2008).
Losses in this sector were soon felt by two publicly traded government-sponsored organizations, the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac). Both of these institutions are authorized to make loans and provide loan guarantees to banks, mortgage companies, and other mortgage lenders; their function is to make sure that these lenders have enough money to lend to prospective home buyers. Between them, Fannie Mae and Freddie Mac backed approximately half of that $12 trillion in outstanding mortgage loans, and when the mortgage crisis hit, the stock prices of the two corporations began to drop steadily. In September 2008, amid fears that both organizations would run out of capital, the U.S. government took over their management.
Freddie Mac also had another function: to increase the supply of money available in the country for mortgage loans and new home purchases, Freddie Mac bought mortgages from banks, bundled these mortgages, and sold the bundles to investors (as mortgage-backed securities). The investors earned a return because they received cash from the monthly mortgage payments. The banks that originally sold the mortgages to Freddie Mac used the cash they got from the sale to make other loans. So investors earned a return, banks got a new influx of cash to make more loans, and individuals were able to get mortgages to buy the homes they wanted. This seemed like a good deal for everyone, so many major investment firms started doing the same thing: they bought individual subprime mortgages from original lenders (such as small banks), then pooled the mortgages and sold them to investors.
But then the bubble burst. When many home buyers couldn’t make their mortgage payments (and investors began to get less money and consequently their return on their investment went down), these mortgage-backed securities plummeted in value. Institutions that had invested in them—including investment banks—suffered significant losses.Shawn Tully, “Wall Street’s Money Machine Breaks Down,” Fortune, CNNMoney.com, November 12, 2007, http://money.cnn.com/magazines/fortune/fortune_archive/2007/11/26/101232838/index.htm (accessed November 7, 2011). In September 2008, one of these investment banks, Lehman Brothers, filed for bankruptcy protection; another, Merrill Lynch, agreed to sell itself for $50 billion. Next came American International Group (AIG), a giant insurance company that insured financial institutions against the risks they took in lending and investing money. As its policyholders buckled under the weight of defaulted loans and failed investments, AIG, too, was on the brink of bankruptcy, and when private efforts to bail it out failed, the U.S. government stepped in with a loan of $85 billion.See Greg Robb et al., “AIG Gets Fed Rescue in Form of $85 Billion Loan,” MarketWatch, September 16, 2008, http://www.marketwatch.com/story/aig-gets-fed-rescue-in-form-of-85-billion-loan (accessed November 7, 2011). The U.S. government also agreed to buy up risky mortgage-backed securities from teetering financial institutions at an estimated cost of “hundreds of billions.”Mortgage Bankers Association, “Delinquencies and Foreclosures Increase in Latest MBA National Delinquency Survey,” Press Release, September 5, 2008, http://www.mbaa.org/NewsandMedia/PressCenter/64769.htm (accessed November 7, 2011). And the banks started to fail—beginning with the country’s largest savings and loan, Washington Mutual, which had 2,600 locations throughout the country. The list of failed banks kept getting longer: by November of 2008, it had grown to nineteen.
The economic troubles that began in the banking industry as a result of the subprime crisis spread to the rest of the economy. Credit markets froze up, and it became difficult for individuals and businesses to borrow money. Consumer confidence dropped, people stopped spending, businesses cut production, sales dropped, company profits fell, and many lost their jobs. It would be nice if this story had an ending (and even nicer if it were positive), but it might take years before we know the ending. At this point, all we know is that the economy is going through some very difficult times and no one is certain about the outcome. As we head into 2012, one in three Americans believes the United States is headed in the wrong direction. Our debt has been downgraded by Moody’s, a major credit rating agency. Unemployment seems stuck at around 9 percent, with the long-term unemployed making up the biggest portion of the jobless since records began in 1948. “As the superpower’s clout seems to ebb towards Asia, the world’s most consistently inventive and optimistic country has lost its mojo” (“America’s Missing Middle,” The Economist, November 2011, 15).
How Banks Expand the Money Supply
When you deposit money, your bank doesn’t set aside a special pile of cash with your name on it. It merely records the fact that you made a deposit and increases the balance in your account. Depending on the type of account, you can withdraw your share whenever you want, but until then, it’s added to all the other money held by the bank. Because the bank can be pretty sure that all its depositors won’t withdraw their money at the same time, it holds on to only a fraction of the money that it takes in—its reserves. It lends out the rest to individuals, businesses, and the government, earning interest income and expanding the money supply.
The Money Multiplier
Precisely how do banks expand the money supply? To find out, let’s pretend you win $10,000 at the blackjack tables of your local casino. You put your winnings into your savings account immediately. The bank will keep a fraction of your $10,000 in reserve; to keep matters simple, we’ll use 10 percent. The bank’s reserves, therefore, will increase by $1,000 ($10,000 × 0.10). It will then lend out the remaining $9,000. The borrowers (or the parties to whom they pay it out) will then deposit the $9,000 in their own banks. Like your bank, these banks will hold onto 10 percent of the money ($900) and lend out the remainder ($8,100). Now let’s go through the process one more time. The borrowers of the $8,100 (or, again, the parties to whom they pay it out) will put this amount into their banks, which will hold onto $810 and lend the remaining $7,290. As you can see in Figure 13.4 "The Effect of the Money Multiplier", total bank deposits would now be $27,100. Eventually, bank deposits would increase to $100,000, bank reserves to $10,000, and loans to $90,000. A shortcut for arriving at these numbers depends on the concept of the money multiplier (the amount by which an initial bank deposit will expand the money supply), which is determined using the following formula:
Money multiplier = 1 / Reserve requirement
In our example, the money multiplier is 1/0.10 = 10. So your initial deposit of $10,000 expands into total deposits of $100,000 ($10,000 × 10), additional loans of $90,000 ($9,000 × 10), and increased bank reserves of $10,000 ($1,000 × 10). In reality, the multiplier will actually be less than 10. Why? Because some of the money loaned out will be held as currency and won’t make it back into the banks.
Figure 13.4 The Effect of the Money Multiplier
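The arithmetic in this example is easy to verify with a short script. The sketch below is purely illustrative: it uses the chapter’s figures (a $10,000 initial deposit and a 10 percent reserve requirement) together with the simplifying assumption that every loan is redeposited in full, so no currency leaks out of the banking system.

```python
# Replay the deposit-expansion example: $10,000 deposited at a 10 percent
# reserve requirement, with every loan redeposited at the next bank.
initial_deposit = 10_000
reserve_requirement = 0.10

total_deposits = total_reserves = total_loans = 0.0
deposit = initial_deposit
for _ in range(1_000):                 # enough rounds for the amounts to become negligible
    total_deposits += deposit
    reserve = deposit * reserve_requirement
    loan = deposit - reserve
    total_reserves += reserve
    total_loans += loan
    deposit = loan                     # the loan is redeposited and the cycle repeats

money_multiplier = 1 / reserve_requirement
print(f"Money multiplier: {money_multiplier:.0f}")   # 10
print(f"Total deposits:  ${total_deposits:,.0f}")    # about $100,000
print(f"Total reserves:  ${total_reserves:,.0f}")    # about $10,000
print(f"Total loans:     ${total_loans:,.0f}")       # about $90,000
```

Raising the reserve requirement in the script (say, to 0.12, as in the exercise at the end of this section) shrinks the multiplier and the total amount of lending.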
- Financial institutions serve as financial intermediaries between savers and borrowers and direct the flow of funds between the two groups.
- Those that accept deposits from customers—depository institutions—include commercial banks, savings banks, and credit unions; those that don’t—nondepository institutions—include finance companies, insurance companies, and brokerage firms.
- Financial institutions offer a wide range of services, including checking and savings accounts, ATM services, and credit and debit cards. They also sell securities and provide financial advice.
- A bank holds onto only a fraction of the money that it takes in—an amount called its reserves—and lends the rest out to individuals, businesses, and governments. In turn, borrowers put some of these funds back into the banking system, where they become available to other borrowers. The money multiplier effect ensures that the cycle expands the money supply.
Does the phrase “The First National Bank of Wal-Mart” strike a positive or negative chord? Wal-Mart isn’t a bank, but it does provide some financial services: it offers a no-fee Wal-Mart Discover credit card with a 1 percent cash-back feature, cashes checks and sells money orders through an alliance with MoneyGram International, and houses bank branches in more than a thousand of its superstores. Through a partnering arrangement with SunTrust Banks, the retailer has also set up in-store bank operations at a number of outlets under the cobranded name of “Wal-Mart Money Center by SunTrust.” A few years ago, Wal-Mart made a bold attempt to buy several banks but dropped the idea when it encountered stiff opposition. Even so, some experts say that it’s not a matter of whether Wal-Mart will become a bank, but a matter of when. What’s your opinion? Should Wal-Mart be allowed to enter the financial-services industry and offer checking and savings accounts, mortgages, and personal and business loans? Who would benefit if Wal-Mart became a key player in the financial-services arena? Who would be harmed?
Congratulations! You just won $10 million in the lottery. But instead of squandering your newfound wealth on luxury goods and a life of ease, you’ve decided to stay in town and be a financial friend to your neighbors, who are hardworking but never seem to have enough money to fix up their homes or buy decent cars. The best way, you decide, is to start a bank that will make home and car loans at attractive rates. On the day that you open your doors, the reserve requirement set by the Federal Reserve System is 10 percent. What’s the maximum amount of money you can lend to residents of the town? What if the Fed raises the reserve requirement to 12 percent? Then how much could you lend? In changing the reserve requirement from 10 percent to 12 percent, what’s the Fed trying to do—curb inflation or lessen the likelihood of a recession? Explain how the Fed’s action will contribute to this goal.
the farmer approach, the bird got ready for breakfast. This scenario
happened over and over until, one morning, the farmer arrived and,
instead of feeding the fowl, wrung its neck.
The point is this: The past is no guarantor of the
future. Though things that have happened before, even regularly, can
and often do happen again, they don't, automatically, have to. The
unexpected does arise and often when least expected (which is part of
what makes it unexpected).
This concept was hard for many seventeenth- and
eighteenth-century Europeans to grasp. The tremendous advances in
science, particularly through the seminal work of Isaac Newton, led
many to believe that all of nature works through cold, uncaring, and
unvarying laws. Once these laws were understood, it was conceivable (if
enough other information were given) that a person could know
everything that would happen in the future because everything—from what
the king would want for dessert on New Year's Eve to the number of
hailstones in the next hailstorm over Paris—could be predicted with certainty.
By the early twentieth century, however, scientists
like Niels Bohr, Max Planck, and Erwin Schrödinger—with their
discoveries in quantum physics—brought these deterministic assumptions
into great question. According to quantum theory, reality at its most
fundamental level reveals itself in a transitory, elusive, even
statistical, manner, so that we can know only the probability of
events, nothing more. Gone, now, was the clockwork universe of the
previous few centuries. Einstein, responding incredulously to quantum
uncertainty, once said, "I shall never believe that God plays dice with the world."
No, God doesn't. But He can be full of surprises, and
some of His most unexpected ones appear in the topic for this
quarter—the book of Jonah, which on the surface seems filled with the
uncertainty and surprise of the quantum realm, though, in fact, it is
based on a certitude more solid and constant than the physics of
seventeenth- and eighteenth-century Europe.
First, there's Jonah, a prophet who refuses to accept
his call—hardly the usual biblical paradigm, to be sure. Though a
Daniel he isn't, a prophet he, nevertheless, is: "He restored the coast
of Israel from the entering of Hamath unto the sea of the plain,
according to the word of the Lord God of Israel, which he spake by the
hand of his servant Jonah, the son of Amittai, the prophet,
which was of Gathhepher" (2 Kings 14:25, emphasis supplied). This is
the same Jonah, son of Amittai (hard as it, at times, might be to
believe), whom we'll be following for the next few months.
Next, this prophet flees from the Lord in a boat (A
prophet fleeing the Lord?), only to have the Lord send a storm that
threatens to sink the vessel. Amid the storm, it's the pagans, not the
Hebrew, who pray for deliverance (another surprise), and Jonah is
thrown overboard, only to get swallowed alive by a big fish that holds
him in its stomach for three days before spewing him out, alive, on the dry land.
Jonah, finally, after all this prodding, delivers the
message of warning to the Ninevites, who en masse repent from their
evil ways, sparing themselves divine condemnation (a rather surprising
turn of events, as well). But the greatest surprise comes next, because
Jonah becomes saddened, even angry, over their repentance. A
prophet angry over those who repent and turn away from sin?
(As said before, this book is full of surprises.)
Yet, the most important point of Jonah isn't found in
the surprises that spill out of its 48 verses but in the one thing
that's constant all the way through those verses, and that is, God's
incredible grace toward wayward, erring people, even wayward, erring
prophets like Jonah. If the Lord would continue to work with someone
who squandered privileges and ignored light, then there's hope for us,
we who surely have done as badly as this weak-willed, spiritual
pipsqueak of a prophet who should have known better than to do what he
did, even though he did it just the same. Of course, grace is the most
gracious when bestowed upon those who know better but do wrong anyway
(Who among us can't relate?).
The focus of Jonah, then, really isn't on the "great
fish" that swallowed Jonah alive but on "the great God" who prepared
that fish. The great God who never manifested His greatness more than
when He was the most "helpless"; that is, when in the person of His Son
He was nailed to the cross, His life crushed out for the sins of those
who don't know better and even, maybe especially, of those who do. In
one sense, it hardly matters which, because we're all spiritual charity
cases, taking where we don't give, receiving what we don't deserve, and
getting what we don't earn . . . like Jonah.
Many thanks to this quarter's able author, Dr. JoAnn
Davidson, assistant professor of theology, in the Department of
Theology and Christian Philosophy, at the Andrews University Seminary.
Her love for the book of Jonah, and especially for the God revealed in
that book, is apparent all through this Bible Study Guide.
Challenging, baffling, even occasionally disturbing,
the book of Jonah, with all its surprises—maybe even through those
surprises—reveals one truth that never changes: God's love, for even
the most unlovable, which, at times, is all of us.
(all lessons may not be posted)
Sabbath School Study Helps
Jerry Giardina of Pecos, Texas, assisted by his wife,
Cheryl, prepares a series of helps to accompany the Sabbath School
lesson. He includes all related scripture and most EGW quotations.
Jerry has chosen the "New King James Version" of the scriptures this
quarter. It is used with permission. The study helps are
provided in three word-processing versions: WordPerfect; RTF for our
Mac friends; and HTML (Web page).
Last updated on September 11,
12501 Old Columbia Pike, Silver Spring, MD 20904.
Principal Contributors: JoAnn Davidson
Editor: Clifford Goldstein
Associate Editor: Lyndelle Brower Chiomenti.
Editorial Production Manager: Soraya Homayouni Parish.
Art and Design: Lars Justinen.
Pacific Press Coordinator: Paul A. Hey.
© 2003 Office of the Adult Bible Study Guide,
General Conference of Seventh-day Adventist. All Rights Reserved.
Excel statistical functions: GROWTH
Article ID: 828526
This article describes the GROWTH function in Microsoft Office Excel 2003 and in later versions of Excel, illustrates how the function is used, and compares results of the function for Excel 2003 and for later versions of Excel with results of GROWTH in earlier versions of Excel. GROWTH is evaluated by calling the related function, LINEST. Extensive changes to LINEST for Excel 2003 and for later versions of Excel are summarized, and their implications for GROWTH are noted.
Microsoft Excel 2004 for Macintosh information
The statistical functions in Excel 2004 for Mac were updated by using the same algorithms that were used to update the statistical functions in Excel 2003 and in later versions of Excel. Any information in this article that describes how a function works or how a function was modified for Excel 2003 or for later versions of Excel also applies to Excel 2004 for Mac.
The GROWTH(known_y's, known_x's, new_x's, constant) function is used to perform a regression analysis where an exponential curve is fitted. A least squares criterion is used, and GROWTH tries to find the best fit under that criterion. Known_y's represent data on the "dependent variable" and known_x's represent data on one or more "independent variables". The GROWTH Help file discusses rare cases where the second or third argument may be omitted.
Assuming that there are p predictor variables, GROWTH essentially calls LOGEST. LOGEST fits an equation of the form:
y = b * (m1^x1) * (m2^x2) * ... * (mp^xp)
Values of the coefficients, b, m1, m2, ..., mp are determined that give the best fit to the y data.
If the last argument "constant" is set to TRUE, you want the regression model to include the multiplicative coefficient b in the regression model. If set to FALSE, b is excluded by essentially setting it to 1. The last argument is optional; if the argument is omitted it is interpreted as TRUE.
For ease of exposition in the remainder of this article, assume that data is arranged in columns so that known_y's is a column of y data and known_x's is one or more columns of x data. Of course the dimensions (lengths) of each of these columns must be equal. New_x's will also be assumed to be arranged in columns and there must be the same number of columns for new_x's as for known_x's. All our observations below are equally true if the data is not arranged in columns, but it is just easier to discuss this single (most frequently used) case.
After you compute the best fit regression model (by essentially calling Excel's LOGEST function), GROWTH returns predicted values that are associated with new_x's.
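To make the GROWTH–LOGEST relationship concrete, the following Python sketch fits the same kind of exponential model by ordinary least squares on the natural logarithm of y and then evaluates the fitted model at new x values. It is only an illustration of the idea described above, not a reproduction of Excel's internal algorithm, and the sample data and function name are invented for the example.

```python
import numpy as np

def growth(known_y, known_x, new_x, constant=True):
    """Rough analogue of Excel's GROWTH: fit y = b * m1^x1 * ... * mp^xp by
    least squares on ln(y), then return predicted y values for new_x."""
    y = np.asarray(known_y, dtype=float)
    X = np.asarray(known_x, dtype=float)             # m rows, p predictor columns
    Xn = np.asarray(new_x, dtype=float)
    if constant:                                     # model the multiplicative coefficient b
        X = np.column_stack([np.ones(len(X)), X])
        Xn = np.column_stack([np.ones(len(Xn)), Xn])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    return np.exp(Xn @ coef)

# Hypothetical data in which y grows roughly exponentially with x
x = [[1], [2], [3], [4], [5]]
y = [2.1, 4.2, 7.9, 16.5, 31.8]
print(growth(y, x, [[6], [7]]))                      # predictions for two new x values
```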
This article uses examples to show how GROWTH relates to LOGEST and to point out problems with LOGEST in versions of Excel that are earlier than Excel 2003 that translate to problems with GROWTH. GROWTH effectively calls LOGEST, executes LOGEST, uses regression coefficients in LOGEST output in its calculation of predicted y values that are associated with each row of new_x's, and presents this column of predicted y values to you. Therefore, you must know about problems in the execution of LOGEST. When LOGEST is called, it in turn effectively calls LINEST. While code for GROWTH and LOGEST have not been rewritten for Excel 2003 and for later versions of Excel, extensive changes (and improvements) in LINEST code have been made.
As supplements to this article, the following article about LINEST is highly recommended. It contains several examples and documents problems with LINEST in versions of Excel that are earlier than Excel 2003.
For more information about LINEST, click the following article number to view the article in the Microsoft Knowledge Base:
828533 Description of the LINEST function in Excel 2003 and in Excel 2004 for Mac (http://support.microsoft.com/kb/828533/)
The LINEST Help file, as revised for Excel 2003, is also recommended.
The following article about LOGEST explains how LOGEST interacts with LINEST. These details are omitted here.
For more information, click the following article number to view the article in the Microsoft Knowledge Base:
828528 Excel statistical functions: LOGEST (http://support.microsoft.com/kb/828528/)
Because the focus in this article is on numeric problems in versions of Excel that are earlier than Excel 2003, this article does not have many practical examples of the use of GROWTH. The Help file for GROWTH contains useful examples.
The arguments, known_y's, known_x's, and new_x's must be arrays or cell ranges that have related dimensions. If known_y's is one column by m rows then known_x's is c columns by m rows where c is greater than or equal to one. C is the number of predictor variables; m is the number of data points. New_x's must then be c columns by r rows where r is greater than or equal to one. (Similar relationships in dimensions must hold if data is laid out in rows instead of columns.) Constant is a logical argument that must be set to TRUE or FALSE (or 0 or 1 that Excel interprets as FALSE or TRUE, respectively). The last three arguments to GROWTH are all optional; see the GROWTH Help file for options of omitting the second argument, third argument, or both; omitting the fourth argument is interpreted as TRUE.
The most common usage of GROWTH includes two ranges of cells that contain the data, such as GROWTH(A1:A100, B1:F100, B101:F108, TRUE). Note that because there is typically more than one predictor variable, the second argument in this example contains multiple columns. In this example, there are one hundred subjects, one dependent variable value (known_y) for each subject, and five dependent variable values (known_x's) for each subject. There are eight additional hypothetical subjects where you want to use GROWTH to compute predicted y values.
Example of usage
An Excel worksheet example is provided to illustrate the following key concepts:
To illustrate the GROWTH function, create a blank Excel worksheet, copy the following table, select cell A1 in your blank Excel worksheet and then paste the entries so that the table following fills cells A1:K35 in your worksheet.
Note After you paste this table in your new Excel worksheet, click the Paste Options button, and then click Match Destination Formatting. With the pasted range still selected, use one of the following procedures, as appropriate for the version of Excel that you are running:
GROWTH and LOGEST can be viewed as interacting in the following steps:
Predictor columns (known_x's) are collinear if at least one column, c, can be expressed as a sum of multiples of others, c1, c2, and other columns. Column c is frequently called redundant because the information that it contains can be constructed from the columns c1, c2, and other columns. The fundamental principle in the existence of collinearity is that results should be unaffected by whether a redundant column is included in the original data or removed from the original data. Because LINEST in versions of Excel that are earlier than Excel 2003 did not look for collinearity, this principle was easily violated. Predictor columns are almost collinear if at least one column, c, can be expressed as almost equal to a sum of multiples of others, c1, c2, and other columns. In this case "almost equal" means a very small sum of squared deviations of entries in c from corresponding entries in the weighted sum of c1, c2, and other columns. "Very small" might be less than 10^(-12), for example.
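One quick way to see whether a set of predictor columns is collinear, or almost collinear, in the sense just described is to compare the numerical rank of the predictor matrix with its number of columns. The sketch below only illustrates the concept, using the same kind of relationship as in the worksheet example (one column equal to another column plus the constant column); it is not how Excel itself detects collinearity.

```python
import numpy as np

# Hypothetical predictors: C equals B plus the constant column of ones,
# so one column is redundant and the matrix is rank deficient.
B = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
C = B + 1.0
ones = np.ones_like(B)                 # the column Excel adds when constant is TRUE
X = np.column_stack([B, C, ones])

rank = np.linalg.matrix_rank(X, tol=1e-12)
print(rank, X.shape[1])                # 2 3 -> rank < number of columns, so collinearity
```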
The first model, in rows 10 to 12, uses columns B and C as predictors and requests Excel to model the constant (last argument set to TRUE). Excel then effectively inserts an additional predictor column that looks just like cells D2:D6. It is easy to notice that entries in column C in rows 2 to 6 are exactly equal to the sum of corresponding entries in columns B and D. Therefore, there is collinearity present because column C is a sum of multiples of the following items: column B (multiplied by one) and the inserted constant column shown in column D (multiplied by one).
The second model, in rows 14 to 16, is one that any version of Excel can handle successfully. There is no collinearity, and the user again requests Excel to model the constant. This model is included here for the following reasons:
In the second model in rows 30 to 35, there is no collinearity and no column removed. You can see that the predicted y values are the same in both models. This issue occurs because removing a redundant column that is a sum of multiples of others does not reduce the goodness of fit of the resulting model. Such columns are removed precisely because they represent no value added in trying to find the best least squares fit. Also, if you examine the LOGEST output in cells I23:K35 in Excel 2003 and in later versions of Excel, you will notice that the last three rows of the output tables are the same. Additionally, the entries in cells I31:J32 and cells J24:K25 coincide. This demonstrates that the same results are obtained when column C is included in the model, but found to be redundant (output in cells I24:K28) as when column C was eliminated before LOGEST was run (output in cells I31:J35). This satisfies the fundamental principle in the existence of collinearity.
In cells A18:C21, Microsoft uses data from Excel 2003 and from later versions of Excel to illustrate how GROWTH takes LOGEST output and computes the relevant predicted y-values. By examining the formulas in cells A20:A21 and cells C20:C21, you can see how LOGEST coefficients are combined with new_x's data in cells B7:C8 for each of the two models (using columns B, C as predictors; using only column B as a predictor).
Collinearity is identified in LOGEST in Excel 2003 and in later versions of Excel because LOGEST calls LINEST. LINEST uses a completely different approach to solving for the regression coefficients. This approach is QR Decomposition. The LINEST article contains a walkthrough of the QR Decomposition algorithm for a small example.
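As a rough illustration of why the switch to QR Decomposition matters, the sketch below solves a least-squares problem through a QR factorization instead of through the normal equations. This is a generic textbook version of the method, not Excel's actual implementation, and the data are made up.

```python
import numpy as np

def least_squares_qr(X, y):
    """Solve min ||X b - y||^2 via the factorization X = Q R. Avoiding the
    explicit formation of X'X is what makes this approach better behaved
    than the old normal-equations approach when columns are nearly collinear."""
    Q, R = np.linalg.qr(X)              # Q has orthonormal columns, R is upper triangular
    return np.linalg.solve(R, Q.T @ y)  # solve the small system R b = Q^T y

# Hypothetical example: an intercept column plus one predictor
x = np.arange(1.0, 6.0)
X = np.column_stack([np.ones_like(x), x])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])
print(least_squares_qr(X, y))           # [intercept, slope]
```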
Summary of results in earlier versions of Excel
GROWTH results are adversely affected in versions of Excel that are earlier than Excel 2003 because of inaccurate results in LOGEST that, in turn, stem from inaccurate results in LINEST.
LINEST was calculated using an approach that paid no attention to collinearity issues. The existence of collinearity caused roundoff errors, inappropriate standard errors of regression coefficients, and inappropriate degrees of freedom. Sometimes roundoff problems are sufficiently severe that LINEST filled its output table with #NUM!. If, as in the great majority of cases in practice, you can be confident that there were not collinear (or almost collinear) predictor columns, then LINEST would generally provide acceptable results. Therefore, users of GROWTH can be similarly reassured if they can see the absence of collinear (or almost collinear) predictor columns.
Summary of results in Excel 2003 and in later versions of Excel
Improvements in LINEST include switching to the QR Decomposition method of determining regression coefficients. QR Decomposition has the following advantages:
Conclusions
GROWTH's performance has been improved because LINEST has been greatly improved for Excel 2003 and for later versions of Excel. Improvements in LINEST also affect LOGEST, because LOGEST is essentially called by GROWTH. Users of earlier versions of Excel should verify that predictor columns are not collinear before they use GROWTH.
Much of the material presented in this article and in the LINEST article might at first appear alarming to users of versions of Excel that are earlier than Excel 2003. However, it should be noted that collinearity is a problem in only a small percentage of cases. Earlier versions of Excel give acceptable GROWTH results when there is no collinearity.
Fortunately, improvements in LINEST also affect the Analysis ToolPak's linear regression tool (this tool calls LINEST) and two other related Excel functions: LOGEST and TREND.
National Heart, Lung, and Blood Institute
NHLBI provides leadership for a national program in the causes, diagnosis, treatment, and prevention of diseases of the heart, blood vessels, lungs, and blood, and sleep disorders, and in the uses of blood and the management of blood resources. It conducts and supports, through research in its own laboratories and through extramural research grants and contracts, an integrated and coordinated program that includes basic investigations, clinical trials, epidemiological studies, and demonstration and education projects.
Although the major part of the research supported by NHLBI addresses common conditions such as hypertension, coronary heart disease, and chronic obstructive pulmonary disease, a significant amount of research is devoted to rare diseases in children and adults. NHLBI activities related to rare disease research in fiscal year (FY) 2001 are described below.
Heart and Vascular Diseases Program
Abetalipoproteinemia is a recessive disorder characterized by the absence of apoprotein B-containing lipoproteins from plasma. Fat malabsorption is severe and triglyceride accumulation occurs. Acanthocytosis, a rare condition in which the majority of red blood cells have multiple spiny cytoplasmic projections, is common. Additional symptoms appear to be secondary to defects in the transport of vitamin E in blood. Projects using genetic, biochemical, and metabolic approaches to study various aspects of the disease were underway in four grants in FY 2001. The disorder appears to be related to abnormal processing of apolipoprotein B (apoB) due to an absence of the microsomal triglyceride transfer protein (MTP). In FY 2001, studies indicated that MTP is implicated in both apoB lipoprotein and triglyceride secretion. Cells lacking the ability to make MTP are unable to assemble and secrete apoB-containing lipoproteins. However, when MTP production is rectified (through appropriate transfection), apoB-containing lipoproteins are once more assembled and secreted.
Antiphospholipid Syndrome (APS)
APS is characterized by the presence of circulating autoantibodies to certain phospholipids (lipids containing phosphorus). It is clinically manifested by recurrent blood clotting disorders, a history of fetal deaths, and autoimmune diseases such as thrombocytopenia. One NHLBI-supported grant is engaged in efforts to develop more standardized imunoassays that will reliably detect individual antiphospholipid antibodies and is also investigating the role of the syndrome in atherogenesis. Circulating antibodies to oxidized phospholipids, particularly cardiolipin, were found in FY 2001 to correlate with the presence of isoprostanes, strong biomarkers for atherogenesis and a means of indicating the extent of atherosclerosis. Genes of autoantibodies that were cloned on the basis of their ability to bind to oxidized phospholipids have been discovered to play an important role in atherogenesis and to confer protection against certain bacterial infections.
Arrhythmogenic Right Ventricular Dysplasia (ARVD)
ARVD is a family of rare cardiomyopathies that results in sudden cardiac death and heart rhythm disturbances, including fibrillation. Most forms are believed to be due to the inheritance of autosomal dominant mutations in genes whose identities remain largely unknown but that clearly affect myocardial integrity. ARVD is characterized by marked, selective, right ventricular dilatation, myocardial cell death, and cell replacement with fat cells and fibrous tissue. Expression in gene carriers is variable, but in those who display symptoms the outcome is frequently lethal. NHLBI supports work on ARVD at one of its Specialized Centers of Research (SCOR) in Sudden Cardiac Death and sponsors a network of three separate groups to investigate causes of familial forms of ARVD and genotype-phenotype relationships. In FY 2001 the SCOR investigators identified a candidate gene product for a Neuroblastoma apoptosis-related RNA-binding protein that may correspond to a chromosomal mutation identified earlier as being common to patients with the congenital form of ARVD.
Bartter's syndrome, a rare autosomal recessive disease, typically manifests itself through salt imbalance and low blood pressure. Research on Bartter's syndrome is currently being pursued as a part of the NHLBI SCOR program in the Molecular Genetics of Hypertension. The discovery that a mutation in an ATP-sensitive K channel can lead to Bartter's syndrome establishes the genetic heterogeneity of the disease and demonstrates that this K channel may be an important regulator of blood pressure, ion balance, and fluid balance.
Beta-sitosterolemia is a rare inborn error of metabolism characterized by increased absorption of dietary cholesterol and plant and shellfish sterols. Patients with beta-sitosterolemia have a markedly increased risk of premature cardiovascular disease. Effective treatment is not available at present, although a number of drugs are under development. NHLBI supports research into beta-sitosterolemia through its intramural Molecular Diseases Branch and its extramural grant programs. One NHLBI-supported investigator at the Medical University of South Carolina, who is investigating the molecular mechanisms of cholesterol absorption and excretion in families with beta-sitosterolemia, has identified two separate defective genes. Additional research is identifying specific ABCG sterol transporter protein mutations in affected families.
Brugada syndrome is a rare inherited disorder characterized by cardiac electrophysiological abnormalities (specifically, right bundle branch block and ST elevation in the precordial leads) and is associated with a high occurrence of sudden cardiac death. The condition is currently believed to be similar in cause and potential treatment to some forms of the long QT syndrome. Both appear to be caused by mutations at different locations in the SCN5A cardiac muscle sodium ion channel gene and by resulting aberrations in depolarization and repolarization of these cells. One NHLBI-supported study demonstrated in FY 2001 that distinct mutations within a single residue of SCN5A can give rise to either Brugada syndrome (tyrosine to histidine mutation) or to long QT syndrome (tyrosine to cysteine mutation), adding evidence for a close relationship between these disorders.
Congenital Heart Disease
Congenital heart disease affects about 8 in 1,000 live-born infants (32,000 per year in the US), making it the most common birth defect. Abnormal formation of the embryonic heart results in both structural and functional heart defects. It is an important cause of infant mortality, pediatric and adult morbidity, and shortened adult life expectancy. About one-third of affected infants and children require open heart surgery or interventional cardiac catheterization to repair or ameliorate their defects. Approximately the same proportion have associated extracardiac anomalies such as chromosomal abnormalities and syndromes involving other organ systems.
NHLBI has supported research in pediatric cardiovascular medicine since it first funded heart research grants in 1949. Researchers supported by NHLBI have been instrumental in developing diagnostic imaging techniques, including fetal imaging; surgical techniques, including various operations and refinements in cardiopulmonary bypass; and medical therapies now used to ensure healthy survival for most affected children. They have also made significant contributions to the epidemiology of congenital heart disease and to understanding the molecular and genetic basis of normal and abnormal heart development.
A key finding from NHLBI-funded researchers this year was the identification of a new gene, Bop, that is the primary controller in a cascade of genetic events that lead to the development of heart ventricles in mouse embryos. This finding may eventually lead to understanding ventricular malformations in humans.
DiGeorge syndrome occurs with an estimated frequency of 1 in 4,000 live births. It is characterized by many abnormalities, including cardiac outflow tract anomalies, hypoplasia of the thymus and parathyroid glands, cleft palate, facial dysmorphogenesis, learning difficulties, and other neurodevelopmental deficits. It is usually sporadic, but may be inherited, and is caused by deletion of a segment of chromosome 22. The specific gene that is abnormal has not been identified. NHLBI supports both human and animal studies of DiGeorge syndrome through several grants, including two SCORs in Pediatric Cardiovascular Disease. The finding by NHLBI-funded researchers that mice with chromosomal deletions similar to those found in humans with DiGeorge syndrome have deficits in learning and memory could lead to improved treatments for psycho-developmental abnormalities in affected individuals. Such results support the need for a comprehensive therapeutic approach to children with DiGeorge syndrome, such as the team approach developed by a SCOR at the Childrens Hospital of Philadelphia.
The generic drug, doxorubicin (brand name, Adriamycin) is a potent, broad-spectrum antitumor agent effective in treating a variety of cancers including solid tumors and leukemia. Unfortunately, its clinical use is limited by dose-dependent cardiac side effects that lead to degenerative cardiomyopathy, congestive heart failure, and death. In addition, some adult patients treated with the drug when they were children are now developing dilated cardiomyopathy. Endocardial biopsies from patients undergoing doxorubicin therapy reveal a disruption of myofibrils, impairment of microtubule assembly, and a swelling of the endoplasmic reticulum. Doxorubicin cardiotoxicity is also characterized by a dose-dependent decline in mitochondrial oxidative phosphorylation and a decrease in high-energy phosphate pools.
Several NHLBI-supported investigators have reported research advances in the past year. One has demonstrated that cardiac tissue from doxorubicin-treated rats expresses an increased tolerance for withstanding short periods of oxygen deprivation. This observation is providing novel insights into the molecular regulation of compensatory responses that may underlie the adaptation phenomenon that has been widely described for other types of cardiac challenge. The same investigator has observed a potential cardioprotective effect against doxorubicin-induced mitochondrial cardiomyopathy by carvedilol, a non-selective beta-blocker with alpha-1 blocking (vasodilating) and anti-oxidant properties. This promising result provides exciting opportunities for supporting clinical trials of carvedilol as protection against the debilitating side-effects of doxorubicin. This is particularly relevant in that the class of drugs known as beta-adrenergic receptor antagonists, of which carvedilol is one, are currently widely prescribed as safe and effective prophylactic measures for treating many other cardiovascular disorders, including congestive heart failure. Adding doxorubicin-induced cardiomyopathy to the list of indications for carvedilol may prove to be a highly effective means of reducing the incidence and/or severity for cardiac failure that limits the clinical success currently achievable with doxorubicin.
Another investigator is examining explicit pathways through which reactive oxygen species are involved in doxorubicin-induced cardiomyopathy. She has discovered a marked inhibition of activity of the myocardial membrane-associated enzyme phospholipase A2 by clinically relevant concentrations of doxorubicin. This novel observation suggests new means of doxorubicin action and has significant implications for elucidating the mechanisms underlying doxorubicin cardiotoxicity and pharmacological interventions to prevent it.
Dysbetalipoproteinemia is a rare disorder with a strong heritable component characterized by the presence of beta-migrating very low-density lipoprotein (VLDL). The disorder leads to the formation of characteristic yellow skin plaque (xanthomas) and predisposes to early ischemic heart disease and peripheral vascular disease. Research into the genetics and biochemical events underlying the etiology and pathophysiology of the disease is underway in two NHLBI-supported grants. A mutant form of apoprotein E (apo-E2) has been identified as the primary molecular defect in dysbetalipoproteinemia. Animal models synthesizing apo-E variants are being created to facilitate basic research. In FY 2001, animals expressing human apo-E2 were found to have significant increases in the apo-E2 content of VLDL and intermediate-density lipoproteins (IDL). High levels of apo-E2 are accompanied by higher levels of total cholesterol and plasma triglycerides.
Familial Hypercholesterolemia (FH)
FH is an inherited autosomal dominant trait characterized by elevated concentrations of low-density lipoproteins (LDL). Cholesterol derived from LDL is deposited in arteries and causes heart attacks and xanthoma lesions on tendons and skin. The defect in FH is a mutation in the gene specifying the receptor for plasma LDL. The receptors facilitate removal of LDL and, when deficient or absent, the rate of LDL removal is low, resulting in an elevated LDL level. The homozygous form of FH is rare (one in a million), but people who have it are highly prone to premature coronary heart disease. Several NHLBI grants support studies on the biochemistry, genetics, and potential treatment of the disease. A major program project supports research on various aspects of regulating LDL receptors and cholesterol levels in the blood. Genetically-manipulated animal models have been created specifically to study FH. Regulation of LDL receptor activity and other lipoprotein receptors involved in disease progression is being elucidated. Development of apheresis methods for removing excess LDL from plasma is progressing, and testing of a combination of pharmacological agents is being planned.
Familial Hypertrophic Cardiomyopathy (FHCM)
FHCM is associated with myofibrillar disarray in the heart muscle that in turn leads to fibrosis and hypertrophy (enlargement of the heart). Although patients may remain asymptomatic for some time, eventually shortness of breath, palpitations, and heart failure emerge, and sudden death ensues. Some die during childhood whereas others survive to their sixth or seventh decade. FHCM is associated with mutations in more than one protein, suggesting a heterogeneous group of disorders. During the past decade, scientists made significant progress in uncovering the genes associated with FHCM. It is known, for instance, that FHCM can be caused by many different mutations in the contractile proteins that comprise the heart wall. However, understanding of who will die suddenly or whether certain factors, such as high blood pressure or extreme stress, will trigger sudden death remains elusive. NHLBI supports research on the genetic basis and mechanisms involved through several investigator-initiated grants and in two SCORs in Heart Failure.
One SCOR program has demonstrated that simvastatin reverses cardiac hypertrophy and fibrosis in a rabbit model, and losartan reverses fibrosis in a mouse model. This is the first time that any drug has been shown to be effective in an animal model of FHCM. Additionally, these investigators have preliminary findings that spironolactone is equally effective as a treatment, indicating that angiotensin II is involved in fibrosis and hypertrophy formation. An investigator in the second SCOR program has observed that the immunosuppressive drug, cyclosporin A, dramatically exacerbates the hypertrophic response in his mouse model of FHCM and that the calcium channel blocker, diltiazem, prevents this cyclosporin A-mediated response. He is currently assessing the effects of calcium channel blockers on the course of FHCM in mice.
Familial Hypobetalipoproteinemia (FHBL)
FHBL is an apparently autosomal dominant disorder of lipid metabolism characterized by very low levels of apoprotein B-containing lipoprotein cholesterol. One NHLBI-supported project is using genetic, biochemical, and metabolic approaches to study various aspects of the disease. In FY 2001, information gained from newly-identified families with FHBL enabled researchers to markedly narrow down the chromosome region containing the responsible genes. The most promising of the 60 genes in this narrower region are now being sequenced. Also, eight families have been identified that may have a new form of FHBL, since they have a susceptibility region near, but not in, the apoB gene on chromosome 2.
Infectious myocarditis, which affects both children and adults, is an inflammation of the heart muscle that sometimes leads to progressive heart failure and the need for heart transplantation. NHLBI supports both human and animal studies of the disease. The infectious agent, Coxsackievirus B3 (CB3), is believed to be involved in many clinical cases of human myocarditis. One NHLBI-supported investigator is studying both susceptible and resistant strains of CB3 to determine the role of natural killer cells and cytokines components of the innate immune system in the pathology of myocarditis. The presence of certain cytokines in mice indicates that a Th1 immune response is taking place. In male mice, this response appears to be related to increased disease. Another investigator is looking at the pathogenesis of acute rheumatic fever (ARF), a consequence of group A streptococcal bacteria. Here, too, evidence supports a role for a Th1 response in the pathogenesis of the disease. Reovirus-induced myocarditis in mice provides an outstanding tool to investigate non-immune mediated myocarditis. The reovirus (reo is an acronym for respiratory enteric orphan) is a naturally occurring virus that is believed to cause mild infections of the upper respiratory and gastrointestinal tract of humans. Studies have shown that viral RNA synthesis in cardiac myocytes is a determinant of reovirus-induced myocarditis. Furthermore, genetic analysis of reoviruses that cause myocarditis has implicated several specific viral genes.
Klippel-Trenaunay-Weber Syndrome (KTWS)
KTWS is a very rare vascular deformation disease involving capillary, lymphatic, and venous channels. It usually manifests as cutaneous port-wine capillary malformations, varicose veins, and enlargement of soft tissues and bone in one limb. KTWS symptoms are usually present at birth, with 75 percent of patients having symptoms before the age of 10. A molecular approach to characterizing the genes that contribute to KTWS is being taken in an NHLBI-supported grant at the Cleveland Clinic Foundation. The investigator proposes that KTWS pathogenesis involves the disruption of key genes for vascular morphogenesis during embryonic development. He has characterized a KTS translocation involving chromosomes 5 and 11 and identified a novel vascular gene as the strong candidate gene for KTWS.
Long QT Syndrome (LQTS)
LQTS is characterized clinically by a prolonged QT segment on an electrocardiograph and is associated with syncope, ventricular arrhythmias, and sudden cardiac death. This family of related diseases is believed to be caused by alterations in the cardiac cell action potential induced by mutations in at least six cardiac ion channel genes. NHLBI currently supports research on LQTS through a SCOR on Sudden Cardiac Death and through numerous individual grants that address the various molecular, clinical, and genetic bases of the condition. One NHLBI-supported study identified in FY 2001 mutations in the genes encoding the beta adrenergic receptors that may be associated with acquired LQTS, thus indicating a possible role for non-channel proteins in contributing to the development of arrhythmias and sudden death.
Niemann-Pick Type C Disease (NPC)
NPC is an autosomal recessive, lipid-storage disorder that is usually characterized by excessive accumulation of cholesterol in the liver, spleen, and other vital organs. Patients have cardiovascular disease, enlargement of the liver and spleen (hepatosplenomegaly), and severe progressive neurological dysfunction. Biochemical analyses of NPC cells suggest an impairment in the intracellular transport of cholesterol to post-lysosomal destinations. The gene deficiency in Niemann-Pick disease types A and B has been identified as sphingomyelinase. The gene deficiency in types C and D has been identified as the NPC-1 protein, but few clues regarding its potential function(s) have been derived. Two NHLBI grants and a subproject in a SCOR program support research to study the regulation of intracellular cholesterol movement that leads to cholesterol accumulation in NPC disease.
The accumulation of cholesterol in NPC results from an imbalance in the flow of cholesterol among membrane compartments. Characterization of a putative cholesterol sensor in the plasma membrane that affects cholesterol trafficking into or out of cells is underway. The gene deficiency in NPC that encodes a cholesterol-binding protein has been identified. These new data will help fill a major gap in current understanding of cholesterol transport in the cell. Building on our knowledge of how cholesterol gets into lysosomes and what happens after lipid reaches the endoplasmic reticulum (ER), researchers now face the challenge of elucidating how cholesterol gets out of the lysosomes and into the ER.
Smith-Lemli-Opitz Syndrome (SLOS)
SLOS is an inherited autosomal recessive disorder caused by a defect in the enzyme that catalyzes the last step in cholesterol biosynthesis. As a result, endogenous cholesterol synthesis is inadequate to meet biological demands for functions such as membrane structure and bile acid synthesis, and the precursor 7-dehydrocholesterol and its derivatives accumulate. Newborns with SLOS have a distinctive facial dysmorphism; suffer from multiple congenital anomalies including cleft palate, congenital heart disease, genitourinary abnormalities, and malformed limbs; and exhibit severe developmental delays, digestive difficulties, and behavioral problems. The syndrome is thought to account for many previously unexplained cases of mental retardation. During FY 2001 NHLBI supported two investigator-initiated grants whose research foci are relevant to SLOS. One is conducting basic studies in sterol balance and lipid metabolism on 50 infants with SLOS. The study is also investigating the effectiveness of cholesterol-supplemented baby formula in ameliorating some of the behavioral and digestive symptoms of SLOS, and the effectiveness of simvastatin therapy in lowering the plasma concentrations of toxic forms of abnormal cholesterol precursor compounds. Intermediate evaluation of progress indicates that infants tolerate the treatments well. The second grant, which ended in FY 2001, focused on basic analytical chemistry aspects of SLOS in the hope of developing an improved diagnostic test. Diagnostic and screening tests for SLOS are based on the presence of abnormally high levels of certain compounds from the sterol biosynthesis pathway that build up due to a lack of needed enzymes. Improved chemistry methods developed with support from this grant have led to improved separation of these compounds and more accurate determination of their concentrations in blood and other biological fluids, such as amniotic fluid.
Tangier disease is a rare syndrome characterized by a deficiency of high-density lipoprotein (HDL), mild hypertriglyceridemia, neurologic abnormalities, and massive cholesterol ester deposits in various tissues, such as the tonsils. The disease is inherited as an autosomal co-dominant trait and appears to be due to hypercatabolism rather than a defect in HDL synthesis. A member of the ATP-binding cassette (ABC) transporter family, human ABCA1, located on chromosome 9, has been identified as the defective gene. ABCA1 is thought to act as the gatekeeper for eliminating excess cholesterol from tissues and is therefore key in determining the amount of cholesterol accumulating in the artery wall. Research on the cell biology and biochemistry of the human ABCA1 and its role in the disease is underway in two NHLBI-supported studies. In FY 2001, these studies found that unsaturated fatty acids reduce macrophage ABCA1 content by enhancing its degradation rate. Also, ABCA1 was shown to be responsible for the transport of alpha-tocopherol from cells.
The NHLBI intramural Molecular Disease Branch also has been actively studying Tangier disease for a number of years and announced five major findings in FY 2001:
(1) The complete genomic sequence and the regulatory elements modulating gene expression have been determined for the ABCA1 transporter,
(2) ABCA1 transgenic mice have been developed to study the mechanisms involved in the removal of excess cholesterol from cells,
(3) the ABCA1 transporter has been shown to recycle from the cell surface to a late endocytic compartment establishing a new pathway for the transport of intracellular cholesterol to the cell membrane for removal by HDL,
(4) the specific plasma apolipoproteins in HDL which act as acceptors for cholesterol removed from cells mediated by the ABCA1 transporter have been identified and the molecular structural requirements to function as cholesterol acceptors have been elucidated, and
(5) overexpression of the ABCA1 transporter in mice results in a marked decrease in diet-induced atherosclerosis, indicating that the development of drugs to upregulate the expression of the ABCA1 transporter may be a new approach to the treatment of cardiovascular disease.
Lung Diseases Program
Advanced Sleep Phase Syndrome (ASPS)
ASPS is a rare, genetically-based sleep disorder characterized by an early evening onset of sleep, normal sleep duration, and spontaneous early awakening. NHLBI supports basic research to elucidate the neural pathways through which the biological clock mechanism regulates sleep; clinical research to elucidate genetic risk factors; and applied research on the role of the biological clock in disturbed sleep and alertness of shift workers, school-age children, and drowsy drivers.
Alpha-1 Antitrypsin (AAT) Deficiency
AAT deficiency is an inherited deficiency of a circulating proteinase inhibitor that is manufactured primarily in the liver. Deficiency states (circulating serum AAT levels below 0.6 mg/ml) are associated with emphysema, presumably from inadequate protection against enzymatic destruction by neutrophil elastase. Fifteen percent of the AAT-deficient population also develop liver disease. NHLBI funds a variety of clinical and basic research on AAT deficiency, including studies of the molecular mechanisms that impair secretion of AAT, methods of gene therapy delivery, and how to increase the availability of defective, but partially active, AAT. NHLBI-supported investigators are defining the abnormalities and degradation pathways of the AAT protein, characterizing the inflammation that leads to disease in various AAT deficiency states, and evaluating the possibility of treating the disease with drugs that would enhance the release of partially active mutant protein from liver cells. A genetics study of families is seeking to identify other genes that may modify the nature and severity of the disease as expressed in different individuals. In addition to research that specifically focuses on AAT, NHLBI supports related studies addressing the general causation of emphysema; the function, synthesis, secretion, and interaction of the enzymes that are inhibited by AAT; animal models of other enzyme inhibitor deficiencies; gene regulation; gene therapy; cellular signaling, injury, and repair; and protein processing.
Asbestosis, an occupational lung disease, is the interstitial pneumonitis and fibrosis caused by exposure to asbestos fibers. New research findings have improved understanding of the role of genetic susceptibility in lung injury from asbestos. Following asbestos exposure, lung fibroblasts are activated to grow and produce connective tissue. Certain inbred mice, the 129 mouse strain, do not develop asbestos-induced fibrogenesis, whereas other inbred strains do. Studies using growth factors suggest that the resistance to fibrogenesis in the lungs of 129 mice is due to an intrinsic difference in the ability of the lung fibroblasts to respond to growth factors. In mice it was also demonstrated that excess levels of transforming growth factor-beta (TGF-beta) can induce fibrogenesis in the lungs of fibrogenesis-resistant mice, thus providing a clue as to how individual growth factors may contribute to the development of fibroproliferative lung disease.
Bronchopulmonary Dysplasia (BPD)
BPD is a chronic lung disease characterized by disordered lung growth, specifically, changes in cell size and shape and a reduction in the number of alveolar structures available for gas exchange. It affects at least 10,000 very-low-birth-weight infants each year and is associated with neonatal intensive care costs as high as $60,000 per patient. The incidence of BPD has increased in recent years due to the increased survival of smaller premature infants. The NHLBI Collaborative Program for Research in BPD provides a well-characterized primate model for a multi-disciplinary exploration of the etiology of the disease. NHLBI also supports two clinical trials on the role of nitric oxide in preventing and treating chronic lung disease in premature infants. A safety and dosage Phase II clinical trial for intratracheal instillation of the anti-inflammatory uteroglobin, CC10, in premature infants is nearing completion. In FY 2001, reduction of ventilatory injury with Nasal Continuous Positive Airway Pressure (nCPAP) was demonstrated in the pre-term baboon model of BPD; histologic examination revealed thin saccular walls with minimal fibro-proliferation and improvements in internal alveolar surface area. In addition, the NHLBI intramural research program developed new technologies for the prevention of nosocomial pneumonia and ventilator-induced injury that may reduce patient morbidity and mortality in the intensive care unit.
Churg-Strauss syndrome is a rare disorder that was first reported in the 1950s. It is characterized by the formation and accumulation of an abnormally large number of certain white blood cells (eosinophils), inflammation of blood vessels (angiitis or vasculitis), and inflammatory nodular lesions (granulomatosis). Onset typically occurs between 15 and 70 years of age, and the disease affects both males and females. Patients with the syndrome are often affected by asthma. Churg-Strauss syndrome can be severely debilitating, and even fatal if untreated, but patients usually respond well to corticosteroid treatment. Over 90 cases of Churg-Strauss syndrome have been reported in less than 2 years by physicians who had switched asthma patients from corticosteroid therapy to anti-leukotriene therapy. It is unclear whether the increased reports of Churg-Strauss are the result of an untoward effect of the anti-leukotriene therapy or a primary eosinophilic disease that had been clinically recognized and treated as asthma but was uncovered as Churg-Strauss once the corticosteroid therapy was withdrawn. NHLBI does not currently support research specifically investigating Churg-Strauss syndrome; however, it does support numerous investigator-initiated grants studying the basic mechanisms of asthma, including examination of the role of eosinophils. NHLBI also supports clinical studies of severe asthma and of medications used in asthma management, such as anti-leukotriene therapy. An NIH workshop report on the relationship of asthma therapy and Churg-Strauss syndrome was published in the Journal of Allergy and Clinical Immunology in FY 2001.
Congenital Central Hypoventilation Syndrome (CCHS)
CCHS, also known as Ondine's Curse, is a rare disorder characterized by normal breathing while awake but shallow breathing during sleep that is not effective in moving fresh air into the lungs. NHLBI supports a basic research program to elucidate the anatomical and physiological organization responsible for neural rhythm generation and translation into breathing. Research is focused on improving our understanding of how breathing is regulated and the conditions under which reflexive generation of respiratory rhythm is abolished. Identification of the neuronal pathways producing respiratory rhythm and pattern are prerequisite for a full understanding of a variety of respiratory sleep disorders such as CCHS. Recent findings obtained from overnight sleep studies indicate that CCHS is associated with a diminished sensitivity to levels of carbon dioxide in blood during non-rapid eye movement (non-REM) sleep. During REM sleep, other neural drives to breathe appear to supervene to enable adequate ventilation. Genetic and pathological studies of CCHS patients may enable identification of the genes or areas of the central nervous system involved in the syndrome and the abnormalities in ventilation.
Congenital Diaphragmatic Hernia (CDH)
CDH is a developmental disorder that occurs once in every 2,400 births. Often CDH occurs in isolated fashion, i.e., not associated with other life-threatening anomalies or chromosomal aberrations. Affected neonates usually die soon after birth because lung tissue compressed by the herniated viscera is inadequately developed, and hypoplasia of the pulmonary vascular bed leads to pulmonary hypertension or persistent fetal circulation syndrome. For infants who survive this disease, the cost of postnatal care can exceed $100,000. In June 1999, NHLBI awarded a grant for an investigator-initiated clinical trial to test the efficacy of an in utero surgical technique to correct lung hypoplasia as compared to post-natal care in a group of human fetuses at 24-28 weeks gestation in whom the most severe form of congenital diaphragmatic hernia had been identified. Because the group assigned to post-natal care had an approximate mortality of 30 percent, rather than the expected 80 percent, it was concluded that a much larger sample size would be required. The investigators were unable to arrange for the necessary multi-site collaborations, so enrollment had to be terminated in July 2001. The patients already enrolled in the trial continue to be followed.
Cystic Fibrosis (CF)
CF is a multi-system disease characterized by defective transport of chloride and sodium across the cell membrane. It is the nation's number one genetic cause of death among children and young adults. More than 25,000 Americans have CF, with an incidence of about 1 in 3,300 among Caucasians. Ninety percent of persons with CF die from pulmonary complications. The responsible gene, the CF transmembrane conductance regulator (CFTR), was identified in 1989. More than 800 mutations and DNA sequence variations identified in the CFTR gene contribute to the highly variable presentation and course of the disease. NHLBI supports a vigorous program of basic, clinical, and behavioral research focused on the etiology, pathophysiology, and treatment of the pulmonary manifestations of CF. The NHLBI Program in Gene Therapy for Cystic Fibrosis and Other Heart, Lung, and Blood Diseases is focused on overcoming the many barriers to gene therapy for CF, such as vector entry, persistence of expression, selective targeting to the appropriate organ or cell, toxicity of the vector, and host immune response. The program also evaluates potential new pharmacologic therapies. An example of a promising therapeutic strategy being investigated for CF is the screening of compounds that upregulate the chaperone proteins that maintain CFTR in its proper shape to function correctly.
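Because the incidence cited above implies a much larger population of unaffected carriers, the short sketch below converts an incidence of roughly 1 in 3,300 into an approximate carrier frequency under textbook Hardy-Weinberg assumptions (random mating, a single fully recessive disease allele). The calculation, the function name, and the rounding are illustrative and are not drawn from the report.

```python
import math

def carrier_frequency(disease_incidence: float) -> float:
    """Approximate heterozygote (carrier) frequency for an autosomal recessive
    disease under Hardy-Weinberg assumptions: incidence = q**2, carriers = 2*p*q."""
    q = math.sqrt(disease_incidence)  # frequency of the disease allele
    p = 1.0 - q                       # frequency of the normal allele
    return 2.0 * p * q

carriers = carrier_frequency(1 / 3300)  # incidence of CF among Caucasians cited above
print(f"Approximately 1 carrier in every {1 / carriers:.0f} individuals")  # roughly 1 in 29
```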
Lack of understanding of the pathogenesis of CF airways disease reflects, in part, ignorance of the physiology of the airway surface liquids (ASL) that are vital for gas exchange and lung defense in the normal lung. A major scientific advance over the past year has provided definitive evidence for the importance of low ASL volume in the pathogenesis of CF airway epithelial disease, contributing to thickened mucus generated by ASL volume depletion and greater adherence of mucins to the surfaces of the CF airways. Based on these findings, studies are underway to develop therapeutic approaches to normalize ASL volume in CF. In addition, the NHLBI intramural program reported in FY 2001 that diagnostic approaches based on immunological detection of the Pseudomonas aeruginosa type III secretory apparatus and its associated cytotoxins provide evidence for early colonization and/or infection in children with CF.
Idiopathic Pulmonary Fibrosis (IPF)
IPF is a rare chronic lung disease of unknown cause affecting between 3 and 30 individuals per 100,000 population. Individuals with IPF develop abnormal, excessive scarring in the lungs that can cause progressive shortness of breath and coughing. Currently available treatments, most commonly with corticosteroids in combination with other potent drugs, and less commonly with lung transplantation, do little to prevent a relatively rapid death in most patients. NHLBI-supported research on IPF is examining the molecular and cellular events that trigger the inflammation of alveoli seen in the early stages of the disease and that influence progression to the irreversible, fibrotic end stage. Three NHLBI intramural observational clinical research protocols focusing on the natural history and pathogenesis of the disease are open for enrollment of subjects with familial and non-familial forms of IPF. The protocols have established collaborations with extramural sites and are working with the Pulmonary Fibrosis Association and other patient-support organizations to recruit patients. In FY 2001 NHLBI intramural scientists found that aberrant transcriptional control in alveolar macrophages may be a contributing factor in the pathogenesis of IPF.
LAM (lymphangioleiomyomatosis) is a rare lung disease that affects women, usually during their reproductive years. Symptoms develop as the result of proliferation of atypical, non-malignant smooth muscle cells in the lungs. Diagnosis is usually made by lung biopsy. Common symptoms include shortness of breath, cough, and sometimes coughing up blood. Patients often develop spontaneous pneumothorax or chylous pleural effusion (collapse of the lung or collection of milky-looking fluid around the lung). The clinical course of LAM is quite variable, but is usually slowly progressive, eventually resulting in death from respiratory failure. Although no treatment has been proven effective in halting or reversing LAM, lung transplantation is a valuable treatment for patients with end-stage lung disease. Some patients with tuberous sclerosis complex (TSC), a genetically transmitted disease, develop lung lesions identical to those seen in LAM. In some cases, the clinical distinction between TSC and LAM may be difficult.
NHLBI supports research on LAM in both its intramural and extramural programs. As part of the intramural program, the Institute has established a research laboratory at the NIH Clinical Center to learn more about the cause and progression of LAM at the clinical, cellular, and molecular levels. Researchers are determining the characteristics of the unusual smooth muscle cells that damage the lungs of LAM patients. An important aspect of the research is learning how growth is regulated in these cells.
The NHLBI extramural program supports a national LAM Patient Registry that is coordinated by the Cleveland Clinic Foundation. The Office of Research on Women's Health co-funds the registry with NHLBI. The LAM Registry began enrolling patients in the summer of 1998. Enrollment closed in September 2001 with 253 LAM patients recruited. The Registry is helping to manage the collection and distribution of LAM tissue for current LAM projects, as well as serving as a repository of LAM tissue for future research.
During FY 2001 additional progress was made in understanding the genetic mechanisms leading to smooth muscle proliferation in LAM and the relationship between LAM and TSC. Previously it was reported that mutations in the TSC2 gene can cause pulmonary LAM. More recently it was shown that in an individual patient the same types of mutations occur in cells taken from LAM lesions in the lungs and in cells taken from kidney tumors, known as angiomyolipomas. This suggests that the cells in the lung and the kidney have a common genetic origin. This discovery may lead to new diagnostic and therapeutic strategies for women with LAM. Also in FY 2001, the NHLBI intramural program reported that data from its ongoing study of the natural history of LAM have established clinical, pathological, physiological, and genetic criteria that define disease severity and progression.
Narcolepsy is a disabling sleep disorder affecting over 100,000 people in the US. It is characterized by excessive daytime sleepiness and rapid onset of deep (REM) sleep. Other symptoms involve abnormalities of dreaming sleep, such as dream-like hallucinations and transient periods of physical weakness or paralysis (cataplexy). Through programs such as the SCOR in Neurobiology of Sleep and Sleep Apnea, NHLBI supports research on the regulation of sleep and wakefulness, the regulation of muscle tone during sleep, and the genetic basis of narcolepsy in humans and animals. One new study finds that low cerebrospinal fluid levels of hypocretin, a neurochemical messenger linking sleep with the regulation of muscle tone, are highly specific to narcolepsy and could potentially have utility as a diagnostic procedure. Another study has determined that hypocretin is an excitatory chemical in brain regions regulating sleep.
Persistent Pulmonary Hypertension of the Newborn (PPHN)
PPHN affects approximately 1 in 1,250 live born term infants. Due to inappropriate muscularization of fetal pulmonary vessels, the lung arteries of affected newborns fail to dilate after birth to allow for normal blood flow through the lung. Such infants are poorly oxygenated and require costly and prolonged medical care including intubation of the airway, inhalation of 100 percent oxygen, mechanical ventilation, and, often, heart/lung bypass (extracorporeal membrane oxygenation). One NHLBI SCOR on the Pathobiology of Lung Development is focused on the unique vascular response of the neonate to injurious stimuli with a view toward identifying the basic molecular mechanisms involved in the development of the vasculature. Such research may provide information for the treatment of hypertensive pulmonary disorders such as PPHN. Enrollment began in late 1999 for a clinical study that will address maternal risk factors such as cigarette smoking and antenatal exposure to the non-steroidal anti-inflammatory drugs aspirin and ibuprofen. Experimental evidence consistently suggests that maternal exposure to these agents plays a role in the etiology of the disorder. Buccal cell specimens are being collected and stored for future genetic analyses should a relationship be demonstrated.
Inhaled nitric oxide (NO) is an experimental therapy that offers promise for less invasive treatment of PPHN. Recent studies point to a critical role for endogenous NO as a modulator of the levels of vasoactive mediators that determine pulmonary vascular tone and reactivity. There are three known isoforms of NO synthase (NOS) in mammals, and all are developmentally regulated in the fetal lung. Recent work with a premature baboon model of BPD has demonstrated a decline in two NOS isoforms, nNOS and eNOS, during the genesis of chronic lung disease. Other investigators report a bi-phasic release of NO in response to shear stress during development. These findings suggest that NO plays an important role during lung development.
Primary Pulmonary Hypertension (PPH)
PPH is a rare, progressive lung disorder characterized by a sustained elevation of pulmonary artery pressure. It is associated with structural changes in the small pulmonary arteries and arterioles resulting in resistance to blood flow. The process eventually leads to an enlarged, overworked right ventricle that is unable to pump enough blood to the lungs resulting in heart failure and death, usually within 3-5 years of initial diagnosis. Estimates of the incidence of PPH range from 1 to 2 per million, with women being predominantly affected. Approximately 6-10 percent of cases are familial PPH, a form inherited as an autosomal dominant trait. NHLBI supports basic research on the cellular and molecular events underlying the pathogenesis of PPH. The dominant themes of this research are: (a) isolation and characterization of a familial PPH gene, (b) better understanding of the structural aspects of the disease that cause proliferative and obliterative changes in the vasculature, (c) identification of genetic factors that affect functional and structural changes in the vasculature, (d) development of preclinical markers, and (e) identification and evaluation of more effective treatments.
In November 2001, the FDA approved a new drug, Bosentan (Tracleer), for the treatment of PPH. Bosentan is the first oral treatment approved for PPH, and is also the first in a new class of drugs, known as endothelin receptor antagonists, to be available commercially. Endothelin, a potent vasoconstrictor that also stimulates growth of vascular cells, is present in high concentrations in the bloodstream of patients with PPH. Results of a small (32-patient) double-blind, placebo-controlled study suggest that Bosentan, which acts by blocking endothelin receptors, increases exercise capacity and improves heart function in patients with pulmonary hypertension. Future trials should help clinicians better define the place of this new class of agents in the therapy of pulmonary hypertension. Larger studies will be needed to address important issues such as improvement in survival and their potential use in severely ill patients who are receiving prostacyclin therapy. Other recent work suggests that levels of circulating endothelin may serve as prognostic markers for patients with PPH and as a tool for the selection of patients who may benefit from treatment with endothelin receptor antagonists.
Combination therapy with multiple agents is another approach beginning to be applied to the treatment of PPH. Investigators are exploring, for example, the use of an oral phosphodiesterase inhibitor (sildenafil) as a therapeutic adjunct to inhaled iloprost. In very preliminary studies, sildenafil caused a long-lasting reduction in pulmonary artery pressure and pulmonary vascular resistance, with a further additional improvement after iloprost inhalation. Similarly, a small pilot study of iloprost inhalation combined with epoprostenol treatment in patients who had adverse effects during treatment with epoprostenol showed that the combination therapy significantly reduced pulmonary artery pressure and improved cardiac index and other indicators of cardiopulmonary function. These findings suggest that combined therapies may be useful in improving treatment of PPH.
The discovery last year of a genetic cause of PPH has opened up a host of opportunities for research into the etiology and pathogenesis of PPH. New findings this year suggest that there is considerable heterogeneity in mutations of a gene (BMPR2) that is associated with PPH. Additional factors, genetic and/or environmental, may be required for development of the clinical phenotype. Other PPH genes remain to be identified, since only approximately half of the PPH families have BMPR2 mutations. Data suggest that many cases of apparently sporadic PPH may in fact be familial, as failure to detect familial PPH is complicated by incomplete expression within families, skipped generations, and insufficient family pedigrees.
Recent findings on the pathogenesis of PPH indicate that endothelial cells within plexiform lesions of patients with PPH have genetic alterations associated with genetic (microsatellite) instability and abnormal growth and gene expression similar to that seen in neoplasia. Other studies suggest that the disorganized growth of endothelial cells in plexiform lesions from PPH patients involves disordered angiogenesis thus allowing for the expansion of endothelial cells. Pulmonary smooth muscle cells from patients with PPH have recently been reported to show abnormal responses to cell growth signaling pathways, and a recently published study from researchers in France is providing evidence for a link between abnormal expression of a serotonin transport protein and abnormal proliferation of vascular smooth muscle in PPH patients.
Sarcoidosis is a disease involving organ systems throughout the body, in which normal tissue is invaded by pockets of inflammatory cells called granulomas. Most sarcoidosis patients have granulomas in their lungs. The disease can exist in a mild form that spontaneously disappears or in a severe form that results in a life-long condition. Estimates of the number of Americans afflicted range from 13,000 to 134,000, and between 2,600 and 27,000 new cases appear each year. As many as 5 percent of individuals with pulmonary sarcoidosis die of causes directly related to the disease. The morbidity associated with the disease can be severe and result in significant loss of function and decrease in quality of life. The causes of sarcoidosis are presently unknown, but disease development is thought to involve the victim's immune system. NHLBI supports laboratory-based research to investigate granuloma formation and to obtain a better understanding of initiating events, the disease process, and the contribution of susceptibility genes.
A multi-center NHLBI study conducted from 1996 to 1999 found that sarcoidosis patients were almost five times more likely than controls to report a sibling or parent with a history of sarcoidosis. White sarcoidosis cases were much more likely to have an affected relative than were African-American cases. However, the investigators found that even for family members (siblings and parents) the risk of sarcoidosis is small, about one percent, and therefore concluded that increased surveillance is probably not warranted. They also found that sarcoidosis appears to increase the risk of depression.
Blood Diseases and Resources Programs
Aplastic Anemia (AA) and Paroxysmal Nocturnal Hemoglobinuria (PNH)
AA is a form of bone marrow failure in which hematopoietic cells are replaced by fat, resulting in low blood counts. In PNH, a clone derived from a single hematopoietic stem cell expands, leading to marrow failure, red blood cell destruction, and venous thrombosis. The NHLBI intramural Hematology Branch has a large clinical and laboratory program devoted to bone marrow failure syndromes, including AA and PNH. Bench studies include immunology, cell biology, virology, and molecular biology approaches to the failure to produce blood cells. Clinical studies include therapeutic interventions to reduce autoimmunity in patients with AA. In FY 2001 the branch established an animal model of immune-mediated AA, showing the pivotal role of type 1 cytokines in causing severe marrow cell destruction. In addition, it completed analysis of its large trial of immunosuppression in severe AA and found that early robust improvement in blood counts is highly predictive of long-term survival without malignant evolution.
Cooley's anemia (also called beta-thalassemia, thalassemia major, or Mediterranean anemia) is a genetic blood disease that results in an inadequate production of hemoglobin. Individuals affected with Cooley's anemia require frequent and lifelong blood transfusions. Because the body has no natural means to eliminate iron, the iron contained in transfused red blood cells builds up over many years and eventually becomes toxic to tissues and organ systems. In addition, many affected children acquire other diseases such as hepatitis through years of transfusion exposure.
NHLBI's extramural research efforts related to Cooley's anemia include (a) identification of mutations in the globin gene cluster that lead to the disorder, (b) elucidation of the mechanisms and therapeutic approaches associated with naturally occurring mutations that result in elevated levels of fetal hemoglobin (Hb F) in adult red blood cells, (c) iron chelation, (d) clinically-useful therapies and drugs, including gene therapy, (e) efficient identification and targeting of hematopoietic stem cells, (f) how ex vivo manipulation of stem cells alters their biologic properties, and (g) improved vectors for use in gene transfer efforts. The Institute's strategic approach also includes a clinical research network to test new therapies and a program of sibling donor cord blood banking and transplantation for hemoglobinopathy families.
FY 2001 witnessed a number of important scientific advances for Cooley's anemia. New methods of transfusion therapy are being developed to improve adherence to deferoxamine regimens for patients receiving chronic transfusions. Less toxic methods of stem cell transplantation are being developed that may be useful for patients with thalassemia. For example, the NHLBI intramural program is working on a vaccine to prevent cytomegalovirus reactivation after stem cell transplantation using a CMV pp65 protein canarypox construct. Finally, several compounds that increase Hb F values have been described. They are hydroxyurea, which is a compound in routine use in sickle cell disease, a number of butyrate-based compounds, and 5-azacytidine.
Creutzfeldt-Jakob Disease (CJD)
CJD is a slowly degenerative, invariably fatal, rare disease of the central nervous system, characterized by motor dysfunction, progressive dementia, and vacuolar degeneration of the brain. The disease has been associated with a transmissible agent. A protease-resistant protein, or prion, is the hallmark of the transmissible spongiform encephalopathies (TSE), the family of diseases to which CJD belongs. Classical CJD occurs worldwide at a rate of 1-2 cases per million per year. The lack of a rapid, sensitive, and specific test for TSE infectivity has slowed progress in the study and control of CJD and other prion diseases. The development of assay systems to detect prion diseases is a high priority in public health. It could form the basis of a blood/tissue donor screening test, and provide a diagnostic test for neurologists; there is currently no way of detecting the disease in its pre-clinical stage. These assays could also be useful in testing for TSE in animals, especially domestic animals used for human consumption. In FY 2001, NHLBI-supported investigators reported that mouse skeletal muscle can propagate prions and accumulate substantial titers of them. Because significant dietary exposure to prions might occur through the consumption of meat, even if it is largely free of neural and lymphatic tissue, a comprehensive effort to map the distribution of prions in the muscle of infected livestock is needed. Furthermore, muscle may provide a readily biopsied tissue that can be used to diagnose prion disease in asymptomatic animals and even humans.
Fanconi Anemia (FA)
FA is an autosomal recessive bone marrow failure syndrome characterized by a decrease in blood cells and platelets (pancytopenia), developmental defects, and cancer susceptibility. Many FA patients can be identified at birth because of congenital anomalies, although approximately 25 percent do not have birth defects. FA is a clinically heterogeneous disorder; it can currently be divided into at least eight different complementation groups designated A through G. Delineation of the interrelationship of the FA proteins and their functions through localization and functional studies is a high priority research area for NHLBI. In addition, NHLBI supports research that focuses on identifying and cloning the remaining FA genes, developing protocols for efficient identification and targeting of hematopoietic stem cells, obtaining information on how ex vivo manipulation of stem cells alters their biologic properties, producing improved vectors, and exploring the utility of cord blood banking. Two FA genes, FAC and FAA, that account for an estimated 75 percent of all FA patients worldwide, have been cloned. The cellular localization of the functional complex and the role of the complex in DNA repair and prevention of mutagenesis have been exciting developments over the past year. Recent transplantation protocols using fludarabine have provided new hope that stem cell transplantation may be a therapeutic option for patients with FA.
Acute graft-versus-host disease (GvHD) is a condition that typically occurs within 3 months after allogeneic hematopoietic stem cell transplantation. Donor T cells react against foreign tissue antigens in the recipient. GvHD is characterized by skin rash, liver dysfunction, vomiting, and diarrhea. Acute GvHD often precedes development of chronic GvHD, which may require years of treatment with immunosuppressive drugs. NHLBI supports basic and clinical research grants focused on understanding the pathophysiology of GvHD, especially in unrelated transplants. The NHLBI program emphasizes understanding of the roles of both major and minor histocompatibility antigens in disease pathogenesis, development of tolerance, function of donor T cells in allogeneic hosts, and mechanisms of GvHD prevention including depletion of donor T cells from the graft. Current studies attack the problem of GvHD from several directions: the variables that affect its induction and severity; the effector mechanisms; and whether GvHD can be suppressed while other necessary immune responses are maintained. The program supports two multi-center clinical studies: the Unrelated-Donor Marrow Transplant Trial of T-cell Depletion and the Cord Blood Banking and Transplantation Study. The Blood and Marrow Clinical Transplant Network was funded in FY 2001 to conduct Phase III trials including studies of GvHD. To date, older or sicker patients have been excluded from allogeneic hematopoietic cell transplantation (HCT) because of toxicities from the treatment regimen. Recently, investigators have developed a less toxic regimen, based on the use of postgrafting immunosuppression to control graft rejection and GvHD, that has dramatically reduced the acute toxicities of allografting. Now HCT with the induction of potent graft-versus-tumor effects can be performed in previously ineligible patients, largely in an outpatient setting. Finally, the NHLBI intramural research program in FY 2001 described how the alloimmune environment reshapes the immune response of the donor after stem cell transplantation by identifying innate T cell responses to known and putative tumor specific antigens.
Hemophilia is a hereditary bleeding disorder that results from a deficiency in either blood coagulation factor VIII or factor IX. There are about 20,000 hemophiliacs in the US, all of whom are dependent on lifelong treatment to control periodic bleeding episodes. NHLBI supports a broad spectrum of activities on blood coagulation and its disorders. Research on hemophilia includes viral and non-viral approaches for gene therapy, mechanisms of antibody inhibitor formation, modification of factors for improved therapeutics, safety of plasma derived products, and blood product associated infections. In addition, basic genetic, molecular biology, and protein biochemistry studies of factors VIII and IX are supported to improve understanding of the mechanisms of action and regulation of these critical coagulation proteins. One program project studies multiple approaches to developing gene-based therapies for hemophilia A and B, and another studies new therapies that can be used in the presence of inhibitory antibodies.
Immune Thrombocytopenic Purpura (ITP)
ITP is an autoimmune disease manifested by production of antibodies that react with specific proteins on the surface of platelets. The reaction results in rapid clearance or destruction of platelets (thrombocytopenia) and clinically-significant bleeding. The underlying cause is unknown, but the disease is associated with other autoimmune disorders. Although ITP may occur at any age, acute (temporary) thrombocytopenic purpura is most commonly seen in young children. About 85 percent of affected children recover within 1 year and experience no recurrence. Thrombocytopenic purpura is considered chronic when it lasts more than 6 months. Its onset may occur at any age. Adults more often have the chronic disorder and females are affected two to three times more often than males. Most adult patients respond at least transiently to standard therapies including steroids and splenectomy, but a majority eventually relapse and some develop very severe chronic refractory ITP.
Part of the NHLBI research program on thrombosis and hemostasis is directed toward understanding the biology of platelet production from megakaryocytes, the function of the growth factor thrombopoietin (TPO), and the structure and function of platelet surface glycoprotein antigens. Studies on TPO have not borne out the initial promise of this therapeutic strategy. While mice with the TPO gene knocked out maintained a basal level of circulating platelets and did not bleed, a number of human subjects with thrombocytopenia who received TPO developed antibodies to the protein and their clinical conditions worsened. The investigators concluded that TPO is an amplification factor, but it may not be essential for megakaryocytopoiesis and platelet production. On the other hand, migration of the bone marrow megakaryocytes to a more permissible environment for platelet production could be a critical factor. In another development, a monoclonal antibody, rituximab, directed to B-lymphocytes for the treatment of cancer, was found in initial studies to be beneficial for patients with ITP.
Lymphedema is an accumulation of lymphatic fluid in interstitial tissue that causes swelling, most often in the arm(s) and/or leg(s), and occasionally in other parts of the body. Lymphedema can develop when lymphatic vessels are missing or impaired (primary or congenital), or when lymph vessels are damaged or lymph nodes removed (secondary). When the impairment becomes so great that the lymphatic fluid exceeds the lymphatic transport capacity, an abnormal amount of protein-rich fluid collects in the tissues of affected areas. Left untreated, this stagnant, protein-rich fluid not only causes tissue channels to increase in size and number, but also reduces oxygen availability in the transport system, interferes with wound healing, and provides a culture medium for bacteria that can result in a lymphangitis infection. The incidence of primary lymphedema has been estimated to be between 1 in 6,000 and 1 in 300 live births, so it may be a rare disease, or it may be a more common disease that predisposes to the secondary type and is underrecognized. NHLBI investigator-initiated projects are seeking to identify the developmental, molecular, and cellular defects that contribute to lymphedema and are seeking to design effective therapeutic interventions to treat both primary and secondary lymphedemas. In December 2000, NHLBI issued a Program Announcement (PA) inviting applications to study the pathogenesis and treatment of lymphedema.
Sickle Cell Disease (SCD)
SCD is an inherited blood disorder that is most common among people whose ancestors come from Africa, the Middle East, the Mediterranean basin, or India. SCD is the most common genetic blood disorder in the U.S., affecting approximately 1 in 500 African-American newborns and 1 in 1,000 Hispanic newborns. It occurs when an infant inherits the gene for sickle hemoglobin from both parents or the gene for sickle hemoglobin from one parent and the gene for another abnormal hemoglobin from the other parent. In patients with the disease, the hemoglobin molecules in the red blood cells (RBCs), which carry oxygen throughout the body, tend to damage the RBC walls, causing them to stick to blood vessel walls. This leads to the painful sickle cell episodes that are the hallmark of the disease. Chronic end-organ damage occurs to the brain, lungs, kidneys, spleen, and liver, and leads to premature death, with the median age at death for severely affected individuals occurring between 42 and 48 years.
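As a concrete illustration of the inheritance pattern described above, the sketch below enumerates a simple Punnett square for two parents who each carry one sickle beta-globin allele (sickle cell trait). The allele labels and the resulting 1-in-4 risk per pregnancy follow from basic Mendelian genetics and are not figures reported by NHLBI.

```python
from itertools import product
from collections import Counter

# Each carrier parent contributes either a normal (A) or a sickle (S) beta-globin allele.
parent_1 = parent_2 = ["A", "S"]

# Enumerate the four equally likely allele combinations (a Punnett square).
offspring = Counter("".join(sorted(pair)) for pair in product(parent_1, parent_2))

for genotype, count in sorted(offspring.items()):
    label = {"AA": "unaffected", "AS": "sickle cell trait", "SS": "sickle cell disease"}[genotype]
    print(f"{genotype} ({label}): {count}/4")
```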
NHLBI's current sickle cell disease portfolio includes research on the following topics: (a) development of methods for gene transfer and gene replacement in the hematopoietic stem cell; (b) characterization of interactions between sickle cells and the vascular endothelium; (c) improved understanding of hemoglobin gene switching to allow increased production of fetal hemoglobin; (d) a Phase III clinical trial of hydroxyurea in children with sickle cell anemia to determine if hydroxyurea can prevent the onset of chronic end organ damage; (e) an epidemiologic study of the incidence of parvovirus B19 seroconversion in children with sickle cell disease; (f) an epidemiologic study of the adult patients who participated in the Multicenter Study of Hydroxyurea Trial; (g) a study of non-myeloablative preparative regimens for bone marrow transplantation leading to mixed chimerism as curative therapy for severely affected SCD patients; and (h) a study of sibling cord blood banking and transplantation of cord blood derived stem cells to cure severely affected sickle cell disease patients. In addition, the NHLBI intramural program continued an ongoing seroconversion study of B19 parvovirus in sickle cell anemia patients in preparation for a recombinant vaccine trial of baculovirus-engineered empty capsids.
Progress in SCD research in FY 2001 was highlighted by a report of a gene therapy cure of the transgenic mouse model of sickle cell disease. Investigators at the Massachusetts Institute of Technology and the Albert Einstein College of Medicine announced the insertion of a beta hemoglobin A gene variant into hematopoietic stem cells in two transgenic SCD mouse models (Berkeley and SAD). The animals were able to produce the corrected hemoglobin cells for up to 10 months, with associated correction of hematologic parameters and splenomegaly, and prevention of the urine-concentrating defect. This experiment paves the way for additional animal studies and ultimately human clinical trials to find a safe way to neutralize the abnormal blood-producing gene before the introduction of gene-therapy-treated blood-producing cells.
Systemic Lupus Erythematosus (SLE)
SLE or lupus is an autoimmune disorder in which the body produces antibodies that harm its own cells and tissues. Typical symptoms of SLE are fatigue, arthritis, fever, skin rashes, and kidney problems. Lupus affects more women than men. Patients with SLE have a higher incidence of thrombosis and spontaneous loss of pregnancy. Its cause is unknown and there is no known cure, but the symptoms can be controlled with appropriate treatment and most patients can lead an active life. As part of its broad program of research in hemostasis and thrombosis, NHLBI is supporting studies on the development of diagnostic tests in pregnant women with SLE. Recent studies suggest that circulating antibodies in lupus patients compete with a protein, annexin V, that forms an anti-thrombotic shield in the placenta. The result is that procoagulant phospholipids remain exposed, cause thrombosis in the vessels of the placenta, and lead to fetal loss.
Thrombotic Thrombocytopenic Purpura (TTP)
TTP is a potentially fatal disease characterized by low blood platelet levels and widespread platelet thrombi in arterioles and capillaries. Both endothelial cell damage and intravascular platelet aggregation have been suggested in the pathogenesis of TTP. Microscopic examination of thrombi has revealed the presence of abundant von Willebrand factor (vWf), a plasma protein. An interaction between vWf and the platelet surface glycoprotein Ib complex (GPIb) is believed to be essential for the formation of a thrombus. vWf is synthesized as large polymers and is then cleaved into smaller units by a plasma protease. NHLBI grantees have confirmed the presence of inhibitory antibodies to this enzyme in the plasma of some patients with TTP. Inhibition of the enzyme results in large multimers of vWf in plasma that can spontaneously aggregate platelets. The NHLBI grantees succeeded in isolating a new metalloprotease, ADAMTS 13, the vWf-cleaving enzyme, and established that mutations in the gene that encodes ADAMTS 13 are the genetic basis of familial TTP. Efforts are being made to produce recombinant ADAMTS 13.
Rare Disease Research Initiatives
Adult Hydroxyurea Patient Follow-up Study (aka: Multicenter Study of Hydroxyurea in Sickle Cell Anemia (MSH) Patients)
Initiatives Started in 2001
Blood and Marrow Transplant Clinical Research Network
A new RFA, initiated by NHLBI and cosponsored by NCI, organizes a network to accelerate research on the management of hematopoietic stem cell transplantation, standardize existing treatments, and evaluate new ones. The network of 14 interactive clinical centers and a data coordinating center provides a coordinated, flexible mechanism over a maximum period of 10 years to accept ideas and build consensus from the transplant community; develop protocols; expeditiously perform multi-center Phase II and Phase III clinical trials; provide information to physicians, scientists, and the public; and, in turn, improve stem cell transplantation therapy for diseases such as leukemia, sickle cell disease, thalassemia, and Fanconi anemia.
Genetic Aspects of Tuberculosis in the Lung
A new RFA, initiated by NHLBI and cosponsored by NIAID, stimulates research on the genetic aspects of tuberculosis in the lung, exploiting advances in molecular biology and genomics research. Special attention is paid to the interaction between host and microbial genes and to the identification of genes or families of genes that determine virulence or latency, or that determine reactivation of disease or resistance to antituberculous drugs. Of particular interest are studies using new biotechnologies such as microarrays, molecular beacon technology, or differential signature-tagged mutagenesis (DSTM), and studies involving innovative collaborations with computational biologists to identify genes that mediate the pathogenesis of tuberculosis and elucidate the responsible mechanisms. To encourage junior-level quantitative biologists to work on the genetic aspects of tuberculosis, the Mentored Quantitative Research Career Development Award (K25) has been included as one of the mechanisms of support.
Genetic Modifiers of Single Gene Defect Diseases
A new RFA, initiated by NHLBI and cosponsored by NIDDK, encourages studies to identify and characterize the genes responsible for modifying the clinical progression and outcomes of heart, lung, and blood diseases due to single gene defects. Examples of such single gene defect diseases are cystic fibrosis; sickle cell disease; hemophilia; alpha-1-antitrypsin deficiency; glucocorticoid-remediable aldosteronism (GRA); Liddle's syndrome; and cardiac myopathies, dysplasias, and arrhythmias that result in sudden cardiac death. The modifier genes are likely to encode a wide variety of proteins that either interact directly with the disease gene, influence pathways involving the disease gene, or affect metabolic processes altered as a result of the disease gene defect. Identification of the genes responsible for these differences should lead to a better understanding of disease pathogenesis, earlier diagnosis, and improved treatment.
Novel Approaches to Enhance Animal Stem Cell Research
A new PA, co-sponsored by ten institutes, encourages studies to isolate, characterize, and identify totipotent and multipotent stem cells from nonhuman biomedical research animal models and generate reagents and techniques to characterize and separate them from other cell types. The PA stresses innovative approaches to the problems of making multipotent stem cells available from a variety of nonhuman sources, and innovative approaches to creating reagents that will identify them across species and allow for separation of multipotent stem cells from differentiated cell types.
Pathogenesis and Treatment of Lymphedema
A new PA, initiated by NHLBI and co-sponsored by NICHD, NIAMS, and NCI, encourages investigation of the pathogenesis of, and new treatments for, primary and secondary lymphedema. The PA seeks to stimulate research on the biology of the lymphatic system, to characterize, at the molecular, cellular, tissue, organ, and intact-organism levels, the pathophysiologic mechanisms that cause the disease, and to discover new therapeutic interventions. Such knowledge will help to improve early diagnosis of affected individuals, the choice and timing of treatment, and genetic counseling.
Pediatric Heart Disease Clinical Research Network
A new RFA establishes a network of interactive pediatric clinical research centers to promote the efficient evaluation of new treatment methods and management strategies that offer potential benefit for children with structural congenital heart disease, inflammatory heart disease, heart muscle disease, and arrhythmias. Therapeutic trials and studies involve investigational drugs and drugs already approved but not currently used, as well as devices, interventional procedures, and surgical techniques. The network approach, consisting of five to six clinical centers and a data coordinating center, is an effective, flexible way to study adequate numbers of patients with uncommon diseases such as congenital cardiovascular malformations. Efficiencies are achieved through standardizing procedures to recruit, characterize, monitor, and follow up patients. Approximately 2,000 patients are expected to participate in 6-12 different protocols over the 5-year project period. The network also serves as a platform to train junior investigators in pediatric clinical research and as a vehicle for rapid and widespread dissemination of findings.
Initiatives Planned for the Future
Animal Models of Antigen-Specific Tolerance for Heart and Lung Transplantation
A new PA in FY 2002 will encourage the development of large animal models of antigen-specific tolerance induction for heart and lung transplantation and small animal models of tolerance induction for lung transplantation. Development of stable immune tolerance between donor and recipient would decrease morbidity and mortality due to chronic rejection, toxic effects of immunosuppressive therapy, and graft versus host disease.
Chemical Screens for New Inducers of Fetal Hemoglobin for Treatment of Sickle Cell Disease and Cooley's Anemia
A new PA in FY 2002 will support high-throughput chemical activity screens for new pharmacologic inducers of fetal hemoglobin, with the long-term objective of developing better drugs to treat sickle cell disease and Cooley's anemia. The screens should include but not be limited to compounds in the short chain fatty acid and carbonic acid classes. Promising compounds identified through these Small Business Innovation Research (SBIR) grants will later be subjected to toxicology and pharmacokinetic testing in primates.
Heritable Disorders of Connective Tissue
A new RFA, to be initiated by NIAMS and to be co-sponsored by NHLBI in FY 2002, will promote research on heritable disorders of connective tissue caused by abnormalities in the molecules involved in the biosynthesis, processing, and degradation of structural macromolecules, as well as abnormalities in regulatory and signaling molecules that reside within the extracellular matrix. This initiative should increase understanding of, and lead to novel therapeutic strategies for, the Marfan and Ehlers-Danlos syndromes, diseases that involve alterations of the integrity of the connective tissue compartments within the wall of the blood vessel and the subsequent formation of aneurysms in the aorta and smaller arteries.
Multicenter Study of Hydroxyurea in Sickle Cell Disease: Patient Follow-Up Extension I
A renewal of an RFP in FY 2002 will continue the follow-up study of the 299 adult patients who participated in the Multicenter Study of Hydroxyurea in Sickle Cell Disease (MSH Trial) from 1992 to 1995 in order to ascertain the long term toxic effects of hydroxyurea usage in this patient population. The 240 patients known to be alive will be followed annually for five additional years at the 21 MSH clinical centers to determine health status, quality of life, incidence of malignancies, and birth defects in their offspring. Mortality rates will be compared to the mortality data from the Cooperative Study of Sickle Cell Disease adult cohort and the normal African-American population. In addition, long term efficacy of hydroxyurea will be estimated in terms of its effects on fetal hemoglobin levels (Hb F), blood cell counts, and selected organ function.
Plasticity of Human Stem Cells in the Nervous System
A new PA, to be co-sponsored by NHLBI and three other institutes in FY 2002, will encourage studies on the plasticity and behavior of human stem cells and the regulation of their replication, differentiation, and function in the nervous system. Because of their ability to generate neurons and glia, stem cells are promising candidates for the development of cellular and genetic therapies for neurological disorders, including neuroregulatory problems in heart, lung, and blood diseases, and sleep disorders. Studies will be encouraged to confirm, extend, and compare the behavior of human stem cells that are derived from different sources and ages or exposed to different regimes in vitro and in vivo. In addition, studies will be encouraged to develop methods for identifying, isolating, and characterizing specific human precursor populations at intermediate stages of differentiation into neurons and glia.
Stem Cell Plasticity in Hematopoietic and Non-Hematopoietic Tissue
A new RFA, to be initiated by NHLBI and to be co-sponsored by NIDDK and NINDS in FY 2002, will encourage studies to elucidate and characterize the molecular and cellular mechanisms that influence stem cell plasticity or versatility. Stem cells are the most primitive cells in the bone marrow from which all the various types of blood cells are derived. Studies are needed to identify genes responsible for the maintenance of "stemness" and genes responsible for initiating and/or maintaining the development of specific cell types. Human adult stem cells could potentially be exploited to become more embryonic-like and therefore useful for drug screening, replacement of diseased or injured tissue, and gene therapy.
Transactivation of Fetal Hemoglobin Genes for Treatment of Sickle Cell Disease and Cooley's Anemia
A new RFA, to be initiated by NHLBI and co-sponsored by NIDDK in FY 2002, will encourage studies to identify the transcriptional regulatory proteins involved in fetal hemoglobin gene activation, determine their mechanisms of action and the induction mechanisms of the structural genes encoding the regulators, and identify drugs that induce fetal hemoglobin via action on the regulators. A better understanding of the molecular basis of fetal hemoglobin gene regulation, and of fetal to adult hemoglobin isoform switching in development, will facilitate the development of new approaches to cure beta-chain hemoglobinopathies such as SCD and Cooley's anemia.
Transfusion Medicine/Hemostasis Clinical Research Network
A new RFA will establish in FY 2002 a network of interactive clinical research groups to promote the efficient comparison of new management strategies of potential benefit for children and adults with hemostatic disorders and also to evaluate new as well as existing blood products and cytokines for the treatment of hematologic disorders. Hemostasis, the arrest of bleeding from an injured blood vessel, requires the combined activity of vascular, platelet, and plasma factors counterbalanced by regulatory mechanisms to limit the accumulation of platelets and fibrin in the area of injury. Hemostatic abnormalities may be congenital; immune-mediated, such as ITP and TTP; or due to coagulopathies resulting from chemotherapy, surgery, or trauma and can lead to excessive bleeding or thrombosis. The network will consist of a Data Coordinating Center and up to sixteen core clinical centers to perform multiple clinical trials.
Cell-Based Therapies for Heart, Lung, Blood, and Sleep Disorders and Diseases
A new RFA will encourage in FY 2003 basic research on stem cell biology and on the use of stem cells in cellular therapies for the treatment of cardiovascular, lung, blood, and sleep disorders and diseases. Because of their plasticity, adult, embryonic, and fetal stem cells hold great potential for use in new strategies to regenerate and repair damaged or diseased cardiovascular, lung, and blood tissues, and for sleep disorders. Areas supported would include the basic biology and characterization of embryonic, fetal, and adult stem cells and progenitor cells important for heart, lung, blood and sleep disorders; the use and differentiation of stem and progenitor cells for cell transplantation; stem cell homing to sites of tissue injury or specific tissue or organ sites, including the mechanisms underlying the homing process; and tissue engineering using stem or progenitor cells.
Comprehensive Sickle Cell Centers
A renewal of an RFA will continue in FY 2003 the operation of a nationwide network of collaborative, comprehensive centers in basic and translational research focused on the development of cures or significantly improved treatments for SCD. The network of ten centers and a statistics and data management core carries out basic research, inter-center collaborative clinical research, and local clinical research focused on the most promising biomedical and behavioral therapeutic modalities. The centers also support career development of young investigators in SCD research and support services including patient education, patient counseling, community outreach, and hemoglobin diagnosis. This is the eighth re-competition of a program that was established by a Presidential initiative and Congressional mandate in 1972.
Hutchinson-Gilford Progeria Syndrome (HGPS): Exploratory/Developmental (R21) Grants
A new PA with Review, to be co-sponsored by four Institutes, will encourage in FY 2003 studies to elucidate the molecular and mechanistic bases of HGPS, an incurable and terminal premature aging disorder characterized by short stature, abnormal skeletal and tooth development, scleroderma-like skin changes, and cardiovascular disease. Children with the disorder usually die of heart attacks or strokes at an average age of 13 years. Little research has been done on the syndrome because it is extremely rare (about 1 in 10 million births) and access to the patient population has been limited. Fibroblast and lymphoblastoid cell lines from HGPS patients from ten different families will be available to awardees. A better understanding of the mode of inheritance, molecular basis, and pathomechanisms of HGPS could lead to new insights into mechanisms of development, aging, and vascular occlusive diseases.
Mechanisms of Fetal Hemoglobin Gene Silencing for Treatment of Sickle Cell Disease and Cooley's Anemia
A new RFA will encourage studies in FY 2003 to delineate the mechanisms involved in fetal hemoglobin (gamma-globin) gene silencing during normal human development and develop therapeutic approaches to inhibit silencing. Both cis- and trans-acting elements important in gamma-globin gene silencing will be identified and their mechanisms of action will be determined. Pharmacologic or gene-based approaches to interfere with silencing may ultimately be pursued. Increased understanding of the molecular basis of fetal hemoglobin silencing will facilitate the development of new gene-based therapeutic approaches to increase fetal hemoglobin in red blood cells and thereby cure beta-chain hemoglobinopathies.
Mesenchymal Stem Cell Biology
A new RFA will encourage studies in FY 2003 to conduct basic research on mesenchymal cell biology in order to provide the basis for clinical application of mesenchymal stem cells (MSCs) to hematopoietic and non-hematopoietic stem cell transplantation. MSCs are pluripotent progenitor cells located in bone marrow that can differentiate into a variety of non-hematopoietic tissues including bone, cartilage, tendon, fat, muscle, and early progenitors of neural cells. Preclinical studies suggest MSCs facilitate hematopoietic stem cell transplantation while decreasing immune rejection of allogeneic transplants. To realize the therapeutic potential of these results, the initiative will support the identification of population and assay methods to characterize the clinical potential of candidate human MSCs and the development of isolation and characterization standards for use in comparing candidate MSC preparations.
Molecular Target and Drug Discovery for Idiopathic Pulmonary Fibrosis
A new RFA will encourage studies in FY 2003 to develop new therapeutic approaches for IPF. One approach to inhibit progression or reverse fibrosis in IPF patients would be to identify new agents, ranging from small molecules to vaccines, that interact with previously identified molecules or pathways known to be involved in the development of fibrosis. Other promising approaches supported by this initiative would be to use new technologies to identify additional molecular targets for treatment and to identify agonists or antagonists that interact with the previously or newly identified targets to attenuate, halt, or reverse the fibrotic process.
SCOR in (a) Neurobiology of Sleep and Sleep Apnea and (b) Airway Biology and Pathogenesis of Cystic Fibrosis
A renewal of an RFA will foster multidisciplinary basic and clinical research in FY 2003, enabling basic science findings to be applied more rapidly to clinical problems of sleep and CF. The objective of the sleep SCOR is to integrate clinical research on the etiology and pathogenesis of sleep disorders, particularly sleep apnea, with molecular, cellular, and genetic approaches to the study of sleep. The objective of the CF SCOR is to use our current knowledge of the CF transmembrane conductance regulator (CFTR) as a focus to promote advances in research on the pathogenesis of CF, the role of CFTR in airway biology, and the development of new treatment strategies. Each SCOR must consist of three or more projects, all of which are directly related to the SCOR program topic. This will be the second and final 5-year solicitation for these two SCORs.
Rare Disease-Related Program Activities
A Task Force on Research in Pediatric Cardiovascular Disease was held in January 2001. The Task Force identified the following eight research priorities for the next 5 years:
Fundamental studies of the formation of the heart and blood vessels
A symposium called "Lamposium 2001," held in March 2001 in Cincinnati, Ohio, was co-sponsored by NHLBI and the LAM Foundation.
An RFA meeting for Clinical Research for Cooley's Anemia and Biology of Iron Overload was held in April 2001.
A workshop on AAT Deficiency: The Challenge Of Genetic Conditions, held in June 2001, and co-sponsored by the Alpha One Foundation, NHLBI, NIDDK, and ORD, was designed to promote a multi-disciplinary understanding of psychosocial and scientific challenges of AAT deficiency.
A workshop on Host Response in Sickle Cell Disease, held in June 2001, discussed the clinical manifestations of problems with the immune response in SCD. Specific topics included resistance to pneumococcus, genetic modifiers of the immune response, loss of splenic function in SCD, the response to encapsulated bacteria, iron overload, autoimmune disorders, developmental immunity, white blood cell function in SCD, and consequences of chronic transfusions. It was recommended that research be pursued to ascertain the genetic factors that modify phenotypic differences in the responsiveness of SCD patients to infections.
A workshop on Protein Processing and Degradation in Pulmonary Health and Disease, held in September 2001, and cosponsored by NHLBI and ORD, was designed to evaluate the current state of knowledge of protein biosynthetic processing and intracellular degradation.
A working group on Targeting Technologies for Repair of Single Nucleotide Mutations in Single Gene-Defect Blood Diseases, held in September 2001, assessed the potential of various approaches for correction of pathogenic single nucleotide mutations in SCD, beta-thalassemia, hemophilia A and B, and hemochromatosis.
A research training program designed for clinicians interested in performing biomedical research related to PH will be co-sponsored by NHLBI and the Pulmonary Hypertension Association. The training will be supported by the Mentored Clinical Scientist Development Award (NIH activity code K08).
A working group on translational research in PPH, sponsored by the ORD and NHLBI, is planned for FY2003.
Problem Areas Related to Rare Diseases
Alpha-1 Antitrypsin (AAT) Deficiency
Research needs include better animal models of the disease, identification of biomarkers, and development of chemical chaperones that could specifically enhance the secretion of the mutant alpha-1 antitrypsin protein.
Arrhythmogenic Right Ventricular Dysplasia
A concerted multi-laboratory program, combining basic, clinical, and genetic approaches, is needed to identify the causes of this highly lethal form of cardiomyopathy. Once contributing factors are found, the next challenge will be to begin a search for therapies. Additional clinical centers, and perhaps a national registry, would be useful to investigators who are already studying the origins of ARVD and potential treatments.
Congenital Central Hypoventilation Syndrome
A significant limitation is the difficulty of recruiting CCHS subjects for clinical research. CCHS is a very rare condition often presenting within a few hours of birth. Only 150 surviving CCHS patients are estimated to exist world-wide. Relocation to clinical research sites is made difficult by the spectrum of clinical symptoms associated with CCHS and related dysfunction of the autonomic nervous system.
Congenital Diaphragmatic Hernia
As a result of advances in ultrasonography, CDH is now diagnosed before birth with increasing frequency. The development of microsurgical techniques has made it possible to perform surgical repair in utero. With multiple options currently available to families, accurate counseling on the expected outcome is crucial. Scientific information must be provided to assist affected families in making decisions about management.
Congenital Heart Disease
Long-term follow-up studies are required to answer certain types of questions, but, because congenital heart disease is often repaired in infancy, such studies are difficult to initiate. Additional research is needed on adult congenital heart disease, pulmonary malformations in congenital heart disease, and pediatric ventricular assist devices.
Transmissible Spongiform Encephalopathies (TSE)
A standardized reference material repository is needed to validate assay systems to detect TSE. Materials under consideration to calibrate in-house reference materials of individual laboratories to a single international standard include human brain tissue, human blood, animal tissues, and animal blood. Blind panels are needed for validation of all assays, specifically the validation of their sensitivity, reproducibility, and predictive abilities. NHLBI is presently developing an initiative to support the establishment of a standardized reference TSE material repository.
Graft versus Host Disease
Promising agents that could be used to treat GvHD are also under investigation for use in other diseases, for example, arthritis. Pharmaceutical companies are reluctant to allow transplant research investigators access to these investigational drugs for fear that the complications experienced by HCT patients will interfere with the approval process for new agents.
Myocarditis
Dilated cardiomyopathy is thought to be a consequence of myocarditis in a subgroup of genetically predisposed people. Identification of the genetic basis for more severe disease may allow clinicians to target patients who would benefit from more aggressive therapy. There is a need for more specific and sensitive noninvasive methods for diagnosis. The current gold standard is endomyocardial biopsy, but this procedure suffers from limited specificity and sensitivity. Also, the concept of myocarditis as an autoimmune phenomenon is supported by studies linking persistence of viral RNA in the myocardium to the induction of autoantibodies. More research is needed to determine the effectiveness of immunosuppressive modalities in myocarditis. NHLBI-supported investigators are tackling several of these problem areas.
Long QT Syndrome
Access and identification of sufficient numbers of new patients for studies remain a problem. Identification of mutant gene carriers would be greatly facilitated by accurate means of screening individuals in afflicted families for specific founder mutations. Improved means of identifying new mutations in the various genes involved would also be helpful. Investigators are working to increase the visibility of an international LQTS registry in minority communities.
Lymphangioleiomyomatosis (LAM)
LAM tissue is scarce, and cell lines are difficult to establish and maintain. Currently no animal models of LAM exist.
Primary Pulmonary Hypertension
A detailed understanding of the function of the BMPR-2 gene has not yet been achieved, and how this gene may cause the structural and functional changes in the lungs of PPH patients is not clear. Although progress is being made, no animal models have been developed that completely mimic PPH in humans. The etiology and pathogenesis of PPH must be understood before successful therapies can be developed. Current therapies are cumbersome and expensive and are not effective for all patients. Innovative mechanisms are needed to accelerate the translation of new findings into better treatments for PPH.
Systemic Lupus Erythematosus (SLE)
The fear of miscarriage is a great concern for many pregnant women with SLE. Anticoagulation therapy of high risk pregnant women with antibodies to phospholipids needs to be evaluated. | 1 | 6 |
HISTORY OF FLIGHT
On November 20, 2002, at 1430 central standard time, a Robinson R22 helicopter, N559DD, piloted by a commercial pilot, was substantially damaged during a forced landing following a loss of main rotor power and an autorotation near St. Jacob, Illinois. The loss of power was accompanied by an "intense" vibration, as reported by the pilots following the accident. The local instructional flight departed the St. Louis Downtown Airport (CPS), Cahokia, Illinois, at 1400 with a dual student and a certified flight instructor on board. The dual student was a helicopter-rated commercial pilot and flight instructor receiving training for an instrument helicopter instructor rating. The flight was being conducted under 14 CFR Part 91 and was not on a flight plan. Visual meteorological conditions prevailed at the time of the accident. The dual student and flight instructor reported no injuries.
In his written statement, the flight instructor noted: "We were at 2500 feet MSL and approximately 85 knots indicated airspeed, heading north when the failure occurred." The flight was being radar vectored by St. Louis approach control under VFR for a practice instrument approach to St. Louis Regional Airport (ALN), Alton, Illinois. The flight was approximately 2 miles north of the St. Louis Metro-East Airport/Shafer Field (3K6), when a loss of main rotor power occurred.
The instructor reported that the clutch light flickered several times and an unusual noise was heard. He stated that he pulled the clutch circuit breaker, "however the noise continued to worsen and [he] suspected the drive belts were loosening." He reported that he reset the clutch circuit breaker and the clutch light illuminated steadily. He noted that "immediately afterward the engine and rotor RPM needles split indicating a drive train failure, and a loud metal to metal grinding noise began."
The instructor stated that he took control of the aircraft and began an autorotation. "Upon termination of the autorotation as airspeed was reduced to zero the aircraft yawed to the left into the wind and did not respond to tail rotor pedal inputs. We touched down on the right skid with the nose of the aircraft pointed approximately 30 degrees to the left of our direction of travel. The aircraft rocked forward and right and then rocked back."
The dual student, who was flying the aircraft when the loss of rotor power occurred, reported that "the tail rotor pedals began to shake under my feet and then went still." Concerning the instructor's autorotation and landing, he stated: The instructor "flared at the bottom and the ship turned to the left and landed fairly hard on the right front and came to rest in a level condition."
The flight instructor held a commercial pilot certificate with airplane single and multi-engine land, rotorcraft helicopter, instrument airplane and instrument helicopter ratings. He held a flight instructor certificate with rotorcraft helicopter and instrument helicopter ratings. He held a Second Class medical certificate issued on January 10, 2002. He reported total logged flight time as 4,140 hours, with 890 in the same make and model as the accident aircraft.
The dual student held a commercial pilot certificate with rotorcraft helicopter, airplane single-engine land and instrument airplane ratings. The airplane and instrument ratings were limited to private pilot privileges. He held a flight instructor certificate, issued January 2002, with a rotorcraft helicopter rating. He held a First Class medical certificate, issued on January 18, 2002, with a limitation for corrective lenses. He reported total logged flight time as 3,200 hours and 1,650 hours in the same make and model as the accident aircraft.
The helicopter was a 2002 model year, Robinson R22, serial number 3299. The aircraft was issued an airworthiness certificate on February 15, 2002. It was registered to Helicopter Operation, Inc., St. Louis, Missouri, and was used primarily for flight instruction.
The helicopter had accumulated a total flight time of 72.6 hours since new with 6.9 hours at the time the airworthiness certificate was issued, according to the airframe logbook.
The engine installed was a Lycoming O-360-J2A, S/N L-38406-36A, rated at 180 horsepower. According to Robinson documentation, it is derated to 145 horsepower. It was installed in a normally aspirated, air-cooled, carbureted configuration. Air cooling is supplied by a fan wheel mounted to the engine crankshaft. The fan wheel assembly is enclosed by a fiberglass scroll.
An additional airframe logbook entry, dated February 22, 2002, at 17.4 hours, stated: "Removed original A051-1 actuator and A190 V-belts and replaced them with new parts A051-1 S/N 4190 Rev AM Lot 232 Rev X. The fan was rebalanced per the RHC maintenance manual and the aircraft was returned to service."
According to the dual student, this entry was related to an incident which occurred during the delivery flight. He reported that one of the drive V-belts broke and an emergency landing under partial power was made. He stated that the helicopter was repaired by Robinson technicians who replaced the V-belts and the clutch actuator.
WRECKAGE AND IMPACT INFORMATION
The helicopter impacted a plowed field approximately two miles north of 3K6. A global positioning system (GPS) receiver indicated the position of the forced landing site as 38 degrees 46.504 minutes north latitude and 89 degrees 48.558 minutes west longitude.
A post-accident examination of the helicopter revealed that the engine cooling fan assembly had departed the aircraft. This component was subsequently found approximately 1.2 miles south of the forced landing site. A GPS receiver indicated the location of the fan wheel assembly as 38 degrees 45.297 minutes north latitude and 89 degrees 48.702 minutes west longitude.
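The two positions above are given in degrees and decimal minutes. As a rough plausibility check on the reported separation between the forced landing site and the recovered fan wheel, the sketch below converts both fixes to decimal degrees and computes a great-circle (haversine) distance. The coordinate values are taken from the report; the conversion and distance code are purely illustrative and are not part of the investigation.

```python
import math

def dm_to_deg(degrees, minutes):
    """Convert degrees plus decimal minutes to decimal degrees."""
    return degrees + minutes / 60.0

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Forced landing site: 38 deg 46.504 min N, 89 deg 48.558 min W
site = (dm_to_deg(38, 46.504), -dm_to_deg(89, 48.558))
# Fan wheel assembly:  38 deg 45.297 min N, 89 deg 48.702 min W
fan = (dm_to_deg(38, 45.297), -dm_to_deg(89, 48.702))

km = haversine_km(site[0], site[1], fan[0], fan[1])
print(f"{km:.2f} km = {km / 1.852:.2f} NM = {km / 1.609344:.2f} statute miles")
# Gives roughly 2.2 km (about 1.2 nautical miles, or about 1.4 statute miles),
# on the order of the "approximately 1.2 miles" separation stated in the report.
```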
Complete failure of the fan shaft, running from within the shaft bearing race to the perimeter mounting holes, was observed. The belt tension actuator had failed at the upper attachment fitting. The actuator remained attached to the fan wheel assembly that departed the aircraft.
The drive belts were found looped over the sheave. Several cuts or tears were noted on the belts. Otherwise, they appeared to be intact.
The tail rotor control intermediate bellcrank assembly (P/N A331-1) showed extensive scrapes and gouges on the inboard arm. The tail rotor blade pitch push-pull tube (P/N A121-17) was completely severed at the forward rod end. Extensive scraping and gouging was noted on the rod end fitting. Approximately 50% of the structural tubing common to the forward end of the tail boom, immediately aft of the intermediate bellcrank, was worn through.
RESEARCH AND TESTING
The fan shaft, fan wheel and bearing were sent to the NTSB Materials Laboratory for examination. The examination noted the fan shaft was fractured in a "V" shape from a transition in shaft diameter adjacent to the bearing at the aft end, forward to the attachment bolt holes.
The report noted that examination of the fan shaft revealed that the fracture "initiated at a location corresponding to the ... aft break-edge of the bearing inner race. Here circumferential scoring marks were observed. ... All portions of the fracture propagated normal to the exterior surface of the shaft, were flat, and contained multiple gently curving crack arrest marks, all features indicative of fatigue propagation."
The clutch actuator was tested under supervision of NTSB personnel at Robinson Helicopter Company. The motor switches and drive gear functioned normally. The tension springs could not be tested due to damage.
A review of the R22 maintenance manual indicated that a detailed inspection of the lower actuator bearing is required when the cooling fan and scroll are removed, such as when replacing a drive belt. This includes a visual inspection of the bearing and review of the fan shaft/bearing interface for evidence of movement or fretting.
According to Robinson Helicopter, this visual inspection was conducted during repairs following the belt failure incident and no discrepancies were found. A complete visual inspection of the fan shaft and bearing race is not possible without disassembly of the fan shaft from the bearing. The fan shaft is installed into the lower actuator bearing with a .0008 - .0018 inch interference fit and is normally handled as an assembly.
Parties to the investigation were the Federal Aviation Administration and the Robinson Helicopter Company. | 1 | 9 |
Family Support and Diet Barriers Among Older Hispanic Adults With Type 2 Diabetes

Lonnie K. Wen, PhD; Michael L. Parchman, MD, MPH; Marvin D. Shepherd, PhD

From VERDICT, South Texas Veterans Health Care System, San Antonio, Tex (Drs Wen and Parchman); the Department of Family and Community Medicine, University of Texas Health Science Center at San Antonio (Dr Parchman); and the College of Pharmacy, University of Texas at Austin (Dr Shepherd).

Background and Objectives: Diet plays an important role in the management of diabetes, and a suboptimal diet is a commonly identified problem. Family support may be important in overcoming barriers to good diet. We conducted this study to examine the role of the family in overcoming barriers to diet self-care among older Hispanic patients with diabetes. Methods: We performed a cross-sectional survey of 138 older Hispanic adults seeking care at an outpatient university clinic. Patients reported on their perception of family functioning, family support for diet, and barriers to diet self-care. Results: Level of family functioning was related to family support for diet self-care, and family support for diet was related to perceived barriers to diet self-care. Scores for family support were higher for those who perceived their family as functional compared to those who perceived their family as mildly dysfunctional or dysfunctional. As family support for diet increased, perceived barriers to diet self-care decreased. Conclusions: To fully understand difficulties encountered by older Hispanic adults with adherence to a diabetic diet, primary care physicians should explore the role of family support and family functioning. For those with poorly functioning families or low levels of family support, family-level interventions may need to be considered. (Fam Med 2004;36(6):423-30.)

Type 2 diabetes disproportionately burdens the elderly and minority groups in the United States. 1,2 Mexican Americans, the largest Hispanic/Latino subgroup, are almost twice as likely to have diabetes as non-Hispanic whites of similar age. 3 Diet plays an important role in the management of blood glucose control in diabetes, and inadequate diet is a commonly identified problem of diabetes management. 4-9 Research has indicated that several barriers exist to adherence to a prudent diabetic diet.
8,10-12 Barriers to self-care refer to the environmental and cognitive factors that interfere with following the recommended treatment regimen. For older adults, family support may be important in overcoming barriers to self-care. The characteristics of the patient's family environment in which diabetes management takes place have been associated with self-management behaviors.
13,14 Among Hispanics, the extended family is considered a primary support group. 15,16 Although most would agree that family function and perceived and actual family support would influence a patient's adherence to diet, surprisingly little research has been conducted on this matter in adults with diabetes and even less among older Hispanics with diabetes. Instead, most of the research on the families'
influences on diabetes management has focused on children, adolescents, and young adults. 17-19 The implications of these findings for older Hispanics are unknown. Fisher et al found that family structure and organization were associated with good diet and exercise among non-elderly Hispanic patients with diabetes. 13 In another study of predominantly older African American adults with diabetes, researchers reported that family support was related to the pattern of diet self-care behaviors. 11

We hypothesized that perceived family function and family support are associated with barriers to diet self-care among older Hispanic adults with type 2 diabetes. This study examined how family function, family support, selected demographic variables, and disease characteristics are related to the older Hispanic adult's perception of barriers to diet self-care. The specific objectives of the study included: (1) to determine the level of perceived barriers to diet among older Hispanics who have diabetes, (2) to evaluate the level of perceived family support and level of family function, and (3) to examine the relationship between perceived family support and demographic and disease characteristics with perceived barriers to diet.

Methods

Participants
Older patients at an ambulatory care center, within a tax-supported county health care system in the Southwest, were approached as they presented for care in the clinic reception area by the principal investigator or a trained bilingual research assistant. The patients were asked to participate in a survey about their family and factors related to diabetes self-care. The inclusion criteria included: (1) adults ages 55 or older, (2) diagnosed with diabetes (type 2) for at least 1 year, (3) prescribed diabetes medication, (4) living in a family environment, and (5) able to provide informed consent. Living in a family environment was defined as (1) living with a spouse/significant other, (2) living with spouse/significant other and children, (3) living with children, or (4) living with family or friends. Inclusion criteria included patients who were prescribed medications, because this study is part of a larger study that examined the relationship between the family environment and diabetes self-care in the four regimen areas: diet, exercise, medications, and self-monitoring of blood glucose. 12 The exclusion criteria included (1) treatment for major psychiatric problems within the previous 6 months, because patients who received treatment for major psychiatric problems such as schizophrenia may not provide valid responses to questions about their diabetes self-care behaviors, (2) scoring of 15 or higher on the Patient Health Questionnaire depression screen, 20 because depression might affect their perception of barriers to self-care and perception of family functioning, or (3) insulin therapy initiated during the 6-month period preceding the study, since this would represent a major modification in medication management that would require adjustment from both patient and family member(s) and may not accurately reflect the perceived support or barriers to self-care. Other exclusions included (4) presence of major complications that may affect performance of diabetes self-management activities, such as cognitive impairment, end-stage renal disease, and blindness, or (5) a requirement for nursing care, such as a home health nurse assisting with diabetes management.

Procedures
The interviewer briefly explained the purpose of the study to patients during their clinic visit and screened for eligibility for the study. Patients were asked if they were age 55 or older, if they had been diagnosed with diabetes for more than a year, and if they lived with family. Those who met the inclusion criteria were given more information about the purpose of the study and were asked to participate. The survey was available in English and in Spanish and was completed either before or after the physician visit. Each participant was given a book on diabetes (either in English or in Spanish) as a token of appreciation for participating in the study. Family members who accompanied patients were asked to leave the area so the participant could complete the survey. Approval from our Institutional Review Board was obtained.

Measures
Barriers to Diet Self-care. Barriers to diet self-care were measured with the diet subscale of the Barriers to Self-care Scale developed by Glasgow and associates. 21 The seven-item scale measures the frequency of both environmental and cognitive factors that interfere with following a recommended diet. The scale has been validated on adults with type 2 diabetes.
The internal consistency for the diet subscale ranges from 0.55 to 0.92 (Cronbach's alpha). 8,21 The instrument asks respondents to rate how frequently they experience various barriers to diet self-care using a 7-point frequency-of-occurrence scale from 1 (very rarely or never) to 7 (daily). The scale was scored by averaging the responses across the items. Higher scores indicate a higher frequency of barriers.

Family Support. Perceived family support for diet was assessed with the diet subscale from the Diabetes Family Behavior Checklist II (DFBC-II). 4 There were two items that measure positive and two items that measure negative support specific to diet. For example, participants were asked to rate how often a particular family member will "praise you for following your diet" (positive support) and will "eat foods that are not part of your diabetic diet" (negative support). The response format is a 5-point scale from 1 (never) to 5 (at least once a day). The diet component scores for the DFBC-II were calculated by adding the positive items and subtracting the ratings of the negative items. 4 A high component score indicates a strong perception of positive interactions with the rated family members. To complete the DFBC-II, respondents were asked to think about one family member with whom they generally have the most contact.

Family Function. Family function was measured using the Family APGAR Scale. 22 The Family APGAR is a validated scale of family function. The scale was developed as a tool to measure a family member's perception of five dimensions of family function: adaptability, partnership, growth, affection, and resolution. Scores on the Family APGAR assess the overall satisfaction with family life and provide a composite measure of perceived family functioning. In diabetes, the Family APGAR has been used in several studies examining family function and the relationship to glycemic control 23,24 and the relationship between family function and quality of life in adults with type 2 diabetes. 25 This instrument can be used with either a 3- or 5-point scale. For research purposes, the authors of the Family APGAR recommended that the 5-point scale be used because this improves the instrument's reliability. 26 Each question has five possible responses: "always" (4 points), "almost always" (3 points), "some of the time" (2 points), "hardly ever" (1 point), and "never" (0 points). The participants answer questions dealing with the level of satisfaction with each one of the five aspects of family life as they apply to each family member. For example, participants rated how satisfied they were with "the help that I receive from family member when something is troubling me." The APGAR score for each family member was calculated by summing the scores of the five items in the scale. The overall APGAR score for each participant was calculated by summing the APGAR scores for the participant and dividing by the number of family members rated. The total score ranges from 0 to 20. The higher the score, the higher the level of perceived family function. The 5-point scale was interpreted as functional (15-20), mildly dysfunctional (9-14), and dysfunctional (0-8). The interpretation of the scores is based on previous work by other researchers with the Family APGAR. 24,27-29
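The three instruments described above are scored by simple arithmetic on the item responses: the barriers scale is the mean of its seven items, the DFBC-II diet component is the sum of the positive items minus the sum of the negative items, and the Family APGAR is the five-item sum for each rated family member, averaged over the members rated. The sketch below is a minimal illustration of these scoring rules; the item responses are invented for illustration and are not study data.

```python
# Illustrative scoring of the three instruments, using invented item responses.

# Barriers to diet self-care: seven items rated 1 (very rarely/never) to 7 (daily);
# the scale score is the mean of the items (higher = more frequent barriers).
barrier_items = [5, 4, 3, 3, 2, 2, 1]          # hypothetical responses
barriers_score = sum(barrier_items) / len(barrier_items)

# DFBC-II diet component: two positive-support items minus two negative-support
# items, each rated 1 (never) to 5 (at least once a day); possible range is -8 to +8.
positive_items = [3, 2]                         # e.g., praise for following the diet
negative_items = [4, 1]                         # e.g., eats foods not on the diet
dfbc_diet = sum(positive_items) - sum(negative_items)

# Family APGAR: five items rated 0-4 are summed for each family member rated,
# then averaged over the members rated; 15-20 = functional,
# 9-14 = mildly dysfunctional, 0-8 = dysfunctional.
apgar_by_member = [[4, 4, 3, 4, 4], [3, 3, 2, 3, 3]]   # two rated family members
member_totals = [sum(items) for items in apgar_by_member]
apgar_overall = sum(member_totals) / len(member_totals)

def apgar_category(score):
    if score >= 15:
        return "functional"
    return "mildly dysfunctional" if score >= 9 else "dysfunctional"

print(barriers_score, dfbc_diet, apgar_overall, apgar_category(apgar_overall))
```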
The internal consistency for the tool with a five-choice response format has been reported to be 0.86 (Cronbach's alpha). 22 The instrument has been correlated with the Pless-Satterwhite measure of family function and with clinicians' rating of family. 30

Demographic and Health Variables. In addition to the above scales, there were items on the survey regarding age, gender, education, income, acculturation (language based), duration of diabetes, and number of diabetes-related comorbidities. Education, income, and duration of diabetes were self-reported. The comorbidities were obtained from the clinical chart. The comorbidities related to diabetes included microvascular and macrovascular disorders. Microvascular disorders included retinopathy, nephropathy, neuropathy, and foot problems. Macrovascular disorders included cardiovascular disease, cerebral vascular disease, and peripheral vascular disease.

The scale developed by Deyo and associates is a simple scale for quantifying English use among Mexican Americans. 31 The scale consists of four brief questions regarding language. Language has been found to be an important behavioral indicator of acculturation. 32 The language scale appears to be reliable and valid. Scale scores were found to have significant associations with major demographic characteristics that were considered to be correlated with acculturation. 31 Each patient in our study was given a total score by assigning 1 point for each response favoring English and zero points for each response favoring Spanish. Each patient thus has a score ranging from zero to 4, with higher scores reflecting higher levels of acculturation.

Spanish Translation of Instrument
A Spanish version of the instrument was developed by translating the English version of the instrument into Spanish and then back-translating it into English. Linguistics professionals experienced with health surveys translated and back-translated the instrument. Any discrepancies were corrected using the consensus of three bilingual experts. The bilingual experts included two linguistic professionals and a bilingual staff member with the Institutional Review Board, whose responsibility is to review surveys.

Statistical Analysis
Descriptive statistics provided information on all variables. For the analyses, marital status categories were collapsed into two categories: married and not married. Married included living with a significant other. Not married included being divorced, separated, widowed, or never married. Household status was also collapsed into two categories for the analysis: lives with spouse/significant other only (couple only) or lives with family (including spouse/significant other and children, or children and/or other family members). In addition, educational level was collapsed into two response levels: (1) 8 or fewer years of schooling and (2) some high school or high school graduate/some college or college graduate.

Non-parametric tests (Mann-Whitney U) were used with variables with non-normal distributions. Parametric tests were used when appropriate. Univariate analyses were used to examine the relationship between the initial set of predictors and barriers to diet. A regression model was used, and the variables included in the model were those that showed a significance level of 0.25 in the univariate analysis. 33,34
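A minimal sketch of this two-stage strategy, a univariate screen at P<.25 followed by a single multiple linear regression on the retained predictors, is shown below. It assumes a pandas data frame with hypothetical column names that merely mirror the study variables (and numerically coded categorical predictors); the original analyses were run in SPSS, so this is only an illustration of the general approach, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm

def screen_and_fit(df, outcome, candidates, alpha=0.25):
    """Univariate screen at P < alpha, then one multiple linear regression."""
    retained = []
    for var in candidates:
        exog = sm.add_constant(df[[var]])
        p_value = sm.OLS(df[outcome], exog, missing="drop").fit().pvalues[var]
        if p_value < alpha:
            retained.append(var)
    exog = sm.add_constant(df[retained])
    model = sm.OLS(df[outcome], exog, missing="drop").fit()
    return retained, model

# Hypothetical usage (file name and column names are stand-ins, not the study data set):
# df = pd.read_csv("survey.csv")
# kept, fit = screen_and_fit(df, "diet_barriers",
#                            ["age", "gender", "education", "income",
#                             "duration_diabetes", "comorbidities",
#                             "marital_status", "household_status",
#                             "family_apgar", "dfbc_diet"])
# print(kept)
# print(fit.summary())
```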
33,34 All other analyses were established a priori at P <.05 for acceptance.<br><br> The Statistical Package for the Social Sciences (SPSS) for Windows ® Version 11.5 was used for all statistical analyses. Results Of the 186 patients who were approached for par- ticipation, 170 agreed to participate, and of those, 138 were self-identified as Mexican Americans and met the inclusion criteria for the study. Demographic and fam- Clinical Research and Methods 426 June 2004Family Medicine ily characteristics of the participating subjects are pre- sented in Table 1.<br><br> The mean scores for the diet barrier scale are shown in Table 2. The most frequent barrier reported was cbe- ing around people who are eating or drinking things that I shouldn 9t. d Results of the family support scale (DFBC-II), on which respondents were asked to select one family member with whom they generally have the most contact, are shown in Table 1. Almost half of the sample (44.2%) reported that the family member selected ate foods that were not a part of their diet cat least once a day. d The overall median score for diet family support was 1.00 (interquartile range=3.0), which indicates a moderate level of positive support.<br><br> The range for the scale is -8 to 8, with higher numbers indicating more perceived positive support. The maximum number of family members rated by a single participant with the Family APGAR scale was five. The median APGAR score for the sample was 18 (interquartile range=6), which indicates a high level of family function (range=0 to 20).<br><br> The scores for the Family APGAR were skewed so that the scores were collapsed to categories for the analyses. A score of 15 or above was categorized as cfunctional. d A score of 14 or less was categorized as cmildly dysfunctional d or cdysfunctional. d Approximately 72% were catego- rized as cfunctional, d and 28% were cmoderately dys- functional d or cdysfunctional. d Table 3 presents the average rank scores for diet fam- ily support and the mean diet barriers scores by family function (APGAR) and gender. The average rank for diet support score was significantly higher in the func- tional group.<br><br> There were no significant differences in the diet barrier scores among the functional and dys- functional groups or by gender. Additionally, there were no significant differences in family function scores among men and women (chi square=0.820, P =.365). The initial set of independent variables selected for the univariate analyses included age, gender, educa- tion, income, duration of diabetes, number of diabe- tes-related comorbidities, marital status, household sta- tus, family APGAR, and diet family support.<br><br> Table 4 presents the results of the analyses. Univariate analy- ses were used to condense the pool of initial variables entered into the final multiple regression model. Vari- ables that were significant at the 0.25 level were se- lected for the final model, and these included age, gen- der, marital status, diabetes comorbidities, duration of diabetes, and diet family support.<br><br> A multiple regres- sion analysis was conducted to examine the relation- ship between these variables and barriers to diet (Table 5). The final model explained 14.4% of the variance for barriers to diet self-care. 
The linear combination of the predictor variables was significantly related to bar- riers to diet (F=3.62; df =6, 135; P =.002).<br><br> In the final model, age and diet family support were the only two Table 1 Demographic and Family Characteristics Characteristic n Mean (SD) Age (years)13864.1(6.84) Duration of diabetes (years)13813.4(9.46) Number of diabetes-related comorbidities137 1 1.9(1.15) Acculturation score (range from 0 to 4) 2 1381.8(0.98) Gender Percentage Females9266.7 Total138100.0 Marital status Married7554.3 Widowed3223.2 Divorced or separated2719.6 Never married42.9 Total138100.0 Household status Lives with spouse or significant other5439.1 Lives with children4431.9 Lives with spouse or significant other and children2215.9 Lives with relatives and friends1813.0 Total13899.9 3 Educational level Grade school or less (0 38)6648.2 Some high school (9 311)2619.0 High school graduate or GED3122.6 Some college or college degree1410.2 Total137 4 100.0 Total family monthly income Less than $5002116.9 $501 to $1,0004737.9 $1,001 to $1,5004334.7 $1,501 or greater1310.5 Total124 5 100.0 Employment status Employed2216.1 Not employed/retired11583.9 Total137 5 100.0 Family member with most contact Son or daughter4633.3 Husband4230.4 Wife3023.9 Other (siblings, nephews, aunts, housemate)2012.3 Total13899.9 3 Mean (SD) Average time spent with family member (waking hours) in hours per day7.6(4.69) 1 One chart not available 2 Acculturation scale ranges from 0 to 4 (higher numbers indicate more acculturation.) 3 Does not equal 100% due to rounding error. 4 One respondent did not provide a response. 5 Fourteen respondents did not provide responses.<br><br> SD 4standard deviation GED 4general equivalency diploma 427 Vol. 36, No. 6 of family support for diet were also more likely to re- port living in a functional family setting.<br><br> Why should level of family support be inversely re- lated to perceived barriers for diet self-care? Barriers to care that have been associated with the management of diabetes are based primarily within the family set- ting. 35 The most frequent diet barrier reported in this study was cbeing around people who are eating or drink- ing things that I shouldn 9t. d This may be a problem for Hispanic older adults because the Hispanic family household size is larger than those of non-Hispanic whites.<br><br> 36 In 2000, almost one third of family house- holds in which a Hispanic person was a member con- sisted of five or more people. 37 Only 11.8% of non- Hispanic white family households were this large. More than 40% of our subjects reported that the family mem- ber they spend the most time with eats foods that are not part of their diet cat least once a day. d Participants in other studies have reported that it can be difficult to adhere to a diet regimen if the rest of the family was not willing to eat the same foods that the participants were eating, and preparing two different types of meals may be difficult for most families.<br><br> 11,38 The level of perceived family support specific to dia- betes was moderate. There were not any gender differ- ences on perceived family support for diet. Brown et al reported that males expressed stronger perceptions of social support for diet than did women.<br><br> 39 This may be due to the gender role differences in this culture where women are responsible for cooking and preparing meals. The sample in the Brown study was younger (mean age=54 years) than the present study. 
There may Table 2 Mean Scores for Barrier to Diet Self-care Scale ItemnMean (SD) How often do each of the following happen to you?<br><br> Around people who are eating and drinking things I shouldn 9t1384.83 (2.42) Not home for meals1383.85 (2.12) Think about costs of foods1373.20 (2.12) Unsure about foods1373.12 (2.17) Still feel hungry1372.93 (2.06) Don 9t have time to prepare foods1362.43 (2.04) Won 9t matter if don 9t follow diet1382.23 (1.91) Overall scale score1373.22 (1.07) Scale: 1=very rarely or never, 2=once per month, 3=twice per month, 4= once per week, 5=twice per week, 6=more than twice weekly, and 7=daily Table 3 Mean Diet Barriers and Diet DFBC Scores by Family Function and Gender Diet Barriers* Diet DFBC** Family functionnMean SDt dfP Value n Mean Rank z P Value Functional ( e 15)993.171.07-0.95134.3469974.64-2.728.006 Mildly dysfunctional/ dysfunctional ( d 14)373.361.08 4 4 43854.30 4 4 Gender Males463.020.94-1.541350.1264667.47-0.429.668 Females913.311.12 4 4 49270.52 4 4 Scale: Diet barriers: 1 (never or rarely ) to 7 (daily); Diet DFBC: range from -8 to 8 with higher scores indicating more perce ived support *Parametric test used 4diet barriers variable displays characteristics of normal distribution as tested by Shapiro-Wilk 9s statis tic = 0.982; P >.05 ** Non-parametric test (Mann-Whitney U) used for non-normally distributed variable DFBC 4Diabetes Family Behavior Checklist significant predictors of barriers to diet. Table 6 pre- sents the bivariate correlations among the variables in the model. Discussion Older Hispanic adults with higher levels of family support for diet self-care reported fewer barriers to diet self-care.<br><br> Moreover, those who reported higher levels Clinical Research and Methods 428 June 2004Family Medicine not have been any gender differences in our study be- cause our sample was older, and participants may have depended on the support from their children or other family members. The structural function theory may be used to ex- plain the second question of why family functioning is related to the level of family support for diabetes. The theory provides a framework for assessing families and health.<br><br> The structural functional framework defines the family as a social system. 40 Illness of a family member results in changes of the family structure and function. The theory focuses on the family structure and func- tion and how well the family structure performs its func- tion.<br><br> The concept of structure refers to how the family is organized, the manner in which the units are arranged, and how these units relate to each other. 40 The concept of function refers to what the family does and why it exists. Structure is assessed by the Family APGAR, and function is as- sessed by the family support specific to diabetes.<br><br> Family function serves as a resource for social support for the patient. 41 To examine the factors associated with perceived barriers to diet self- care, a regression analysis resulted in a model that explained a modest 14% of the variance in perceived barriers. Family support specific to diet and age were significant predictors of barriers to diet.<br><br> The greater the family support for diet, the less the perceived barri- ers. Age had an inverse relationship with perceived barriers. 
This finding is consistent with other studies exam- ining the relationship between age and perceived barri- ers.<br><br> 8,42 Limitations The results of this study should be interpreted cau- tiously since there are several important limitations. One limitation is that the study was cross-sectional, and cau- sality cannot be determined. Perhaps those who per- ceive their families as being more supportive also per- ceive fewer barriers to self-care, because they gener- ally have a positive outlook.<br><br> Longitudinal studies are needed to assess the relationship between family sup- port and barriers to self-care over time. Further, the fam- ily interactions were self-reported. Also, the sample was limited to those adults living in a family environment and with lower income.<br><br> Finally, the results of the study are not generalizable to all older Hispanic adults. The findings from this study have important impli- cations for primary care physicians, dieticians, and dia- betes educators. Previous research has shown that bar- riers to self-care play an important role in adherence to Table 4 Univariate Analyses Between the Initial Set of Independent Variables and the Dependent Variable 4Perceived Barriers to Diet Self-Care VariableFn P Value Age10.38136.002 Gender2.37136.126 Diabetes-related comorbidities1.80135.182 Duration of diabetes2.11136.149 Marital status2.02136.157 Household status0.02136.900 Diet DFBC4.92136.028 Family APGAR0.89135.346 Education1.18135.280 Income 4monthly0.02121.886 DFBC 4Diabetes Family Behavior Checklist Table 5 Multiple Linear Regression Analysis of Barriers to Diet Self-care VariablesBetaSE tSignificance Age-0.040.02-2.410.02 Gender0.370.201.880.06 Diet DFBC-0.090.04-2.510.01 Diabetes comorbidities-0.040.08-0.480.63 Duration of diabetes-0.010.01-0.710.48 Marital status-0.260.20-1.440.15 SE 4standard error DFBC 4Diabetes Family Behavior Checklist Table 6 Bivariate Correlations of Variables in Final Regression Model Diet DietMarital VariablesBarrierAgeGenderDFBCComorbidDurationStatus Diet Barrier1 Age-0.27**1 Gender0.13-0.061 Diet DFBC-0.20*0.020.071 Comorbid-0.110.13*0.07.0101 Duration-0.110.39**0.04-0.130.121 Marital status-0.110.42*0.22**-0.01-0.0090.041 DFBC 4Diabetes Family Behavior Checklist *Correlation is significant at the 0.05 level.<br><br> **Correlation is significant at the 0.01 level. 429 Vol. 36, No.<br><br> 6 diet recommendations. 4,6-9 Diet self-care behaviors are deeply rooted in culture and lifestyle. Educational pro- grams that take into consideration the culture and lifestyle of patients and family are needed.<br><br> For example, for patients with poorly controlled diabetes and poor adherence to diet, consideration should be given to in- cluding the family in office visits and other interven- tions. Further research should be conducted to see if including family in office visits does, in fact, improve adherence. Family functioning is associated with diet family support, thus health care providers might consider as- sessing family functioning when low levels of family support for diet are present and refer for family coun- seling if indicated.<br><br> Improving family support is impor- tant not only because it is associated with lower levels of perceived barriers to diet self-care, but family sup- port specific to diabetes has also been shown to be re- lated to diabetes self-management activities. 12,18 The greater the perceived support, the greater the self- reported adherence with the diabetes regimen. 
Conclusions The findings from this study indicate that family functioning is related to family support for diet self- care and that such support is inversely related to per- ceived barriers to following the diet regimen.<br><br> Knowl- edge of family function and perceived support may be useful to health care providers in the care of older His- panic adults with diabetes. Acknowledgments: The authors are indebted to the patients who generously volunteered their time in participating in the survey. We also thank the fac- ulty and staff at the Family Practice Clinic of the University Health Sys- tem-Downtown Clinic, San Antonio, Tex, for their support in this study.<br><br> At the time of the study, Dr Wen was a doctoral candidate at the College of Pharmacy, University of Texas at Austin. This material is the result of work supported with resources and the use of facilities at the South Texas Veterans Health Care System. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs.<br><br> Corresponding Author: Address correspondence to Dr Parchman, VER- DICT, South Texas Veterans Health Care System, Ambulatory Care 11C-6, 7400 Merton Minter Blvd, San Antonio, TX 78229-4404. 210-617-5300, ext. 4028.<br><br> Fax: 210-567-4423. [email protected]. R EFERENCES 1.Anderson LA, Halter JB.<br><br> Diabetes care in older adults: current issues in management and research. Annu Rev Gerontol Geriatr 1989;9: 35- 73. 2.Mokdad AH, Ford SE, Bowman BA.<br><br> et al. Diabetes trends in the US: 1990 31998. Diabetes Care 2000;23(9):1278-83.<br><br> 3.National diabetes fact sheet, Centers for Disease Control and Preven- tion. Available at www.cdc.gov/diabetes/pubs/estimates.htm. Accessed August 16, 2003.<br><br> 4.Glasgow RE, Toobert DJ. Social environment and regimen adherence among type II diabetic patients. Diabetes Care 1988;11(5):377-86.<br><br> 5.Nelson KM, Reiber G, Boyko EJ. Diet and exercise among adults with type 2 diabetes. Diabetes Care 2002;25(10):1722-8.<br><br> 6.Travis T. Patient perceptions of factors that affect adherence to dietary regimens for diabetes mellitus. Diabetes Educ 1997;23(2):152-6.<br><br> 7.Ary D, Toobet D, Wilson W, Glasgow R. Patient perspective on factors contributing to nonadherence to diabetes regimen. Diabetes Care 1986; 9(2):168-72.<br><br> 8.Glasgow RE, Hampson SE, Strycker LA, Ruggiero L. Personal model beliefs and social-environmental barriers related to diabetes self man- agement. Diabetes Care 1997;20(4):556-61.<br><br> 9.Jenny JL. Differences in adaptation to diabetes between insulin-depen- dent and non-insulin-dependent patients: implications for patient edu- cation. Patient Educ Couns 1986;8(1):39-50.<br><br> 10.Aljasem LI, Peyrot M, Wissow L, Rubin RR. The impact of barriers and self-efficacy on self-care behaviors in type 2 diabetes. Diabetes Educ 2001;27(3):393-404.<br><br> 11.Dye CJ, Haley-Zitlin V, Willoughby D. Insights from older adults with type 2 diabetes: making dietary and exercise changes. Diabetes Educ 2003;29(1):116-27.<br><br> 12.Wen LK. The relationship of family environment and other social cog- nitive variables on diet and exercise in older adults with type 2 diabetes [dissertation]. Austin, Tex: University of Texas at Austin, 2002.<br><br> 13.Fisher L, Chesla C, Skaff MM, Gilliss C, Mullan JT, Bartz RJ, Kanter RA, Lutz CP. The family and disease management in Hispanic and European-American patients with type 2 diabetes. 
Diabetes Care 2000;23(3):267-72.
14. Edelstein J, Linn MW. The influence of the family on control of diabetes. Soc Sci Med 1985;21(5):541-4.
15. Tamez EG, Vacalis TD. Health beliefs, the significant others, and compliance with therapeutic regimens among adult Mexican American diabetics. Health Educ 1989;20(6):24-31.
16. Keefe S, Padilla A, Carlos M. The Mexican-American extended family as an emotional support system. Hum Organ 1979;38:144-52.
17. Auslander W, Corn D. Environmental influences on diabetes management: family, health care system, and community contexts. In: Haire-Joshu D, ed. Management of diabetes mellitus: perspectives of care across the lifespan. St Louis: Mosby, 1996:513-26.
18. Gleeson-Kreig JA, Bernal H, Woolley S. The role of social support in the self-management of diabetes mellitus among a Hispanic population. Public Health Nurs 2002;19(3):215-22.
19. Garay-Sevilla ME, Nava LE, Malacara J, et al. Adherence to treatment and social support in patients with non-insulin-dependent diabetes mellitus. J Diabetes Complications 1995;9(2):81-6.
20. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med 2001;16(9):606-13.
21. Glasgow RE. Social-environmental factors in diabetes: barriers to diabetes self-care. In: Bradley C, ed. Handbook of psychology and diabetes. Chur, Switzerland: Harwood Academics, 1994:335-49.
22. Smilkstein G. The Family APGAR: a proposal for a family function test and its use by physicians. J Fam Pract 1978;6(6):1231-9.
23. Cardenas L, Vallbona C, Baker S, Yusim S. Adult onset diabetes mellitus: glycemic control and family function. Am J Med Sci 1987;293(1):28-33.
24. Konen JC, Summerson JH, Dignan MB. Family function, stress, and locus of control: relationships to glycemia in adults with diabetes mellitus. Arch Fam Med 1993;2(4):393-402.
25. Rankin S, Galbraith ME, Huang P. Quality of life and social environment as reported by Chinese immigrants with non-insulin-dependent diabetes mellitus. Diabetes Educ 1997;23(2):171-7.
26. Smilkstein G, Ashworth C, Montano D. Validity and reliability of the Family APGAR as a test of family function. J Fam Pract 1982;15(2):303-11.
27. DelVecchio Good MJ, Smilkstein G, Good BJ, Shaffer T, Arons T. The Family APGAR Index: a study of construct validity. J Fam Pract 1979;8(3):577-82.
28. Mengel M. The use of the family APGAR in screening for family dysfunction in a family practice center. J Fam Pract 1987;24(4):394-8.
29. Smucker WD, Wildman BG, Lynch TR, Revolinsky MC. Relationship between the family APGAR and behavioral problems in children. Arch Fam Med 1995;4(6):535-9.
30. Pless I, Satterwhite B. A measure of family functioning and its application. Soc Sci Med 1973;7(8):613-21.
31. Deyo RA, Diehl AK, Hazuda H, Stern MP. A simple language-based acculturation scale for Mexican Americans: validation and application to health care research. Am J Public Health 1985;75(1):51-5.
32. Olmedo GM, Padilla AM. Empirical and construct validation of a measure of acculturation for Mexican Americans. J Soc Psych 1978;105:179-87.
33. Bendel RB, Afifi AA. Comparison of stopping rules in forward regression. Journal of the American Statistical Association 1977;72:46-53.
34. Mickey J, Greenland S. A study of the impact of confounder-selection criteria on effect estimation. Am J Epidemiol 1989;129(1):125-37.
35. Fisher L, Chesla CA, Bartz RJ, et al. The family and type 2 diabetes: a framework for intervention. Diabetes Educ 1998;24(5):599-607.
36. Therrien M, Ramirez RR. The Hispanic population in the United States: March 2000, current population reports. Washington, DC: US Census Bureau, 2000:20-53.
37. A statistical profile of Hispanic older Americans aged 65 plus, US Department of Health and Human Services, Administration on Aging. Available at www.aoa.gov/aoa/stats/stat-FS/facts-on-Hispanic-elderly.html. Accessed September 2, 2003.
38. Maillet NA, D'Eramo-Melkus G, Spotllett G. Using focus groups to characterize the health beliefs and practices of black women with non-insulin-dependent diabetes. Diabetes Educ 1996;22(1):39-46.
39. Brown SA, Harrist RB, Villagomez ET, Segura M, Barton S, Hanis CL. Gender and treatment differences in knowledge, health beliefs, and metabolic control in Mexican Americans with type 2 diabetes. Diabetes Educ 2000;26(3):425-38.
40. Friedman MM. Structural-functional theory. In: Friedman MM, Bowden VR, Jones EG, eds. Family nursing: research, theory, and practice, fifth edition. Upper Saddle River, NJ: Prentice Hall, 2003:89-102.
41. Neabel B, Fothergill-Bourbonnais F, Dunning J. Family assessment tools: a review of the literature from 1978-1997. Heart Lung 2000;29(3):196-209.
42. Connell CM. Psychosocial contexts of diabetes and older adulthood: reciprocal effects. Diabetes Educ 1991;17(5):364-71.
<urn:uuid:a23ee7fd-2e43-40b7-86db-33e25874c5f7> | Laboratory of Molecular Biophysics
Laboratory Journal 2002
Electron microscope studies
1. Electron crystallography
Structural studies at high resolution are possible using cryo-electron microscopy
and image analysis. Periodic ordering of proteins in two-dimensions
as well as along one-dimensional helices has been used to determine some
important structural features using transmission electron microscopy. Electron
crystallography originally developed by Henderson and Unwin for structure
determination of 2-dimensional (2D) crystals of membrane proteins has now
revealed the atomic-resolution structures of bacteriorhodopsin (Henderson et al.,
1990), the light harvesting complex (Kühlbrandt et al., 1994)
and tubulin (Nogales et al., 1998).
1.1 Crystallization of soluble protein at the air/water interface
One technique to obtain 2D crystals is the crystallization of soluble protein
on a lipid monolayer first described by Kornberg et al., 1983. This
procedure is based on the formation of 2D crystals of proteins bound to a
ligand-lipid incorporated into a planar lipid layer at the air:water interface.
The classical lipid monolayer technique uses electrostatic interactions
between lipids and proteins. It is however possible to use more specific
interactions. For example, in collaboration with the group of C. Mioskowski
(Paris), we have developed a method of crystallization using the specific
strong interaction between histidine residues and Nickel ion. The polar
head of synthesized lipids carries the Nickel ion whereas a short stretch
of contiguous histidine residues (a His-tag) is located on the C or N terminal
end of the expressed protein.
1.1.1 HupR Protein
Karen Davies and Louise Johnson
Using this technique, large, well ordered two-dimensional crystals of the
histidine tagged-HupR protein, a transcriptional regulator from the photosynthetic
bacterium Rhodobacter capsulatus, were obtained (Vénien-Bryan
et al., 1997). HupR (53KDa) is a response regulator of the NtrC subfamily;
it activates the transcription of the structural genes hupSLC of the uptake hydrogenase.
Many bacterial signalling pathways, particularly those requiring a fast response
to external environmental changes, involve two-component systems. These systems
consist of two proteins: a sensory autophosphorylating protein kinase (histidine
kinase) and a partner response regulator. The response regulators are classified
by the presence of a homologous receiver domain within the protein and can
be subdivided into families based on the number of other functional domains
in the protein. For example, the CheY family contains only the receiver domain, whereas
the OmpR family contains a DNA-binding domain in addition to the receiver domain.
All response regulators are activated/inactivated by the transfer of a phosphoryl
group from the partner histidine kinase to the aspartate residue in the receiver
domain, but their downstream function depends on the other domains in the
protein. The structures of a number of proteins belonging to the CheY and
OmpR families have been solved to high resolution, allowing an insight into the
structural mechanisms underlying the function of these two families of response regulators.
A third family of response regulators called the NtrC family is less well
known structurally. It contains three functional domains: an N-terminal receiver
domain, a central ATPase domain and a C-terminal DNA-binding domain. The family
is a group of enhancer-binding proteins that activate the transcription of
enzymes involved in bacterial metabolism e.g. nitrogen fixation, and chemolithotrophic
metabolism. HupR is a member of the NtrC family of response regulators. We
aim to improve the structural understanding of the NtrC family by producing
a medium resolution, 3D reconstruction of HupR using electron microscopy.
A projection map of the full-length protein at 9Å resolution was obtained
by electron cryo-microscopy and image analysis of frozen-hydrated two-dimensional
crystals. The crystals have a p6 plane group with unit cell dimensions
of a = b = 111.6 Å, γ = 120.4° (Figure 1; Vénien-Bryan et al., 2000).
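To relate these cell dimensions to the positions of the diffraction spots measured in the Fourier transforms of the images, the reciprocal lattice can be computed directly from the real-space basis vectors. The short Python sketch below is only an illustration using the cell parameters quoted above; it is not part of the image-processing software actually used in this work.

    import numpy as np

    # Real-space unit cell of the HupR 2D crystal (values from the text).
    a_len = b_len = 111.6                 # cell edges in Angstroms
    gamma = np.radians(120.4)             # angle between the a and b axes

    a = np.array([a_len, 0.0])            # place a along x
    b = np.array([b_len * np.cos(gamma), b_len * np.sin(gamma)])

    # Reciprocal basis (crystallographic convention: a*.a = 1, a*.b = 0, ...).
    real_basis = np.array([a, b])
    a_star, b_star = np.linalg.inv(real_basis).T

    # Spacing of the innermost (1,0)/(0,1) reflections in reciprocal Angstroms.
    print(np.linalg.norm(a_star), np.linalg.norm(b_star))   # about 0.0104 1/A each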
These results provide the first structure at medium resolution of a whole
transcription factor, HupR from the NtrC family. The 3D structure of this
protein is being studied. By tilting the grid within the microscope it is
possible to get different projections views of the protein which are then
combined to produce a medium resolution 3D reconstruction to about 8Å.
Figure 1. A projection map with p6 plane group symmetry.
As well as producing a 3D reconstruction of HupR we also hope to produce
a 3D reconstruction of HupR bound to its enhancer binding site and one of
phosphorylated HupR. These reconstructions will give information about any
conformational change that may occur during the regulatory process of transcription.
Work is currently being carried out to characterise the binding of the promoter
site to HupR by gel filtration and fluorescence anisotropy. Previous work
by collaborators in Grenoble (Annette Colbeau) suggests that HupR binds to
a palindromic motif of 3 bp with at least 80 bp downstream of this region.
Preliminary studies show that the palindromic sequence on its own is not sufficient
for binding. Work is in progress to locate the shortest oligonucleotide sequence
sufficient to bind to HupR for use in crystallisation studies.
1.1.2 Other proteins
The recombinant His-tag vascular endothelial cadherin has been crystallised
in 2D on a Ni-lipid monolayer. A projection map at 20 Å resolution has been produced.
This work has been done in collaboration with Rana al-Kurdi, Elizabeth Hewat
and Daniel Gulino, IBS Grenoble, France.
1.2 Crystallization of membrane proteins at the air/water interface.
In 1971 it was first shown by Fromherz (Fromherz 1971) that an ordered arrangement
of protein can be generated underneath a lipid monolayer. The crystallization
on lipid layers is an elegant method because it is possible to work with very
dilute protein solutions and still generate a high local concentration of
protein constrained in two dimensions. Nonetheless the proteins retain sufficient
mobility to allow the organization into crystalline two-dimensional arrays
by lateral diffusion.
Application of surface crystallization on lipid monolayers to membrane proteins
is complicated by the tendency of detergents to solubilize monolayers of regular
lipids. To avoid solubilization of the lipid monolayer by the detergent, L.
Lebeau and C. Mioskowski in Strasbourg developed a new class of partially fluorinated
lipids. These lipids, when spread at the interface display a high resistance
toward solubilization by detergents. As a test case for the crystallization
of a membrane protein using lipid monolayers served the H+-ATPase
from the plant Arabidopsis thaliana which was expressed in yeast with
a C-terminal His-tag (Jahn et al., 2001). The plasma membrane H+-ATPase
(AHA2) from A. thaliana is a single-subunit integral membrane protein
with a molecular mass of 104 kDa. It belongs to the family of P-type transport
ATPases and shows large conformational changes during the pumping cycle, which are important
for function and regulation (Kühlbrandt et al. 2002). P-type ATPases
are widely distributed biological energy transducers that convert the free
energy resulting from ATP hydrolysis into an electrochemical ion gradient
across the membrane. AHA2 became the first membrane protein to be crystallized
with the new method using fluorinated lipidic monolayers (Lebeau et al., 2001).
We are currently in the process of applying this technique to other His-tagged
membrane proteins. An ellipsometer has been installed in order to follow
the absorption process of the protein to the lipid monolayer. This non-invasive
method is sensitive to the density and thickness of the interface layer and
therefore especially useful for investigation of surface monolayer behaviour
(Vénien-Bryan et al., 1998).
1.3 Amphipols - a novel family of surfactants.
In collaboration with JL Popot, IBPC Paris.
Amphipols are a novel family of surfactants (Tribet et al., 1996).
They are composed of a strongly hydrophilic polymeric backbone which is grafted
with hydrophobic chains, making them amphiphilic. These amphiphilic polymers
bind to hydrophobic surfaces of proteins in a non-covalent, quasi-irreversible
manner. Membrane proteins complexed by amphipols are soluble in the absence
of detergent or free amphipols and are generally more stable than in a detergent
solution. Our objective is to develop the application of amphipols to structural
biology, in particular to membrane protein 2D crystallization.
Electron microscopic images of fluorinated lipid monolayers under which
cytochrome bc1/amphipol complexes had been injected have
yielded evidence for protein adsorption. The next step is to find conditions
that promote the crystalline arrangement of these complexes.
1.4 DNA scaffolding
Figure 2. Negatively stained EM image of RuvA
Louise Johnson, in collaboration with Jonathan Malo, David Sherratt and
The aim of this project is to develop a technique for protein structure
determination using self-assembled DNA templates to form engineered 2D crystals.
A pilot study has been made using RuvA; this DNA-binding protein is a component
of the Ruv resolvosome that binds and processes Holliday junction intermediates
in homologous recombination. A synthetic protein/DNA crystal has been
produced, and a 2D projection map has been calculated at 23 Å resolution
(Figure 2). We hope to extend this method
of crystallization to any protein, soluble or membranous.
Fromherz P. 1971. Electron microscopic studies of lipid protein films. Nature
Henderson, R., et al., Model for the structure of bacteriorhodopsin
based on high-resolution electron cryo-microscopy. (1990) J. Mol. Biol.
Jahn T, Dietrich J, Andersen B, Leidvik B, Otter C, Briving C, Kühlbrandt
W, Palmgren MG. 2001. Large Scale Expression, Purification and 2D Crystallization
of Recombinant Plant Plasma Membrane H(+)-ATPase. J Mol Biol 309(2):465-476.
Kühlbrandt, W., et al., Atomic model of plant light-harvesting complex
by electron crystallography. (1994) Nature, 367, 614-621.
Kühlbrandt W, Zeelen J, Dietrich J. 2002. Structure, mechanism, and
regulation of the neurospora plasma membrane H+-ATPase. Science 297(5587):1692-1696.
Lebeau L, Lach F, Venien-Bryan C, Renault A, Dietrich J, Jahn T, Palmgren
MG, Kühlbrandt W, Mioskowski C. 2001. Two-dimensional crystallization
of a membrane protein on a detergent-resistant lipid monolayer. J Mol Biol
Nogales, E., et al., Structure of the alpha beta tubulin dimer by electron
crystallography. (1998) Nature 391 199-203.
Tribet C, Audebert R, Popot JL. 1996. Amphipols - Polymers That Keep Membrane
Proteins Soluble in Aqueous Solutions. Proceedings of the National Academy
of Sciences of the United States of America 93(26):15047-15050.
Uzgiris, E. & Kornberg, R. Two-dimensional crystallization technique
for imaging macromolecules with application to antigen-antibody-complement
complexes. (1983). Nature, 301, 125-129.
Vénien-Bryan, C., et al. Structural study of the response
regulator HupR from Rhodobacter capsulatus. Electron microscopy of two-dimensional
crystals on a Nickel-chelating lipid. (1997) J. Mol. Biol., 274, 687-692.
Vénien-Bryan C, Lenne PF, Zakri C, Renault A, Brisson A, Legrand
JF, B. B. 1998. Characterization of the Growth of 2d Protein Crystals on
a Lipid Monolayer by Ellipsometry and Rigidity Measurements Coupled to Electron
Microscopy. Biophysical Journal 74(5):2649-2657
Vénien-Bryan, C. et al. Projection structure of a transcriptional
regulator HupR determined by electron cryo-microscopy. (2000) J. Mol. Biol,
2. Electron microscope studies: Single particle analysis.
In addition to the well established techniques of electron crystallography
and helical three-dimensional reconstruction which can be applied to periodic
or symmetric structures, new powerful methods for single particle analysis
have been developed in the past two decades. The main difference is the way
averaging is performed. Whereas, in electron crystallography or helical reconstruction,
the information for several hundreds or thousands of particles is averaged
directly in a Fourier transform and the reconstruction of the object is obtained
by inverse Fourier or Fourier-Bessel transformation, single particle analysis
works with a large number of individual images of the object and combines
individual image elements. The advantage of this technique is that it is not
necessary to obtain a highly regular arrangement of the object. The method
has been successfully used to carry out 3-D reconstruction of large symmetric
(e.g. the chaperonin) or asymmetric macromolecular assemblies (e.g. ribosomes).
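As a toy illustration of the idea described above, combining a large number of individual particle images so that averaging reinforces the common signal, the following Python sketch aligns a stack of 2D images to a reference by FFT cross-correlation and averages them. It is deliberately minimal and makes simplifying assumptions (translational alignment only, no CTF correction, classification, or angular assignment); it is not the workflow of SPIDER or any other package mentioned here.

    import numpy as np

    def align_and_average(images, reference, n_iter=3):
        """Translationally align a stack of 2D images to a reference by
        cross-correlation and return the refined average."""
        avg = np.asarray(reference, dtype=float)
        for _ in range(n_iter):
            ref_ft = np.conj(np.fft.fft2(avg))
            aligned = []
            for img in images:
                # Cross-correlation computed in Fourier space; its peak gives
                # the shift of this particle relative to the current average.
                cc = np.fft.ifft2(np.fft.fft2(img) * ref_ft).real
                dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
                aligned.append(np.roll(np.roll(img, -dy, axis=0), -dx, axis=1))
            avg = np.mean(aligned, axis=0)   # averaging suppresses random noise
        return avg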
2.1 Phosphorylase kinase
Ed D. Lowe and Louise Johnson
In collaboration with N. Boisset (Paris) and G. M. Carlson (Kansas).
Figure 3. Wire representation of the phosphorylase kinase decorated with glycogen phosphorylase b.
Phosphorylase kinase integrates signals from hormonal messengers and neuronal
stimuli to produce rapid activation of glycogen phosphorylase and subsequent
degradation of glycogen stores either to provide energy to sustain muscle
contraction or, in the liver, to provide other tissues such as the brain with
glucose. It is one of the most complex kinases, comprising an (αβγδ)4 assembly
of subunits with a total molecular weight of 1.3 x 10^6. The α and β
subunits are regulatory; the γ subunit is the catalytic subunit; and the
δ subunit is identical to calmodulin and confers calcium sensitivity.
A 3D structure of the holoenzyme PhK has been produced at medium resolution
by electron microscopy and the random conical tilt method using the set of
programs SPIDER (Frank, 1996). The 222 symmetric structure shows a
butterfly-like structure 270 Å x 225 Å x 160 Å in overall
dimensions with two wing-like lobes connected by two oblique bridges. Comparison
of the PhK model with previous immunoelectron microscopy studies has allowed
the identification of the α regulatory subunits at the tips of the lobes and the
β regulatory subunits at a position on the lobes closer to the cross-bridges.
Structural studies of PhK alone and of PhK decorated with GPb have revealed
the position of the catalytic γ subunit of the phosphorylase kinase to be on the
side of the lobes close to the ends (Figure 3; Vénien-Bryan et al., 2002). The PhK/GPb
model provides an explanation for the formation of hybrid GPab intermediates
in the PhK catalysed phosphorylation of GPb, as previously observed by other investigators.
We would like to pursue this structural work of Phk at higher resolution
using cryo electron microscopy.
2.2 Other Proteins
Louise Johnson, in collaboration with Lori Passmore and David Barford.
The 3D structure of APC is being studied. In this project we would
like to identify and position the various subunits of the APC using a NTA-Nickel labelling approach.
Frank, J. Three-dimensional electron microscopy of macromolecular assemblies.
1996, San Diego: Academic Press
Vénien-Bryan, C., Lowe, E. M., Boisset, N., Traxler, K. W., Johnson, L. N., Carlson, G. M.
Three-dimensional structure of phosphorylase kinase at
22 Å resolution and its complex with glycogen phosphorylase b (2002). Structure
Last updated: 14-MAY-2003 17:03 | 1 | 2 |
<urn:uuid:b58875a9-fb52-455e-b82d-8224cb543de8> | - During a sidereal day, an astronomical object will cross the meridian twice: once at its upper culmination, when it is at its highest point as seen from the earth, and once at its lower culmination, its lowest point. Often, culmination is used to mean upper culmination. — “Culmination - Wikipedia, the free encyclopedia”,
- Culmination definition, the act or fact of culminating. See more. — “Culmination | Define Culmination at ”,
- Find detailed product information for basket ball and other products from Sichuan Culmination Printing Supplies Co., Ltd. on . — “basket ball - Detailed info for basket ball,basket ball”,
- Definition of culmination from Webster's New World College Dictionary. Meaning of culmination. Pronunciation of culmination. Definition of the word culmination. Origin of the word culmination. — “culmination - Definition of culmination at ”,
- Jon Young & J. Cash are legendary indie rap artists from Orlando, Florida made popular by the singles City I Luv, Just Chill and Post Up Feat. Lil Boosie. Check out the Official Site for news, tour Dates, audio and videos. This entry was posted in Buy Music and tagged jon young, the culmination. — “Jon Young "The Culmination" | The Official Jon Young & J”,
- We found 31 dictionaries with English definitions that include the word culmination: culmination: Compact Oxford English Dictionary [home, info] culmination: V2 Vocabulary Building Dictionary [home, info]. — “Definitions of culmination - OneLook Dictionary Search”,
- Definition of Culmination in the Online Dictionary. Meaning of Culmination. Pronunciation of Culmination. Translations of Culmination. Culmination synonyms, Culmination antonyms. Information about Culmination in the free online English. — “Culmination - definition of Culmination by the Free Online”,
- Each of the examples given below has been the culmination of explicit effort to address the needs of the widest possible range of users. The protest was the culmination of a series of public meetings all over the country in support of the Bill [ 2 ]. — “Use culmination in a sentence | culmination sentence examples”,
- The NCCA Men's Basketball Final Four is a culmination of March Madness, when 65 basketball teams representing a cross-section of conferences small and large will compete for the title of national champion. The 2010 Final Four will again be. — “Final Four TV Schedule”,
- Definition of culmination in the Legal Dictionary - by Free online English dictionary and encyclopedia. What is culmination? Meaning of culmination as a legal term. What does culmination mean in law?. — “culmination legal definition of culmination. culmination”, legal-
- culmination (plural culminations) (astronomy) The attainment of the highest point of altitude reached by a heavenly body; passage across the meridian; transit. Retrieved from "http:///wiki/culmination". — “culmination - Wiktionary”,
- Definition of word from the Merriam-Webster Online Dictionary with audio pronunciations, thesaurus, Word of the Day, and word games. — “Culmination - Definition and More from the Free Merriam”, merriam-
- Culmination Manufacturers & Culmination Suppliers Directory - Find a Culmination Manufacturer and Supplier. Choose quality Culmination Manufacturers, Suppliers, Exporters at . — “Culmination-Culmination Manufacturers, Suppliers and”,
- Success is usually the culmination of controlling failure. We toured that record for a year, which turned out to be the culmination of ten years of being constantly on the road. — “Definition of Culmination”,
- Download Young Hootie - The Culmination Mixtape This mixtape showcases Young Hootie at his best; with lyrical variety and an ability to hold your attention throughout. — “Young Hootie - The Culmination | ”,
- HafutotaJE: Sucker Punch looks like the Culmination of everything awesome wrapped into a tortilla-thin movie. Please keep George in ur prayers as he faces the Culmination of boot camp till Sat.am. — “Culmination - Define Culmination at WordIQ Online Dictionary”,
- There's more in store for ! I have some posts in the pipeline, including a is powered by WordPress 3.0.1 and delivered to you in. — “”,
- Cul·mi·na·tion n. [Cf. F. culmination ] 1. The attainment of the highest point of altitude reached by a heavenly body; passage. — “Culmination: Information from ”,
- This page is for universe building purposes for a work-in-progress book. — “Culmination - Wikia Entertainment”,
- Culmination, Flight Simulator X History - At the culmination of a poisonous political season, Senator Murkowski appears to have won her write-in 2010-11-24. — “Culmination”,
related images for culmination
related videos for culmination
- USC Ski & Snowboard - Powder Edit 2010 This video is a culmination of powder shots from all of 2010 thus far. It has been one of the snowiest seasons Mammoth has seen in years. We are usually seen in the park but with a season like this, we've gotten more epic powder days than we have sunny park days. All of the shots are from lift accessed terrain at Mammoth Mountain and June Mountain. Shot entirely on GoPro HD. Songs: The Radio Dept. - Heavens on Fire Wolf Gang - The King and All of His Men (Kid Adrift Remix)
- MKM2 - 13 - the culmination of a full month's worth of Santa movies Didn't have time to make a Nezumi man video yesterday so this is all you get! Heck yeah! A rather subdued, rather quiet mario video!! http
- Knob Creek: Machine Gun Night Shoot October 2010 (HD) This event is the culmination of THE BEST GUN SHOW you could ever attend. You are viewing the first 6 1/2 minutes of the Knob Creek Machine Gun Night Shoot. Fully automatic weapons including the most modern M16 variations, AK47, .50 BMG, minigun, and classics from every major military conflict of the past century. Plus a few flares! Targets included propane tanks, automobiles, appliances, and large drums of fuel. This "outburst" ran for about 10 minutes, but the intensity had pretty much subsided at the 7 minute mark. Several shooters either did not reload, and/or they were no longer shooting tracers and incendiary ammo. The main range where the video was filmed is approximately 350-400 yards in length, and has a huge hill as a backstop. Due to the dry ground conditions and volume of smoke, you will notice the view becoming much more cloudy beginning around the 2:00 mark. I was standing several rows back in the crowd, well beyond the left side of the shooting line. It is nearly impossible to get a clear unobstructed view. The majority of the shooting line is under a shed, and that area is crowded with security and Knob Creek support staff. Plus, factor in thousands of attendees, many holding their cameras high over their heads (as I did) to get the best possible view. This event is held twice each year, in April and October. The October 2010 event was the first I had ever attended, and I was immensely impressed. VERY IMPORTANT TO NOTE......attendees for these events make ...
- The Mid-East in Prophecy—the Culmination (Part 3) The Bible foretells that the "Holy Land"—and the Middle East—will again be invaded by foreign armies. What does this mean? Where is it leading? Who should care? To view more World to Come videos, visit:
- F-16 B-Course Class 08-FBC Graduation Video The culmination of 4 years of school, 2 years of Pilot Training, and 6-9 months of flying God's gift to aviation, the F-16C Viper. All flying is done by the 14 men of the 61st Fighter Squadron class 08-FBC and takes place over the skies of Phoenix, AZ.
- No Soul To Take - Culmination NS2T Performing Culmination! Check us out on /nosoultotake for info about our band, and our show dates! Once we hit 1500 fans on Facebook we'll release another song!! Subscribe so you can hear it once we release it! No Soul To Take is a metalcore/*** from New Rochelle, NY, about 20 minutes from New York City. The band consists of 2 guitarists, a bassist, a drummer, and a singer/screamer. The band was formed in September of 2009 and all the members are either juniors or seniors in highschool. Although young, the band has already established itself in the metalcore genre around New York. With many shows already played, and many more coming over the next months and a tour in the summer along the east coast, NS2T is a band that brings explosive, fast paced riffs and drum beats along with melodic riffs and heavy breakdowns. Playing shows everywhere from school functions, to opening up for bigger bands in New York City, and the surrounding area, No Soul To Take gives a show that many people will remember. From the music to the performance that's given at shows, every effort is put into the band to make it stand out from others and establish itself as a band that can quickly overtake the metalcore genre, and at a very young age. *Lyrics* I've been blinded by violence. Let none survive. Two sides, who's right? After this war (after this war!). Nothing will remain. I am so sick of this. Things seem so hopeless. Why do I persist? I taste the blood. I taste the ...
- 2009 McDonald's All-American Game (Miami, FL) Article via The ultimate culmination to a great high school career is a bid to play in the McDonalds All-American game. On Wednesday evening 24 of the top high school players were officially announced for the game. Nine of s top 10 prospects were selected to the game, the only one that didnt make it John Wall wasnt eligible. Going further, of the 24 prospects invited to participate, 19 of them are ranked within s top 24, with three not on the voting ballot. The McDonalds All-American festivities will take place in Miami this year. The Jam Fest, which consists of the slam-dunk, skills and three-point competitions, will take place at 9:00 pm at the BankUnited Center on March 30th. Two days later the boys game will be at 8:00 pm at the BankUnited Center, following the girls game. East Derrick Favors No. 1 Lance Stephenson No. 7 DeMarcus Cousins No. 6 Kenny Boynton No. 8 Ryan Kelly No. 11 Alex Oriakhi No. 14 Dominic Cheek No. 15 Dante Taylor No. 17 Dexter Strickland No. 18 Milton Jennings No. 23 Maalik Wayns No. 26 Peyton Siva No. 54 West John Henson No. 3 Xavier Henry No. 4 Renardo Sidney No.5 Abdul Gaddy No. 9 Avery Bradley No. 10 Wally Judge No. 16 Mason Plumlee No. 19 Michael Snaer No. 22 Tommy Mason-Griffin No. 24 Keith Gallon No. 37 Travis Wear No. 41 David Wear No. 42
- Dance Routine for 5th Grade Culmination (Led by Diavolo Instructors) - Mr. Rojas's Class
- Burning Ravan This is the culmination of the Ram Leela when an effigy of Ravan is lit to symbolize the victory of good over evil.
- A devotee's culmination with his Lord ( Movie Annamayya Song antharyami ) with english subtiles
- Interfaith Project Culmination: 'Achieving Unity and Peace Amidst Religious Diversity Using Songs' This video captures the culmination of the project "Developing an Institutional Interdenominational Faith-Formation Program Using the Bottom-Up Approach." The project adopts what the team terms as "alternative packaging" -- using performing and visual arts in facilitating dialogues and expressing ideas. There are 13 student-participants in the project representing 8 countries (China [Tibet], Iran, Italy, Jordan, Korea, Nigeria, Philippines, USA) and 4 religions [Buddhism, Islam, Protestantism, Roman Catholicism). The project is handled by Silliman University on a grant from the United Board for Christian Higher Education in Asia.
- Sly Profit Mask Goggle System - Paintball Gateway The culmination of over 2 years of rigorous testing and development, SLY is proud to announce the debut of the SLY PROFIT mask goggle system. Featuring industry leading developments in goggle technology, the SLY PROFIT Goggle is truly in a class of its own. The SLY PROFIT Mask Goggle System accommodates all the demands of the modern tournament player in one stylish, affordable system. * Velvet lined, soft cell foam atop impact absorbent SBR foam for maximum impact absorption, bounce potential, and personal comfort. * Integrated double strap for goggle angle adjustment and maximum retention. * Durable and stylish nylon frame with co-molded soft TPR lower for maximum protection while maintaining bounce potential. * Lightning fast, patent-pending lens retention system. Fastest lens changes in the industry. * NEW 3M-Engineered sealed thermal gasket system. This new impenetrable white thermal seal offers the ultimate seal against moisture and paint seepage between lens layers. * Velvet lined, soft cell foam atop molded impact absorbent SBR foam earpieces for that soft-ear bounce with pillow-like comfort. These earpieces muffle your own screaming and allow for superb directional hearing. * Lenses and colors for every style and situation of play. Featuring gradient chromed lenses for those sunny days, with clear downward vision area for those pesky LCD screens and cell phones. * Molded TPR vents for optimum heat dissipation and sound penetration. * Every SLY PROFIT Goggle System ...
- Winston Churchill - Address To Joint Session Of Congress primeminister of great britain winston churchill
- 17 - kings of leon [lyrics in description] Oh she's only seven*** Whine whine whine, weep over everything Bloody Mary breakfast busting up the street Brothers fighting, when's the baby gonna sleep Heaving ship too sails away Said it's a culmination of a story and a goodbye session It's a tick of our time and the tic in her head that made me feel so strange So I could call you baby, I could call you, dammit, it's a one in a million Oh it's the rolling of your Spanish tongue that made me wanna stay Oh she's only seven*** Whine whine whine, weep over everything Bloody Mary breakfast busting up the street Brothers fighting, when's the baby gonna sleep Heaving ship too sails away Said it's a culmination of a story and a goodbye session It's a tick of our time and the tic in her head that made me feel so strange Said I could call you baby, I could call you, dammit, it's a one in a million Oh it's the rolling of her Spanish tongue that made me wanna stay I could call you baby, I could call you, dammit, it's a one in a million Oh it's the rolling of your Spanish tongue that made me wanna stay
- Reading FC - Established 1871 The year began with the culmination of an amazing coca cola championship season and a league record of 106 points. Then in August the Royals started life in the Premiership for their first taste of top flight football since the club were founded in 1871. Please watch and enjoy and if u have the time comment and rate : ) ure also welcome to subscribe and add me to ure friends. 13/10/08 #42 - Most Discussed (Today) - Sports - Australia #74 - Top Rated (This Week) - Sports - Australia #54 - Top Rated (This Week) - Sports - Australia #39 - Top Rated (This Week) - Sports - Australia 14/10/08 #66 - Most Discussed (This Week) - Sports - Australia #35 - Top Rated (This Week) - Sports - Australia #34 - Top Rated (This Week) - Sports - Australia 15/10/08 #32 - Most Discussed (This Week) - Sports - Australia #27 - Top Rated (This Week) - Sports - Australia 16/10/08 #26 - Most Discussed (This Week) - Sports - Australia #15 - Top Rated (This Week) - Sports - Australia 17/10/08 #21 - Most Discussed (This Week) - Sports - Australia #14 - Top Rated (This Week) - Sports - Australia #13 - Top Rated (This Week) - Sports - Australia 18/10/08 #19 - Most Discussed (This Week) - Sports - Australia #11 - Top Rated (This Week) - Sports - Australia
- Mike Leigh - "***" [climax] This is the culminating point of the movie "***" by the magnificent Mike Leigh, the director of the critically acclaimed "Vera Drake", and recently - "Happy-Go-Lucky". You might wanna try and give him a shot. "***" tells a story of a tramp with lambent wit, wandering through streets and meeting other fellas to talk about such subject as extinction of human race, apocalypse, and other sophisticated stuff. He also meets and enjoys women. This film is filled with all colors of life. Brilliant performance by David Thewliss.
- "Bruno" Striptease Riles Crowd Comedian Sacha Baron Cohen is shooting a new movie as "Bruno," a flamboyantly gay Austrian TV host. His on-stage striptease in Arkansas and same-*** kiss had spectators up in arms. Jeff Glor reports.
- A devotee's culmination with his lord....
- STATE OF SHOCK - Michael Jackson Tribute "State of Shock" This song is from the Jackson's 1985 Victory album. This was the biggest single, featured a duet with Michael Jackson and Mick Jagger. This is one of my favorite songs. My brother, the mighty Ponceman, and myself, SAP, used to perform this song every time it came on. We'd set up special performances of it for whatever audience we could assemble. I was 14, Ponce was 6. We grew closer to each other with every performance. Why? Because the music was that good. It made us move. It made us dance, sing, and feel like we could conquer the world. And we still perform it to this day. This video is the culmination of all those past performances. Dirty Jenny lays down all the guitar work. Me and Ponce sing, Quernzy on backing vocals. Raunchy T in effect with Windsock. Shot by Wenstrup. Special thanks to Ted, Laban, Chad, Roy, and Curtis' 1600. This was a labor of love. Read our blog on this video here: SPREAD LOVE, NOT HATE. PEEP OUR WEBSITE: FACEBOOK US: DIG OUR BLOG: TWITTER PONCE: TWITTER SAP:
- 97th Civil Affairs Battalion (Airborne) CULEX The 97th Civil Affairs Battalion (Airborne) 2009 Culmination Exercise (CULEX)
- Les Miserables - One Day More (1987 Royal Variety) The 1987 Royal Performance of Les Miserables - One Day More (Act I Finale). Cast includes: Rebecca Caine Simon Bowman Sue Jane Tanner Ken Caswell Clive Carter Michael Maguire Dudu Fisher Kaho Shimada (I apologize for the video quality - it was the best I could do.)
- I Can't Believe Its Not Shelx The epic culmination of literally minutes of practice at the 2009 Durham Intensive X-Ray Crystallography School Presentation evening.
- The Culmination - Starring James, Bosh & Wade Mix made by me and efe9park3r... Mix details LeBron James', Dwyane Wade and Chris Bosh's career. Also a preview for their future.
- Worlds first controllable robotic samara monocopter MAV, University of Maryland's Ulrich flyer The culmination of 3.5 years of research has led to controllable monocopter that can autorotate like a maple seed (Acer diabolicum Blume) and fly like a helicopter (hover and forward flight). The vehicle, invented at the University of Maryland, Aerospace Engineering Autonomous Vehicle Laboratory and Alfred Gessow Rotorcraft Center, is the smallest and most capable to date as it meets most of the challenges set forth by DARPA's nano-air-vehicle program.
- A&R:The Movie Life in A&R UK Major Label Part 1 - drop the creator and email at aandrthemovie@
- 2009 McDonald's All-American Game Preview Article via The ultimate culmination to a great high school career is a bid to play in the McDonalds All-American game. On Wednesday evening 24 of the top high school players were officially announced for the game. Nine of s top 10 prospects were selected to the game, the only one that didnt make it John Wall wasnt eligible. Going further, of the 24 prospects invited to participate, 19 of them are ranked within s top 24, with three not on the voting ballot. The McDonalds All-American festivities will take place in Miami this year. The Jam Fest, which consists of the slam-dunk, skills and three-point competitions, will take place at 9:00 pm at the BankUnited Center on March 30th. Two days later the boys game will be at 8:00 pm at the BankUnited Center, following the girls game. East Derrick Favors No. 1 Lance Stephenson No. 7 DeMarcus Cousins No. 6 Kenny Boynton No. 8 Ryan Kelly No. 11 Alex Oriakhi No. 14 Dominic Cheek No. 15 Dante Taylor No. 17 Dexter Strickland No. 18 Milton Jennings No. 23 Maalik Wayns No. 26 Peyton Siva No. 54 West John Henson No. 3 Xavier Henry No. 4 Renardo Sidney No.5 Abdul Gaddy No. 9 Avery Bradley No. 10 Wally Judge No. 16 Mason Plumlee No. 19 Michael Snaer No. 22 Tommy Mason-Griffin No. 24 Keith Gallon No. 37 Travis Wear No. 41 David Wear No. 42
- Giffin Valiant The Giffin Valiant is the culmination of Roger Giffin's decades of experience building custom guitars for living legends like Eddie Van Halen, Jimmy Page, and Malcom Young, among others, both under his own name and at the Gibson Custom Shop. The Giffin Valiant is based on a single-cut, mahogany-body design, but with very unique contours that make it a light and supremely comfortable instrument to strap on. It's appearance is quite distinctive, enhanced further by cream binding, Giffin's vertical line inlays, and a stunning flame maple top. The Giffin Valiant 's playability and tone are unmatched, with Amalfitano Fullbuckers, a TonePros AVT Wrap Around Bridge, Grover Tuners, and a 12" radius rosewood fretboard. - -
- Elton John & Leon Russell - Hey Ahab (Live on Letterman 02-09-2011) [HD 1080p] The Union marks the culmination of a mutual musical adoration that began in the late 1960s.
- The Culmination of Power - Psi-Ops: The Mindgate Conspiracy Psi-Ops™: The Mindgate Conspiracy www.absolute-video- ©2004 Midway®
- YTPMV: The Culmination Of Layered Soccer Between HoZKiNZPooP And lnsector (AKA Bare Bear) Reupload of a HoZKiNZPooP video.
- Elton John & Leon Russell - Love is Dying (Live on Regis and Kelly 11-24-2010) [HD] The Union album marks the culmination of a mutual musical adoration that began in the late 1960s, ahead of Elton's US debut.
- President Obama Announces Financial Regulation Reform As the culmination of a months-long process in which the President consulted with the most expert and experienced regulators, leaders in Congress, and his entire economic team, he announces his vision for desperately needed financial regulatory reform. A major brick in the new foundation for Americas economy. June 17, 2009. (Public Domain)
- Roxxi Botox and the art of holiday dressing This is a culmination of the essence of the holiday season; regardless of how one feels about the holidays, anyone can easily fall back on a number of these handy tips and get through it !! make up songs, dress up, or smile even if only on the outside ! xo
- Danny Noriega's Journey on American Idol Culmination of mish-mashed clips highlighting Danny's fabulous journey (that ended way too soon) on American Idol. Sorry if it's a bit long...Danny just steals the show It strictly consists of his journey that was televised. This includes his San Diego Audition, Hollywood Week, and the Top 24, 20, and 16 performances. I included a very shortened version of his Tainted Love performance, and most of the after-performance critique. I also used a lot of his elimination video that was shown on American Idol. The last audio snip in the credits was taken from one of his Stickam chats.
- DMA + BITY = GMT Here's the culmination of my time at the DMA. This is the Good Morning Texas segment that featured our cast the lifecast at the DMA.
- ISRAEL CANNAN- ONE FINE DAY This film clip shows the culmination of Israel Cannan's day's spent traveling around his country, Australia.... Playing music in the streets of all major capital city's and rural townships along the way. From the alpine ranges of New south wales & Victoria through to the barren landscape of the Nullabor Plains.... The untouched coastline of South west Australia up through the Kimberly's. Making friends and discovering a brand new way of creating music along the way. Israel took "the road less traveled" and here lies a snapshot into that Journey. This is "ONE FINE DAY"
- 1970 Chevy c10 buildup My 1970 c10 buildup, culmination of 1 year of work
- Sara Orange Tip Butterfly Life Cycle 720p HD The culmination of 2 years-worth of rearing and filming, this collection of scenes from the Sara Orange-Tip (Anthocaris Sara) Butterfly's life cycle has finally been released! This beautiful white, orange, and black butterfly is found throughout the western United States, and is one of the first butterflies to emerge in spring. It is easily mistaken for the more common Cabbage White Butterflies and other similar species unless one is able to get close enough to see the bright orange wingtips. As Winter ends and the days slowly begin to warm, these butterflies begin emerging from their long winter sleep as pupae, and start looking for mates. The females seek out plants in the Mustard family and lay their eggs on the plant's stems. After a week or so, the eggs develop and hatch, after which the tiny larvae, not much longer than 1/8" eat their eggshells (shown here in time lapse) for some extra protien to start the caterpillar phase of their lives.. Over the next few weeks the caterpillar grows to maturity, eating mostly the plant's flower heads and seed capsules. At some point late in the caterpillar's life, it makes a "decision" based on environmental conditions, whether or not to develop immediately into an adult butterfly, or hibernate (known as "diapause") as a chrysalis (pupa) for the winter. It then searches for a suitable spot on a plant stem straps itself in using a silken threads and pad that it spins from glads near its mouth, where it then pupates (also shown ...
- TF2: Culmination of Slaughter No place like Slaughterhouse. Don't be a stranger, drop by sometime. MAP LIST: cp_corporation_b3 cp_castle4 dm_biosphere_v3 pl_badwater cp_soar ______________________________ SH TEAM FORTRESS 2 Servers ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ SLAUGHTERHOUSE [ LOS ANGELES, CA ] FASTRESPAWN / CUSTOMMAPS 188.8.131.52:27015 or tf2:27015 SLAUGHTERHOUSE [LOS ANGELES, CA] 24MAN/FASTRESPAWN/CUSTOMMAPS 184.108.40.206:27016 or tf2:27016
- Eluvium - In Culmination From the album "Similes" (2010). Disclaimer: I am in no way associated with the making of this music, nor am I taking credit for this song. © Matthew Cooper (Eluvium), Temporary Residence Limited
- aphs funk band may 2001 2001: a funk odyssey was the culmination of a year's worth of getting funk'd. with a few minor blemishes ("not singing in ANY key") and a jumpy 1st valve slide the performance was an explosion that no one expected. "2001" was arranged on a korg keyboard over a 2 liter bottle of mountain dew in a basement while "soul man" was full of choreography that made james brown look amateur. overall it was epic. enjoy
- Veracity's Vangaurd Tribute - MMO Raiding and More This video is the culmination of four years in the best MMO on the market, shared with the most amazing people one could ever hope to find. Thanks everyone for the great memories and for letting me abuse/wipe the raid to make this happen. It's been a wild ride... all the best!
twitter about culmination
Blogs & Forum
blogs and forums about culmination
“Facebook is a social utility that connects people with friends and others who work, study and live around them. People use Facebook to keep up with friends, upload an unlimited number of photos, post links and videos, and learn more about the”
— NetworkedBlogs on Facebook | Cohesive Culmination,
“This is the 18th and final installment of A Catechism of Enlightenment–a serialized commentary on "A Method Of Enlightening A Disciple" from Shankara's”
— The Culmination of Evolution | The Atma Jyoti Blog,
“I have long felt that the corps, never a truly committed partner on ecosystem restoration, has purposely kept the navigation interests away from the environmental crowd, fearing that a previously-unlikely parthership among these widely divergent”
— Culmination of the 2009 high water trip by the Mississippi,
“Ejaculating represents the culmination of the male ***ual act, whether it's achieved through intercourse or masturbation. It's not only the ultimate pleasurable feeling that all men desire but also a basic instinctual drive created by nature to”
— Delayed Ejaculation Problems | Penis Resources Blog, penis-
“The Top 100 Prospects list is the culmination of our offseason prospect coverage, which begins with our reviews of the top talent in each minor league”
— Baseball America | Blog | Baseball America Prospects Blog,
“Conan the Conqueror Part 5: The Culmination. Welcome to the final installment of the Conan the Conqueror sculpting blog, chronicling the making of this sculpture of Conan on horseback from the stunning painting by Frank Frazetta. I hope you found this blog interesting and useful in understanding”
— Clayburn Moore's Web Log,
“ Fastest Growing Social Network Site " Blog " the culmination of the impenetrable mystery It seemed the culmination of the impenetrable mystery which for years had shrouded the place. Shortly after the arrival of the”
— Fastest Growing Social Network Site " Blog,
“Home minister P Chidambaram on Wednesday stepped in with calls for a Addressing a press conference here on Wednesday, the home minister said the”
“momAgenda is an original day planner created for mothers, by a mother.This stylish and functional planner is guaranteed to help eliminate the chaos and restore order to the scheduling challenges that go with parenting”
— Day Planners for Moms | The Culmination of Our School Project,
“My theory will involve more than just another idea leaned towards Earth and the plethera of life upon HER. I say HER because She is”
— A cyclic culmination : Philosophy, disclose.tv
related keywords for culmination
- culmination definition
- culmination synonym
- culmination in a sentence
- culmination blouse
- culmination point
- culmination to
- culmination quotes
- culmination event
- culmination means
- culmination of the ages
- culmination definition science
- culmination definition geology
- culmination definition literature
- definition of culmination | 1 | 10 |
<urn:uuid:cc17965c-2895-48a3-a2d1-986b421fd4be> | Of the instruments we discuss here, the ASI and CIDI make up the common assessment battery of the National Institute on Drug Abuse (NIDA) Clinical Trials Network (CTN), which conducts studies to evaluate evidence-based treatment interventions in widely diverse community-based treatment settings and patient populations nationwide. Prior to adopting these measures, a CTN workgroup evaluated many measures for reliability, validity, efficiency, and suitability for widespread use in nonresearch settings.
RELIABILITY AND VALIDITY
An instrument’s reliability and validity are critical to its value. All the instruments discussed in this article are highly reliable and valid, but the extent of their reliability or validity may differ in particular situations.
The question of reliability is: Will users of the instrument consistently reach the same diagnostic conclusions? A straightforward and rigorous way to answer this question is the test-retest method. Two or more clinicians use the instrument to conduct independent assessments of the same patient, and the degree of correlation among their findings is calculated. The standard statistical measure of the degree of the clinicians’ agreement, the kappa coefficient, equals 1 if agreement is complete and less than zero when agreement is no greater than chance might produce (Cohen, 1960
). Generally, a test-retest kappa score of 0.75+ indicates excellent reliability; 0.60 to 0.74 is good; 0.40 to 0.59 is fair; and less than 0.40 is poor (Fleiss, 1981).
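For readers who want to see the arithmetic behind the kappa coefficient described above, the following short Python sketch computes it from two raters' categorical diagnoses using the standard observed-versus-chance-agreement formula; the diagnostic labels in the example are invented for illustration.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters' judgments on the same patients."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement from each rater's marginal category frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                       for c in set(rater_a) | set(rater_b))
        return (observed - expected) / (1 - expected)

    # Two clinicians independently assessing the same ten patients.
    a = ["dep", "dep", "none", "abuse", "dep", "none", "none", "dep", "abuse", "none"]
    b = ["dep", "dep", "none", "abuse", "none", "none", "none", "dep", "abuse", "abuse"]
    print(cohens_kappa(a, b))   # 1 = complete agreement; 0 or below = no better than chance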
The question of validity is: Does the instrument truly and unambiguously assess the condition it is designed to evaluate? This question has more dimensions than the estimation of reliability; accordingly, validity is estimated with a number of methods.
The widely used SCID was the first standardized psychiatric interview based on the DSM and has been updated to correspond to the most current DSM criteria. The AUDADIS, PRISM, and SSADDA were specifically designed for substance abuse research, but are adaptable for clinical purposes, too.
Brief descriptions of these instruments follow. For a summary comparison of their properties, see the summary table below.
Characteristics and Selected Assessment Categories of Six Structured Assessment Instruments
The Addiction Severity Index
The ASI (McLellan et al., 1980
) screens for problems and impairments that commonly accompany drug abuse and dependence. These include, among others, interpersonal difficulties with family, friends, and co-workers; medical conditions such as hepatitis B and C, HIV/AIDS, sexually transmitted diseases, alcoholic liver disease, acute myocardial infarction, pneumonia, and metabolic and endocrine complications (Kresina et al., 2004
; Mertens et al., 2003
); and legal troubles. The ASI provides information that clinicians can use to address these problems with appropriate interventions or referrals.
The semi-structured ASI evaluates patients’ functioning and lifetime experiences in seven domains: (1) medical conditions, (2) employment/support, (3) use of alcohol and drugs, (4) legal issues, (5) family history, (6) family/social relationships, and (7) psychiatric disorders. The administrator asks the patient to rate his or her level of distress in each domain during the past 30 days from 0 (none) to 4 (extreme) and independently rates the patient’s need for treatment in each domain from 0 (none necessary) to 9 (treatment needed to intervene in a life-threatening situation). Finally, the administrator calculates a composite score from a subset of the distress and treatment need responses. This score becomes the basis for treatment planning. Altogether, the ASI takes approximately 45 to 60 minutes to administer, plus 10 to 20 minutes for post-interview scoring.
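The published ASI composite scores are derived with domain-specific formulas and item sets; the Python sketch below is only a schematic illustration of the general idea, folding a patient's 0-4 distress rating and the interviewer's 0-9 severity rating for a domain into a single normalized number, and should not be read as the actual ASI scoring algorithm.

    ASI_DOMAINS = ["medical", "employment", "alcohol_drugs", "legal",
                   "family_history", "family_social", "psychiatric"]

    def illustrative_domain_score(patient_distress_0_4, interviewer_need_0_9):
        """Toy 0-1 score: average of the two ratings rescaled by their maxima.
        (The real ASI composites use published, domain-specific formulas.)"""
        return 0.5 * (patient_distress_0_4 / 4.0) + 0.5 * (interviewer_need_0_9 / 9.0)

    # Example: moderate drug-related distress (3 of 4) and an interviewer
    # treatment-need rating of 6 of 9 in the alcohol/drug domain.
    scores = {domain: None for domain in ASI_DOMAINS}
    scores["alcohol_drugs"] = illustrative_domain_score(3, 6)
    print(scores["alcohol_drugs"])   # roughly 0.71 on a 0-1 scale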
THE DSM AND STANDARDIZED ASSESSMENT
The year 1980 saw the publication of an epochal document in psychiatry: Diagnostic and Statistical Manual of Mental Disorders, 3rd Edition
(DSM-III; American Psychiatric Association, 1980
). The DSM-III provided clinicians and researchers with standardized definitions and diagnostic criteria for more than 200 psychiatric disorders, including substance abuse and dependence disorders. Prior to this publication, clinicians and researchers commonly used the same diagnostic terms to mean different things, and clinicians often disagreed on whether patients had specific disorders (Spitzer, Endicott, and Robins, 1975
; Spitzer and Fleiss, 1974
). Substance abuse professionals engaged in semantic debates over the definition of addiction—even over the very existence of such a condition.
Following the publication of the DSM-III, diagnostic criteria were included in the mental disorders section of the International Classification of Diseases, 10th Edition
(ICD-10; World Health Organization, 1993
). The ICD-10 is widely used outside the United States to define psychiatric diagnoses.
Substance Use Disorders in DSM-IV
The current edition of the DSM, DSM-IV-TR, sets diagnostic criteria for two types of substance use disorder: dependence and abuse. Some patients seeking treatment report too few symptoms to meet the criteria for either diagnosis. In these cases, the specific symptoms, symptom clusters, and the severity of associated problems can inform effective strategies for intervention and management.
The ICD-10 criteria for substance dependence are similar to those of the DSM-IV. The ICD-10 counterpart to abuse is called “harmful use” and is less specific.
Drug or alcohol dependence is diagnosed by documenting that a patient has experienced at least three of seven criteria for a particular substance within a 12-month period. The criteria are:
- Tolerance (a need for markedly increased amounts of the substance to achieve intoxication or the desired effect, or markedly diminished effect with continued use of the same amount)
- Withdrawal (the characteristic withdrawal syndrome for the substance, or use of the substance to relieve or avoid withdrawal symptoms)
- Substance often taken in larger amounts or over longer period than intended
- Persistent desire or unsuccessful efforts to cut down or control use
- Great deal of time spent in activities necessary to obtain, use, or recover from the substance
- Important social, occupational, or recreational activities given up or reduced
- Continued use despite knowledge of having a persistent or recurrent physical or psychological problem likely to have been caused or exacerbated by the substance
Although the DSM-IV provides no standards for dependence severity, clinicians may specify “with physiological dependence” or “with withdrawal” to indicate the presence of tolerance (i.e., the need for higher doses to achieve intoxication or other desired effects). Withdrawal, in particular, predicts medical problems and poor outcome (Hasin et al., 2000
; Schuckit et al., 2003
). Alternatively, a symptom or criteria count can function as a measure of dependence severity (Hasin et al., 2006b).
The DSM-IV lists substance-specific intoxication and withdrawal symptoms for most of the common classes of drugs. Two exceptions are hallucinogens and cannabis, neither of which had a known withdrawal syndrome at the time of the document’s publication. Planners for the DSM-V are considering the addition of a withdrawal syndrome for cannabis. In anticipation of such a potential change, the CIDI and AUDADIS interviews contain items related to possible marijuana withdrawal.
Test-retest studies have repeatedly shown good to excellent reliability for the diagnosis of substance dependence with the DSM-IV (Bucholz et al., 1995
; Chatterji et al., 1997
; Easton et al., 1997
; Grant et al., 1995
; Hasin et al., 1996
; Horton, Compton, and Cottler, 2000
; Williams et al., 1992
). The DSM-IV substance dependence diagnosis also shows good validity in two forms of multi-method comparisons. One compares ICD-10, DSM-IV, and DSM-III-R diagnoses obtained from a single diagnostic interview (Grant, 1993
; Hasin et al., 1997b
; Schuckit et al., 1994
). The other compares diagnoses from a single system (such as DSM-IV) produced by different diagnostic interviews (Cottler et al., 1997
; Pull et al., 1997).
Patients who do not meet the criteria for substance dependence may be diagnosed with substance abuse if they report experiencing one or more of four abuse symptoms repeatedly over a 12-month period. The symptoms are:
- Failure to fulfill major obligations at work, school, or home
- Recurrent use in situations in which it is physically hazardous
- Recurrent substance-related legal problems
- Continued use despite persistent social or interpersonal problems
Many clinicians have questioned the separation of substance dependence and substance abuse. Studies have shown that the DSM criteria for abuse are less valid than those for dependence. However, these studies diagnosed substance abuse hierarchically, meaning that an abuse diagnosis was considered to be redundant if dependence was present. Although DSM-IV stipulates this procedure, not everyone with dependence also meets the criteria for abuse (Hasin and Grant, 2004
). Women and minorities appear especially likely to experience dependence without abuse (Hasin et al., 2005
; Hasin and Grant, 2004
). Studies that assessed abuse regardless of whether dependence was present showed better reliability for the criteria for abuse (Bucholz et al., 1995
; Canino et al., 1999
; Cottler et al., 1997
; Pull et al., 1997
). In summary, the DSM-IV hierarchical status of abuse is problematic, but the criteria yield reliable diagnoses.
DSM-IV and Substance Use Comorbidity
Extensive comorbidity between substance use disorders and other psychiatric disorders has been reported consistently in patients (Nunes, Hasin, and Blanco, 2004
) as well as in the general population (Grant et al., 2004a
; Regier et al., 1990
). Such comorbidity can be serious. For example, studies with acceptable response rates (70 percent or more) and reliable diagnostic assessments have consistently found an adverse effect of major depression on the outcome of substance use disorders (Hasin, Nunes, and Meydan, 2004
). Further, among patients with histories of substance dependence and major depression, the occurrence of a major depressive episode during periods of sustained abstinence predicts a higher number of suicide attempts (Aharonovich et al., 2002).
To be accurate, assessments must address the fact that substance intoxication and withdrawal can mimic symptoms of depression, psychosis, or other independent psychiatric disorders. Accordingly, the DSM-IV distinguishes among “expected effects” of substance intoxication or withdrawal, “primary disorders,” and “substance-induced disorders.” A primary disorder is diagnosed if “the symptoms are not due to the direct physiological effects of a substance” (American Psychiatric Association, 2000
). Psychiatric disorders that co-occur with substance intoxication or withdrawal can be considered primary if (1) symptoms substantially exceed the expected effects of the substance in the amount that was used; (2) there is a personal history of psychiatric symptoms during periods of extended abstinence; (3) the onset of psychiatric symptoms clearly preceded the onset of substance use; and (4) symptoms persisted for at least a month after the cessation of intoxication or withdrawal. Symptoms that are not considered primary fall into the category either of expected effects of a substance or of a substance-induced disorder that exceeds intoxication or withdrawal effects and deserves independent clinical attention. Instrument developers have incorporated this information into some tools, in particular the SSADDA and PRISM.
The ASI’s psychometric properties have been tested extensively (Alterman et al., 1994
; Hodgins and el-Guebaly, 1992
; Joyner, Wright, and Devine, 1996
; Kosten, Rounsaville, and Kleber, 1983
; McLellan et al., 1985
; Rogalski, 1987
). Several studies have demonstrated good to excellent reliability and validity for the instrument (Butler et al., 2001
; Hendriks et al., 1989
; Leonhard et al., 2000
; Weisner, McLellan, and Hunkeler, 2000
). A 2004 summary of studies in multiple patient groups (Mäkelä) found that the reliability of composite scores varied from high (Daeppen et al., 1996
; McLellan et al., 1985
; Peters et al., 2000
) to low (Drake, McHugo, and Biesanz, 1995
; Zanis et al., 1994
; Zanis, McLellan, and Corse, 1997
). Three of the seven ASI domains (medical conditions, use of alcohol, and psychiatric disorders) have high internal consistency across studies, while the other four are more variable. Correlations between domains are usually low, except those between the drug and legal measures and those between the psychiatric and social impairment measures. The lack of across-the-board correlations is consistent with the ASI’s perspective, which is that impairment in some domains does not necessarily entail impairment in others.
The ASI, by itself, may not be a highly reliable screen for special populations, such as the homeless or dually diagnosed. For the latter groups, the ASI should be supplemented with instruments that assess comorbidity in greater depth, such as the PRISM or the SSADDA.
Many community programs include the ASI in their initial assessment battery, but informal reports suggest that some look upon it as merely required paperwork and use its information minimally, if at all, in treatment. To remedy this situation, the NIDA/Substance Abuse and Mental Health Services Administration (SAMHSA) Blending Initiative has produced a curriculum on transforming ASI data into clinically useful information (see www.nida.nih.gov/Blending/ASI.html).
Alcohol Tolerance Item From the World Mental Health Composite International Diagnostic Interview (WMH-CIDI)
Did you ever need to drink a larger amount of alcohol to get an effect, or did you ever find that you could no longer get a “buzz” or a high on the amount you used to drink?  [YES / NO / DK / RF]
The Composite International Diagnostic Interview
The CIDI, originally developed by the World Health Organization, assesses 22 DSM-IV diagnoses, including mood, anxiety, and substance use disorders (see “Alcohol Tolerance Item From the World Mental Health Composite International Diagnostic Interview (WMH-CIDI)”). For each substance use disorder, the CIDI elicits other information useful for treatment planning, such as the patterns and course of alcohol and drug use. The fully structured instrument takes approximately 120 minutes to administer in its entirety (Kessler and Ustün, 2004).
Various versions and adaptations of the original CIDI have been developed. The University of Michigan version, the UM-CIDI, has been used in a large international epidemiological survey (Wittchen and Kessler, 1994
), but appears to produce lower prevalence estimates than other diagnostic instruments (Wittchen et al., 1998
). To address this problem and others related to earlier versions of the CIDI, the World Mental Health Survey Initiative Version, the WMH-CIDI, was developed (Kessler and Ustün, 2004
). A complete description of WMH-CIDI modifications is reported elsewhere (Kessler and Ustün, 2004
). The WMH-CIDI is available in paper and computerized forms for download or computer-assisted administration at www.hcp.med.harvard.edu/wmhcidi/instruments.php.
Programs or projects may use the CIDI substance use sections alone or combine them with other sections to achieve the desired range of assessment. To meet the particular needs of the substance abuse field, researchers have developed the CIDI Substance Abuse Module (CIDI-SAM), an expanded version of the original CIDI substance use section that elicits detailed information on such areas as the onset and history of substance abuse, withdrawal symptoms, common comorbidities, social consequences, and treatment history (Cottler, Robins, and Helzer, 1989
; Horton, Compton, and Cottler, 2000).
Test-retest studies of the original CIDI and the CIDI-SAM paper versions have demonstrated good to excellent reliability for DSM-IV diagnoses of any substance use disorder or substance dependence and fair to good reliability for abuse (Rubio-Stipec, Peters, and Andrews, 1999
; Wittchen et al., 1998
). The reliability of the CIDI, version 3.0, was tested in the WHO World Mental Health Surveys by comparing CIDI-derived diagnoses to those derived with the SCID (Haro et al., 2006
). Concordance for alcohol dependence (with or without abuse) was excellent; concordance for drug dependence (with or without abuse) was fair; and concordance for alcohol abuse and drug abuse was good.
NIDA’s CTN adopted the CIDI after comparing five commonly used substance use disorder diagnostic instruments on 26 criteria, including psychometric properties, diagnostic time frames, time to administer, and training and financial considerations (Forman et al., 2004
). The CTN workgroup ultimately determined that only the CIDI met three crucial CTN requirements: it can be administered by trained research technicians with no prior clinical experience; it provides for DSM-IV, as well as International Classification of Diseases, 10th Edition
(ICD-10; World Health Organization, 1993
), substance use disorder diagnoses; and it provides for past-year and lifetime diagnoses. At this point, it is too soon to know whether CTN-related community-based programs will adopt the CIDI for clinical use.
Alcohol Use Screening From the Structured Clinical Interview for DSM (SCID)
ALCOHOL USE SCREENING
- What are your drinking habits like?
- How much do you drink?
- Has there ever been a time in your life when you had five or more drinks on one occasion?
- When in your life were you drinking the most?
- How long did that period last?
- During that time…
  - How often were you drinking?
  - What were you drinking? How much?
- During that time…
  - Did your drinking cause problems for you?
  - Did anyone object to your drinking?
The Structured Clinical Interview for DSM-IV
The SCID is available in different versions for researchers and clinicians. Additionally, the research version is available in formats for patients, nonpatients, and patients with psychotic disorders. The Structured Clinical Interview for DSM-IV-TR Axis I Disorders, Research Version, Patient Edition (First et al., 2002
), provides lifetime and current diagnostic assessments for many DSM-IV disorders, including substance use disorders. The separate SCID for Axis II disorders provides the basis for diagnosing personality disorders (First et al., 1997).
The semi-structured SCID is designed for administration by interviewers with clinical expertise, but research assistants having extensive experience with a population under study have sometimes learned to administer it successfully. After an open-ended overview and brief general screening, the interviewer takes the patient through the questions on the form, following up as needed (based on clinical judgment) to clarify responses. The alcohol and drug modules contain open-ended screening questions as well (see “Alcohol Use Screening From the Structured Clinical Interview for DSM (SCID)”). Administration can take up to several hours, depending on the complexity of the patient’s substance and psychiatric history. The instrument is modular, so clinicians can make use of only those sections that pertain to assessment aims. It contains a minimal number of nondiagnostic items to keep administration time as brief as possible.
In tests among substance-abusing populations, the SCID has demonstrated excellent reliability for diagnosing DSM-III-R substance dependence (American Psychiatric Association, 1987
; Ross et al., 1995
). A small test-retest study of 52 patients with DSM-IV diagnoses showed excellent reliability for substance use disorders (Zanarini et al., 2000
). The SCID Web site (www.scid4.org/index.html
) provides information on the different versions, psychometric properties, ways to obtain copies of the interview and training materials, and procedures for arranging on-site training. A user’s guide provides basic training in the use of the SCID. In addition, an 11-hour videotape training program is available with examples of interviews with actual patients. The instrument’s developers recommend at least 20 hours of training on the full SCID for most clinicians. A Spanish-language version of the SCID (research version), in which only the questions have been translated, and a computer-assisted SCID (for Axis I disorders, clinician version), developed by an outside source, can be obtained through the SCID Web site.
The Alcohol Use Disorder and Associated Disabilities Interview Schedule
The AUDADIS (Grant et al., 1995
) provides for current (last 12 months) and lifetime DSM-IV diagnoses of major mood, anxiety, personality, and substance use disorders. Originally developed by the National Institute on Alcohol Abuse and Alcoholism (NIAAA) for use in population-based epidemiological surveys, the fully structured AUDADIS functions as an economical tool that lay staff in treatment programs can administer for intake screening. Clinicians can use the detailed descriptive data obtained by the AUDADIS to structure treatment based on a patient’s specific substance-related behaviors. In addition to alcohol, tobacco, and other drug use, modules address treatment and family history. Numerous queries address the frequency and quantity of use of each type of alcohol (e.g., beer, wine, liquor) and each illicit drug during three time periods—that of heaviest use, the past 12 months, and the interviewee’s lifetime (see “Sample Item From the Alcohol Use Disorder and Associated Disabilities Interview Schedule (AUDADIS)”).
The AUDADIS showed high reliability in a test-retest study in clinical settings where comorbidity was expected to be high (Hasin et al., 1996
). Its test-retest reliabilities for alcohol and drug consumption, abuse, and dependence, as well as those for other modules, were good to excellent (Grant et al., 1995
). The AUDADIS interview can be downloaded (niaaa.census.gov/questionaire.html). The instrument’s developers recommend using the computer-assisted version.
The Psychiatric Research Interview for Substance and Mental Disorders
The PRISM (Hasin et al., 1996
; Hasin, Trautman, and Endicott, 1998
) is a semi-structured diagnostic interview designed expressly for assessing comorbid psychiatric disorders in individuals who abuse substances. The instrument’s strength is in differentiating independent psychiatric disorders, such as depression, from the effects of intoxication and withdrawal. Along with abuse and dependence diagnoses for specific substance categories, clinicians and researchers can use the PRISM to make current and lifetime DSM-IV diagnoses of Axis I and Axis II disorders that commonly occur with substance abuse, such as mood, anxiety, and psychotic disorders.
The PRISM sections on substance use disorders are placed at the beginning of the interview and provide a background for the overall clinical picture. Periods of chronic intoxication (defined as “at least 4 days a week for a month”) or binge use (defined as “most of the day for 3 or more days”) and extended periods of abstinence are identified and charted on a timeline. The timeline is the only part of the PRISM that is conducted in an unstructured format, and timeline information is not coded for data entry. The purpose of the timeline is to assist in differentiating primary versus substance-induced symptoms in later diagnostic sections.
PRISM developers incorporated two features into the instrument to avoid the lengthy administration time associated with many standardized interviews. First, diagnostic sections are modular, so the instrument can be tailored to fit specific treatment or research needs. Second, consumption questions in the substance use module do not seek detailed information about patterns, but simply ask how often the interviewee has used the substance “in the last 12 months” or “ever” and whether the individual has ever experienced a period of chronic intoxication or binge use. If the response to any of these broad questions is “yes,” the interviewer moves on to the abuse and dependence diagnostic module.
A recent test-retest study of 285 heavy substance users showed good to excellent reliability for most dependence diagnoses, including alcohol, cocaine, heroin, cannabis, and sedative dependence (Hasin et al., 2006a
). An independently conducted validity study of a Spanish-language version of the PRISM with the Longitudinal, Expert, All-Data Diagnosis (LEAD) procedure (Spitzer, 1983
) as the “gold standard” and the SCID found that the concordance of the three assessments in substance dependence was good to excellent. However, PRISM/LEAD concordance was significantly better than SCID/LEAD concordance on current cannabis and cocaine dependence, as well as past alcohol abuse and dependence (Torrens et al., 2004
). The English version of the PRISM can be downloaded, together with training information (www.columbia.edu/~dsh2/prism
). A computer-assisted version, which will include questions on marijuana withdrawal and modules for nicotine-related disorders, pathological gambling, and attention deficit hyperactivity disorder, will be available in 2008.
Sample Item From the Alcohol Use Disorder and Associated Disabilities Interview Schedule (AUDADIS)
Now I’d like to ask you about drinking beer.

5a. During the last 12 months, did you drink any beer, light beer or malt liquor? Do not count nonalcoholic beers. [Statement D]
- 1 — Yes
- 2 — No - SKIP to Statement E, page 11

5b. (SHOW FLASHCARD 12) During the last 12 months, about how often did you drink any beer or malt liquor?
- 1 __ Everyday
- 2 __ Nearly every day
- 3 __ 3 to 4 times a week
- 4 __ 2 times a week
- 5 __ Once a week
- 6 __ 2 to 3 times a month
- 7 __ Once a month
- 8 __ 7 to 11 times in the last year
- 9 __ 3 to 6 times in the last year
- 10 __ 1 or 2 times in the last year
The Semi-Structured Assessment for Drug Dependence and Alcoholism
The SSADDA (Pierucci-Lagha et al., 2005
) was developed for use in studies of genetic influences on cocaine and opioid dependence. Derived from the Semi-Structured Assessment for the Genetics of Alcoholism, the SSADDA provides extensive coverage of the physical, psychological, social, and psychiatric manifestations of cocaine and opioid abuse and dependence in addition to a number of related Axis I and Axis II disorders. A standout feature of the SSADDA is its inclusion of questions about the onset and recency of individual alcohol and drug symptoms, permitting a temporal assessment of symptom clusters. Information about the timing of symptoms is particularly helpful in distinguishing comorbid disorders from intoxication or withdrawal effects.
The reliability of individual dependence criteria in the SSADDA has been tested to determine the extent to which independent interviewers arrive at the same diagnostic conclusions. Overall, the inter-rater reliability estimates were excellent for individual DSM-IV criteria for nicotine and opioid dependence; good for alcohol and cocaine dependence; and fair for dependence on cannabis, sedatives, and stimulants (Pierucci-Lagha et al., 2007
). A computer-assisted version of the SSADDA is available free. Further information can be obtained by contacting Dr. Amira Pierucci-Lagha, Alcohol Research Center, Department of Psychiatry, University of Connecticut School of Medicine.
The transputer was a pioneering microprocessor architecture of the 1980s, featuring integrated memory and serial communication links, intended for parallel computing. It was designed and produced by Inmos, a semiconductor company based in Bristol, United Kingdom.
For some time in the late 1980s many considered the transputer to be the next great design for the future of computing. While Inmos and the transputer did not ultimately live up to this expectation, the transputer architecture was highly influential in provoking new ideas in computer architecture, several of which have re-emerged in different forms in modern systems.
In the early 1980s, conventional CPUs appeared to reach a performance limit. Up to that time, manufacturing difficulties limited the amount of circuitry designers could place on a chip. Continued improvements in the fabrication process, however, removed this restriction. Soon the problem became that the chips could hold more circuitry than the designers knew how to use. Traditional CISC designs were reaching a performance plateau, and it was not clear that it could be overcome.
It seemed that the only way forward was to increase the use of parallelism, the use of several CPUs that would work together to solve several tasks at the same time. This depended on the machines in question being able to run several tasks at once, a process known as multitasking. This had generally been too difficult for previous CPU designs to handle, but more recent designs were able to accomplish it effectively. It was clear that in the future this would be a feature of all operating systems.
A side effect of most multitasking design is that it often also allows the processes to be run on physically different CPUs, in which case it is known as multiprocessing. A low-cost CPU built with multiprocessing in mind could allow the speed of a machine to be increased by adding more CPUs, potentially far more cheaply than by using a single faster CPU design.
The first transputer designs were due to David May and Robert Milne. In 1990, May received an Honorary DSc from University of Southampton, followed in 1991 by his election as a Fellow of The Royal Society and the award of the Patterson Medal of the Institute of Physics in 1992. Tony Fuge, a leading engineer at Inmos at the time, was awarded the Prince Philip Designers Prize in 1987 for his work on the T414 transputer.
The transputer (the name deriving from transistor and computer) was the first general purpose microprocessor designed specifically to be used in parallel computing systems. The goal was to produce a family of chips ranging in power and cost that could be wired together to form a complete parallel computer. The name was selected to indicate the role the individual transputers would play: numbers of them would be used as basic building blocks, just as transistors had earlier.
Originally the plan was to make the transputer cost only a few dollars per unit. Inmos saw them being used for practically everything, from operating as the main CPU for a computer to acting as a channel controller for disk drives in the same machine. Spare cycles on any of these transputers could be used for other tasks, greatly increasing the overall performance of the machines.
Even a single transputer would have all the circuitry needed to work by itself, a feature more commonly associated with microcontrollers. The intention was to allow transputers to be connected together as easily as possible, without the requirement for a complex bus (or motherboard). Power and a simple clock signal had to be supplied, but little else: RAM, a RAM controller, bus support and even an RTOS were all built in.
The original transputer used a very simple and rather unusual architecture to achieve a high performance in a small area. It used microcode as the principal method of controlling the data path, but, unlike other designs of the time, many instructions took only a single cycle to execute. Instruction opcodes were used as the entry points to the microcode ROM and the outputs from the ROM were fed directly to the data path. For multi-cycle instructions, while the data path was performing the first cycle, the microcode decoded four possible options for the second cycle. The decision as to which of these options would actually be used could be made near the end of the first cycle. This allowed for very fast operation while keeping the architecture generic.
The clock speed of 20 MHz was quite high for the era and the designers were very concerned about the practicalities of distributing a clock signal of this speed on a board. A lower external clock of 5 MHz was used and this was multiplied up to the required internal frequency using a phase-locked loop (PLL). The internal clock actually had four non-overlapping phases and designers were free to use whichever combination of these they wanted so it could be argued that the transputer actually ran at 80 MHz. Dynamic logic was used in many parts of the design to reduce area and increase speed. Unfortunately, these techniques are difficult to combine with automatic test pattern generation scan testing so they fell out of favour for later designs.
The basic design of the transputer included serial links that allowed it to communicate with up to four other transputers, each at 5, 10 or 20 Mbit/s – which was very fast for the 1980s. Any number of transputers could be connected together over links (which could run tens of metres) to form a single computing "farm". A hypothetical desktop machine might have two of the "low end" transputers handling I/O tasks on some of their serial lines (hooked up to appropriate hardware) while they talked to one of their larger cousins acting as a CPU on another.
There were limits to the size of a system that could be built in this fashion. Since each transputer was linked to another in a fixed point-to-point layout, sending messages to a more distant transputer required the messages to be relayed by each chip on the line. This introduced a delay with every "hop" over a link, leading to long delays on large nets. To solve this problem Inmos also provided a zero-delay switch that connected up to 32 transputers (or switches) into even larger networks.
Transputers could be booted over the network links (as opposed to the memory as in most machines) so a single transputer could start up the entire network. There was a pin called BootFromROM that when asserted caused the transputer to start two bytes from the top of memory (sufficient for up to a 256 byte backward jump, usually out of ROM). When this pin was not asserted, the first byte that arrived down any link was the length of a bootstrap to be downloaded, which was placed in low memory and run. The 'special' lengths of 0 and 1 were reserved for PEEK and POKE – allowing inspection and changing of RAM in an unbooted transputer. After a peek (which required an address) or a poke (which took a word address, and a word of data – 16 or 32 bit depending on the basic word width of the transputer variant) the transputer would return to waiting for a bootstrap.
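The paragraph above describes a small wire protocol, and a sketch can make the control flow clearer. The following is a loose, illustrative model of boot-from-link handling in Python: the word size, byte order, framing, and helper names are assumptions made for the example, not details taken from Inmos documentation.

```python
from collections import deque

def boot_link_handler(link, memory, run):
    """Illustrative model of the boot protocol: the first byte received
    selects PEEK (0), POKE (1) or, for any larger value, gives the length
    of a bootstrap program that is then loaded and executed."""
    def read_byte():
        return link.popleft()

    def read_word():  # assume 32-bit little-endian words for this sketch
        return int.from_bytes(bytes(read_byte() for _ in range(4)), "little")

    while True:
        control = read_byte()
        if control == 0:                       # PEEK: report a word of memory
            print(f"peek -> {memory.get(read_word(), 0):#010x}")
        elif control == 1:                     # POKE: write a word into memory
            address = read_word()
            memory[address] = read_word()
        else:                                  # bootstrap of 'control' bytes follows
            run(bytes(read_byte() for _ in range(control)))
            return

# Hypothetical link traffic: poke 0xDEADBEEF to address 0x10, peek it back,
# then send a 2-byte bootstrap.
traffic = deque([1, 0x10, 0, 0, 0, 0xEF, 0xBE, 0xAD, 0xDE,
                 0, 0x10, 0, 0, 0,
                 2, 0xAA, 0xBB])
boot_link_handler(traffic, memory={}, run=lambda code: print("boot", code.hex()))
```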
Supporting the links was additional circuitry that handled scheduling of the traffic over them. Processes waiting on communications would automatically pause while the networking circuitry finished its reads or writes. Other processes running on the transputer would then be given that processing time. It included two priority levels to improve real-time and multiprocessor operation. The same logical system was used to communicate between programs running on a single transputer, implemented as "virtual network links" in memory. So programs asking for any input or output automatically paused while the operation completed, a task that normally required the operating system to handle as the arbiter of hardware. Operating systems on the transputer did not have to handle scheduling: in fact, one could consider the chip itself to have an OS inside it.
To include all this functionality on a single chip, the transputer's core logic was simpler than most CPUs. While some have called it a RISC due to its rather spare nature (and because that was a desirable marketing buzzword at the time), it was heavily microcoded, had a limited register set, and complex memory-to-memory instructions, all of which place it firmly in the CISC camp. Unlike register-heavy load-store RISC CPUs, the transputer had only three data registers, which behaved as a stack. In addition a Workspace Pointer pointed to a conventional memory stack, easily accessible via the Load Local and Store Local instructions. This allowed for very fast context switching by simply changing the workspace pointer to the memory used by another process (a technique used in a number of contemporary designs, such as the TMS9900). The three register stack contents were not preserved past certain instructions, like Jump, when the transputer could do a context switch.
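As a conceptual illustration of why this arrangement made context switches cheap, the sketch below represents each runnable process by little more than a workspace pointer and a saved instruction pointer and "switches" simply by dequeuing the next descriptor. It is a simplified model of the idea described above, not a faithful emulation of the transputer's scheduler; the addresses and class layout are invented for the example.

```python
from collections import deque

class Process:
    """A runnable process is essentially a workspace pointer plus the
    instruction pointer saved when it was last descheduled."""
    def __init__(self, name, wptr, iptr):
        self.name, self.wptr, self.iptr = name, wptr, iptr

run_queue = deque([
    Process("P0", wptr=0x80000100, iptr=0x80002000),
    Process("P1", wptr=0x80000200, iptr=0x80002400),
])

def context_switch(current):
    """Descheduling appends the current descriptor; dispatching pops the
    next one.  No general-purpose registers need saving because the
    evaluation stack is not preserved across descheduling points."""
    if current is not None:
        run_queue.append(current)
    nxt = run_queue.popleft()
    print(f"dispatch {nxt.name}: Wptr={nxt.wptr:#x} Iptr={nxt.iptr:#x}")
    return nxt

running = context_switch(None)     # dispatch P0
running = context_switch(running)  # deschedule P0, dispatch P1
```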
The transputer instruction set comprised 8-bit instructions divided into opcode and operand nibbles. The "upper" nibble contained the 16 possible primary instruction codes, making it one of the very few commercialized minimal instruction set computers. The "lower" nibble contained the single immediate constant operand, commonly used as an offset relative to the Workspace (memory stack) pointer. Two prefix instructions allowed construction of larger constants by prepending their lower nibbles to the operands of following instructions. Additional instructions were supported via the Operate (Opr) instruction code, which decoded the constant operand as an extended zero-operand opcode, providing for almost endless and easy instruction set expansion as newer implementations of the transputer were introduced.
The 16 'primary' one-operand instructions were:
- J (Jump) — add immediate operand to instruction pointer.
- LDLP (Load Local Pointer) — load a Workspace-relative pointer onto the top of the register stack.
- PFIX (Prefix) — general way to increase lower nibble of following primary instruction.
- LDNL (Load Non-Local) — load a value offset from address at top of stack.
- LDC (Load Constant) — load constant operand onto the top of the register stack.
- LDNLP (Load Non-Local Pointer) — load address, offset from top of stack.
- NFIX (Negative Prefix) — general way to negate (and possibly increase) lower nibble.
- LDL (Load Local) — load value offset from Workspace.
- ADC (Add Constant) — add constant operand to top of register stack.
- CALL (Subroutine Call) — push instruction pointer and jump.
- CJ (Conditional Jump) — depending on value at top of register stack.
- AJW (Adjust Workspace) — add operand to workspace pointer.
- EQC (Equals Constant) — test if top of register stack equals constant operand.
- STL (Store Local) — store at constant offset from workspace.
- STNL (Store Non-Local) — store at address offset from top of stack.
- OPR (Operate) — general way to extend instruction set.
All these instructions take a constant, representing an offset or an arithmetic constant. If this constant was less than 16, all these instructions coded to a single byte.
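To make the prefixing scheme concrete, the sketch below encodes a primary instruction whose operand does not fit in the 4-bit field by emitting PFIX (and, for negative values, NFIX) prefix bytes ahead of it. The nibble values used for PFIX, NFIX, and LDC follow the table above; the helper function itself is an illustrative reconstruction in Python, not Inmos code, and assumes a 32-bit operand register.

```python
# Upper-nibble opcodes from the primary instruction table above
PFIX, NFIX, LDC = 0x2, 0x6, 0x4

def encode(opcode, operand):
    """Return the byte sequence for one primary instruction whose operand
    may not fit in the 4-bit operand field: positive overflow is handled
    with PFIX prefixes, negative operands with a leading NFIX."""
    if 0 <= operand < 16:
        return [(opcode << 4) | operand]
    if operand >= 16:
        return encode(PFIX, operand >> 4) + [(opcode << 4) | (operand & 0xF)]
    # Negative operand: NFIX complements the value accumulated so far.
    return encode(NFIX, (~operand) >> 4) + [(opcode << 4) | (operand & 0xF)]

for value in (7, 100, -1):
    encoded = " ".join(f"{byte:02X}" for byte in encode(LDC, value))
    print(f"LDC {value:>4}  ->  {encoded}")
# LDC    7  ->  47
# LDC  100  ->  26 44
# LDC   -1  ->  60 4F
```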
The first 16 'secondary' zero-operand instructions (using the OPR primary instruction) were:
- REV (Reverse) — swap two top items of register stack.
- GCALL (General Call) — swap top of stack and instruction pointer.
- IN (Input) — receive message.
- GT (Greater Than) — the only comparison instruction.
- OUT (Output) — send message.
- OUTBYTE (Output Byte) — send single-byte message.
- OUTWORD (Output Word) — send single-word message.
To provide an easy means of prototyping, constructing and configuring multiple-transputer systems, Inmos introduced the TRAM (TRAnsputer Module) standard in 1987. A TRAM was essentially a building block daughterboard comprising a transputer and, optionally, external memory and/or peripheral devices, with simple standardised connectors providing power, transputer links, clock and system signals. Various sizes of TRAM were defined, from the basic Size 1 TRAM (3.66 in by 1.05 in) up to Size 8 (3.66 in by 8.75 in). Inmos produced a range of TRAM motherboards for various host buses such as ISA, MicroChannel or VMEbus. TRAM links operate at 10 Mbit/s or 20 Mbit/s.
Transputers were intended to be programmed using the occam programming language, based on the CSP process calculus. In fact it is fair to say that the transputer was built specifically to run occam, even more so than contemporary CISC designs were built to run languages like Pascal or C. Occam supported concurrency and channel-based inter-process or inter-processor communication as a fundamental part of the language. With the parallelism and communications built into the chip and the language interacting with it directly, writing code for things like device controllers became a triviality – even the most basic code could watch the serial ports for I/O, and would automatically sleep when there was no data.
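In occam itself, a process writes to a channel with "!" and reads with "?", blocking until the other party is ready. As a loose analogy only (Python threads standing in for transputer processes, and a one-slot queue approximating an unbuffered occam channel), the sketch below shows two communicating processes that sleep on channel operations; all names in it are invented for the example.

```python
import threading
import queue

def producer(chan):
    """Mimics an occam process writing to a channel (chan ! value)."""
    for value in range(3):
        chan.put(value)      # blocks until the consumer has taken the previous item
    chan.put(None)           # sentinel marking the end of the stream

def consumer(chan):
    """Mimics an occam process reading from a channel (chan ? value)."""
    while True:
        value = chan.get()   # sleeps until data arrives, like a process waiting on a link
        if value is None:
            break
        print(f"received {value}")

channel = queue.Queue(maxsize=1)   # rough stand-in for an unbuffered occam channel
threads = [threading.Thread(target=producer, args=(channel,)),
           threading.Thread(target=consumer, args=(channel,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```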
The initial occam development environment for the transputer was the Inmos D700 Transputer Development System (TDS). This was an unorthodox integrated development environment incorporating an editor, compiler, linker and (post-mortem) debugger. The TDS was itself a transputer application written in occam. The TDS text editor was notable in that it was a folding editor, allowing blocks of code to be hidden and revealed, to make the structure of the code more apparent. Unfortunately, the combination of an unfamiliar programming language and equally unfamiliar development environment did nothing for the early popularity of the transputer. Later, Inmos would release more conventional occam cross-compilers, the occam 2 Toolsets.
Implementations of more mainstream programming languages, such as C, FORTRAN, Ada and Pascal were also later released by both Inmos and third-party vendors. These usually included language extensions or libraries providing, in a less elegant way, occam-like concurrency and channel-based communication.
The transputer's lack of support for virtual memory inhibited the porting of mainstream variants of the UNIX operating system, though ports of UNIX-like operating systems (such as Minix and Idris from Whitesmiths) were produced. An advanced UNIX-like distributed operating system, HeliOS, was also designed specifically for multi-transputer systems by Perihelion Software.
The first transputers were announced in 1983 and released in 1984.
In keeping with their role as microcontroller-like devices, they included on-board RAM and a built-in RAM controller which enabled more memory to be added without any additional hardware. Unlike other designs, transputers did not include I/O lines: these were to be added with hardware attached to the existing serial links. There was one 'Event' line, similar to a conventional processor's interrupt line. Treated as a channel, a program could 'input' from the event channel, and proceed only after the event line was asserted.
All transputers ran from an external 5 MHz clock input; this was multiplied to provide the processor clock.
Transputer variants (excepting the cancelled T9000) can be categorised into three groups: the 16-bit T2 series, the 32-bit T4 series and the 32-bit T8 series with 64-bit IEEE 754 floating-point support.
The prototype 16-bit transputer was the S43, which lacked the scheduler and DMA-controlled block transfer on the links. At launch, the T212 and M212 (the latter with an on-board disk controller) were the 16-bit offerings. The T212 was available in 17.5 and 20 MHz processor clock speed ratings. The T212 was superseded by the T222, with on-chip RAM expanded from 2 kB to 4 kB, and, later, the T225. This added debugging breakpoint support (by extending the instruction J 0) plus some extra instructions from the T800 instruction set. Both the T222 and T225 ran at 20 MHz.
At launch, the T414 was the 32-bit offering. Originally, the first 32-bit variant was to be the T424, but fabrication difficulties meant that this was redesigned as the T414 with 2 kB on-board RAM instead of the intended 4 kB. The T414 was available in 15 and 20 MHz varieties. The RAM was later reinstated to 4 kB on the T425 (in 20, 25 and 30 MHz varieties), which also added the J 0 breakpoint support and extra T800 instructions. The T400, released in September 1989, was a low-cost 20 MHz T425 derivative with 2 kB and two instead of four links, intended for the embedded systems market.
T8: floating point
The second-generation T800 transputer, introduced in 1987, had an extended instruction set. The most important addition was a 64-bit floating point unit and three additional registers for floating point, implementing the IEEE754-1985 floating point standard. It also had 4 kB of on-board RAM and was available in 20 or 25 MHz versions. Breakpoint support was added in the later T801 and T805, the former featuring separate address and data buses to improve performance. The T805 was also later available as a 30 MHz part.
An enhanced T810 was planned, which would have had more RAM, more and faster links, extra instructions and improved microcode, but this was cancelled around 1990.
Inmos also produced a variety of support chips for the transputer processors, such as the C004 32-way link switch and the C012 "link adapter" which allowed transputer links to be interfaced to an 8-bit data bus.
System on a chip
Part of the original Inmos strategy was to make CPUs so small and cheap that they could be combined with other logic in a single device. Although SOCs, as they are commonly known, are ubiquitous now, the concept was almost unheard of in the early 1980s. Two projects were started around 1983: the M212 and the 'TV-toy'. The M212 was based on a standard T212 core with the addition of a disk controller for the ST 506 and ST 412 Shugart standards. 'TV-toy' was to be the basis for a games console and was a joint project between Inmos and Sinclair Research.
The links in the T212 and T414/T424 transputers had hardware DMA engines so that transfers could happen in parallel with execution of other processes. A variant of the design, known as the T400 (not to be confused with a later transputer of the same name), was designed in which the CPU handled these transfers. This reduced the size of the device considerably, since the four link engines were approximately the same size as the whole CPU. The T400 was intended to be used as a core in what were then called 'SOS' ('systems on silicon') devices, now better known as SOCs. It was this design that was to form part of TV-toy. The project was canceled in 1985.
Although the previous SOC projects had had only limited success (the M212 was in fact sold for a time), many designers still firmly believed in the concept, and in 1987 a new project, the T100, was started, which combined an 8-bit version of the transputer CPU with configurable logic based on state machines. The transputer instruction set is based on 8-bit instructions and can easily be used with any word size that is a multiple of 8 bits. The target market for the T100 was to be bus controllers such as Futurebus, as well as an upgrade for the standard link adapters (C011 etc.). The project was stopped when the T840 (later to become the basis of the T9000) was started.
While the transputer was simple but powerful compared to many contemporary designs, it never came close to meeting its goal of being used universally in both CPU and microcontroller roles. In the microcontroller realm, the market was dominated by 8-bit machines where cost was the only serious consideration. Here, even the T2s were too powerful and expensive for most users.
In the computer desktop/workstation world, the transputer was fairly fast (operating at about 10 MIPS at 20 MHz). This was excellent performance for the early 1980s, but by the time the FPU-equipped T800 was shipping, other RISC designs had surpassed it. This could have been mitigated to a large extent if machines had used multiple transputers as planned, but T800s cost about $400 each when introduced, which meant a poor price/performance ratio. Few transputer-based workstation systems were designed; the most notable probably being the Atari Transputer Workstation.
The transputer was more successful in the field of massively parallel computing, where several vendors produced transputer-based systems in the late 1980s. These included Meiko (founded by ex-Inmos employees), Floating Point Systems, Parsytec (picture) and Parsys. Several British academic institutions founded research activities in the application of transputer-based parallel systems, including Bristol Polytechnic's Bristol Transputer Centre and the University of Edinburgh's Edinburgh Concurrent Supercomputer Project. In addition, the Data Acquisition and Second Level Trigger systems of the High Energy Physics ZEUS Experiment for the HERA collider at DESY was based on a network of over 300 synchronously clocked transputers divided into several subsystems. These controlled both the readout of the custom detector electronics and ran reconstruction algorithms for physics event selection.
The parallel processing capabilities of the transputer were put to use commercially for image processing by the world's largest printing company, RR Donnelley & Sons, in the early 1990s. The ability to quickly transform digital images in preparation for print gave RR Donnelley a significant edge over their competitors. This development was led by Michael Bengtson in the RR Donnelley Technology Center. Within a few years, the processing capability of even desktop computers pushed aside the need for custom multi-processing systems for RR Donnelley.
The German company Jäger Messtechnik used transputers for their early ADwin real-time data acquisition and control products.
The transputer also appeared in products related to virtual reality such as the ProVision 100 system made by Division Limited of Bristol, featuring a combination of Intel i860, 486/33 and Toshiba HSP processors, together with T805 or T425 transputers, implementing a rendering engine that could then be accessed as a server by PC, Sun SPARCstation or VAX systems.
Inmos improved on the performance of the T8 series transputers with the introduction of the T9000 (code-named H1 during development). The T9000 shared most features with the T800, but moved several pieces of the design into hardware and added several features for superscalar support. Unlike the earlier models, the T9000 had a true 16 kB high-speed cache (using random-replacement) instead of RAM, but also allowed it to be used as memory and included MMU-like functionality to handle all of this (known as the PMI). For additional speed the T9000 cached the top 32 locations of the stack, instead of three as in earlier versions.
The T9000 used a five stage pipeline for even more speed. An interesting addition was the grouper which would collect instructions out of the cache and group them into larger packages of 4 bytes to feed the pipeline faster. Groups then completed in a single cycle, as if they were single larger instructions working on a faster CPU.
The link system was upgraded to a new 100 MHz mode, but unlike the previous systems the links were no longer downwardly compatible. This new packet-based link protocol was called DS-Link and later formed the basis of the IEEE 1355 serial interconnect standard. The T9000 also added link routing hardware called the VCP (Virtual Channel Processor) which changed the links from point-to-point to a true network, allowing for the creation of any number of virtual channels on the links. This meant programs no longer had to be aware of the physical layout of the connections. A range of DS-Link support chips were also developed, including the C104 32-way crossbar switch, and the C101 link adapter.
Long delays in the T9000's development meant that the faster load-store designs were already outperforming it by the time it was to be released. In fact, it consistently failed to reach its own performance goal of beating the T800 by a factor of ten: when the project was finally cancelled it was still achieving only about 36 MIPS at 50 MHz. The production delays gave rise to the quip that the best host architecture for a T9000 was an overhead projector.
This was too much for Inmos, which did not have the funding needed to continue development. By this time, the company had been sold to SGS-Thomson (now STMicroelectronics), whose focus was the embedded systems market, and eventually the T9000 project was abandoned. However, a comprehensively redesigned 32-bit transputer intended for embedded applications, the ST20 series, was later produced, utilising some technology developed for the T9000. The ST20 core was incorporated into chipsets for set-top box and GPS applications.
Although not strictly a transputer itself, the ST20 was heavily influenced by the T4 and T9 and did in fact form the basis of the T450 which was arguably the last of the transputers. The mission of the ST20 was to be a reusable core in the then emerging SOC market. In fact the original name of the ST20 was the RMC or Reusable Micro Core. The architecture was loosely based on the original T4 architecture with a microcode-controlled data path. It was however a complete redesign, using VHDL as the design language and with an optimized (and rewritten) microcode compiler. The project was conceived as early as 1990 when it was realized that the T9 would be too big for many applications. Actual design work started in mid-1992. Several trial designs were done, ranging from a very simple RISC-style CPU with complex instructions implemented in software via traps to a rather complex superscalar design similar in concept to the Tomasulo algorithm. The final design looked very similar to the original T4 core although some simple instruction grouping and a 'workspace cache' were added to help with performance.
Ironically, additional internal parallelism has been the driving force behind improvements in conventional CPU designs. Instead of explicit thread-level parallelism (such as that found in the transputer), CPU designs exploited implicit parallelism at the instruction-level, inspecting code sequences for data dependencies and issuing multiple independent instructions to different execution units. This is known as superscalar processing. Superscalar processors are suited for optimising the execution of sequentially-constructed fragments of code. The combination of superscalar processing and speculative execution delivered a tangible performance increase on existing bodies of code – which were mostly written in Pascal, Fortran, C and C++. Given these substantial and regular performance improvements to existing code there was little incentive to rewrite software in languages or coding styles which expose more task-level parallelism.
Nevertheless, the model of cooperating concurrent processors can still be found in cluster computing systems that dominate supercomputer design in the 21st century. Unlike the transputer architecture, the processing units in these systems typically utilize superscalar CPUs with access to substantial amounts of memory and disk storage, running conventional operating systems and network interfaces. Resulting from the more complex nodes, the software architecture used for coordinating the parallelism in such systems is typically far more heavyweight than in the transputer architecture.
The fundamental transputer motivation remains, yet was masked for over 20 years by the repeated doubling of transistor counts. Inevitably, microprocessor designers finally ran out of uses for the additional physical resources – almost at the same time when technology scaling began to hit its limits. Power consumption and therefore heat dissipation requirements render further clock rate increases unfeasible. These factors lead the industry towards solutions little different in essence from those proposed by Inmos.
The most powerful supercomputers in the world, based on designs from Columbia University and built as IBM Blue Gene, are real-world incarnations of the transputer dream. They are vast assemblies of identical, relatively low-performance SoC chips.
Recent trends have also tried to solve the transistor dilemma in ways that would have been too futuristic even for Inmos. On top of adding components to the CPU die and placing multiple dies in one system, modern processors increasingly place multiple cores in a single die. The transputer designers struggled to fit even one core into its transistor budget. Today designers, working with a 1000-fold increase in transistors, can now typically place many. One of the most recent commercial developments has emerged from XMOS, which has developed a family of embedded multi-core multi-threaded processors which resonate strongly with the transputer and Inmos.
The transputer and Inmos not only left a legacy in the computing world but also established Bristol, UK, as a hub for microelectronic design and innovation.
Platforms currently using Transputers
- David May, transputer architect
- Atari Transputer Workstation
- IEEE 1355 data interconnect standard derived from T9000 DS-links
- Meiko Computing Surface
- Ease programming language
- Allen Kent, James G. Williams (eds.) (1998) "Encyclopedia of Computer Science and Technology", ISBN 0-8247-2292-2, "The Transputer Family of Products", by Hamid R. Arabnia
- Barron, Iann M. (1978). "The Transputer". In D. Aspinall. The Microprocessor and its Application: an Advanced Course (Cambridge University Press): 343. ISBN 0-521-22241-9. Retrieved 2009-05-18.
- Stakem, Patrick H. The Hardware and Software Architecture of the Transputer, 2011, PRB Publishing, ASIN B004OYTS1K
- Inmos Technical Note 29: Dual-In-Line Transputer Modules (TRAMs)
- Edmunds, Nick (July 1993). "When two worlds collide". Personal Computer World.
- Bangay, Sean (July 1993). Parallel Implementation of a Virtual Reality System on a Transputer Architecture. Rhodes University. Retrieved 2012-05-06.
- Inmos T9000 CPU patent, "US patent 5742783",
- Inmos DS Link patent, "Communication Interface US patent 5341371",
- "The MYRIADE Platform". Retrieved 2011-08-22.
- David CHEMOUIL. "The Design of Space Systems". Retrieved 2011-08-22.
- The Transputer FAQ
- Ram Meenakshisundaram's Transputer Home Page
- WoTUG A group applying the principles of transputers (e.g., CSP) in other environments.
- Transputer emulator – It emulates a single T414 transputer (i.e. no FPU, no blitting instructions) and supplies the file and terminal I/O services that were usually supplied by the host computer system.
- PC based Transputer emulator – This is a PC port of the original T414 transputer emulator (called jserver) written by Julian Highfield in the mid-late 90s.
- Transputers can be fun.
- The Transterpreter virtual machine. – A portable runtime for occam-pi and other languages based on the transputer bytecode.
- The Kent Retargettable occam compiler. – The occam-pi compiler.
- transputer.net. – Documents and more about transputer.
- Transtech Parallel Systems Ltd. – still supporting transputer based systems as of Q4 2009 (TRAMs with I/O like SCSI or with T225/T425/T805/ST20450 transputers); Maidenhead, UK
- Inmos alumni Directory of ex-Inmos employees, plus photos and general info. Maintained by Ken Heddings.
- Prince Philip Designers Prize Prince Philip Designers Prize winners from 1959–2009, Design Council website
- HETE-2 Spacecraft internal systems
Control of Health-Care--Associated Infections, 1961–2011
Corresponding author: Richard E. Dixon, MD, Regional Medical Director; Health Net of California, Inc., 11971 Foundation Place; Rancho Cordova, CA 95670; Telephone: 916-935-1941; Fax 800-258-3506; E-mail: [email protected].
For centuries, hospitals have been known as dangerous places. In 1847, Ignaz Semmelweis presented evidence that childbed fever was spread from person to person on the unclean hands of health-care workers (1). Semmelweis's findings did not immediately improve sanitary conditions in hospitals, but surgeons gradually adopted aseptic and antiseptic techniques and became leading innovators of techniques to reduce patients' susceptibility to postoperative infections. Concerns about the spread of infection by air, water, and contaminated surfaces gradually changed practices in hospitals, making them safer. During the 1950s, epidemic penicillin-resistant Staphylococcus aureus infections, especially in hospital nurseries, captured the public's attention and highlighted the importance of techniques to prevent hospital-acquired infections, now also referred to as health-care--associated infections (HAIs; i.e., nosocomial infections) (2). By the mid-20th century, some surgeons, microbiologists, and infectious disease physicians had focused their studies on the epidemiology and control of HAIs (3,4). From the efforts of these pioneers grew the notion that hospitals had the ability---and the obligation---to prevent HAIs.
By the 1960s, hospital-based infection control efforts had been established in scattered hospitals throughout the United States. The number of hospitals with HAI control programs increased substantially during the 1970s, and HAI control programs were established in virtually every U.S. hospital by the early 1990s. The remarkable spread and adoption of programs designed to prevent and control HAIs hold valuable lessons about the ways that other public health initiatives can be designed, developed, and implemented. This report traces the strategic and tactical steps used to bring about a major public health success: the ubiquity of formal, established infection control programs in virtually all U.S. hospitals and their expansion into other health-care settings.
Developing the Public Health Model for Hospital Infection Control
By the late 1950s and early 1960s, a small proportion of hospitals had begun to implement programs designed to understand and control HAIs. The pioneering leaders of those efforts were located mostly in large, academic medical centers, not in public health agencies. Although state, local, and federal public health agencies were sporadically called on to provide epidemiologic or laboratory support to investigate particular problems, they did not consider hospitals as communities needing ongoing public health resources. Nor did hospitals routinely see themselves as communities needing such assistance. During the 1950s and even afterwards, many hospitals saw themselves as "the doctor's workshop" and their roles as providers of space and personnel to support practicing physicians. In most communities, a hospital was perceived as good because doctors who practiced there were perceived as good, not because the hospital's outcomes were better than its competitors'. Focused on patients and doctors as individuals, most hospitals neither tracked nor had systems in place designed to improve their overall outcomes; public health--based and population-based principles often were not important management priorities. The nosocomial staphylococcal epidemics of the 1950s began to change those attitudes.
History did not record who first understood---or when it was first recognized---that hospitals are discrete communities in which public health principles could be used to prevent and control HAIs. But by the 1960s, hospital-based clinicians and CDC epidemiologists clearly were beginning to apply a public health model to HAIs. That model was built around systematic surveillance to identify HAIs; ongoing analysis of surveillance data to recognize potential problems; application of epidemic investigation techniques to epidemic and endemic HAIs; and implementation of hospitalwide interventions to protect patients, staff, and visitors who seemed to be at particular risk.
One might assume that the public health system would have managed the public health approach to HAIs. It did not. Instead, a different approach evolved. Hospitals built and managed their own infection control programs. The historical record is murky as to why infection control programs became the responsibility of hospitals, rather than local, state, or national public health agencies. Although many exceptions certainly existed, hospitals generally did not work closely with their local health departments, and when they did interact, the health departments were sometimes seen to be regulators, not colleagues. A perception at the time was that most health departments had little interest in the hospitals' clinical activities.
Given the absence of a tradition of collaboration between community hospitals and local health departments, two of CDC's first public health research and development activities were embedded in hospitals themselves. One was a national network of hospitals that volunteered to conduct HAI surveillance by using CDC methods and to report those data to CDC each month. That voluntary surveillance system, the National Nosocomial Infection Surveillance program, has changed over the years but remains active as the National Healthcare Safety Network (NHSN; http://www.cdc.gov/nhsn/) and continues to provide information about the changing patterns of HAIs.
The second of CDC's research projects also was located in community hospitals, and it profoundly affected the evolution of infection control programs. The Comprehensive Hospital Infections Project (CHIP) was begun in 1965 (5). Eight community hospitals, which were located in different cities across the country, participated in the project. Those hospitals served as the laboratories where surveillance and control techniques were developed. CDC funded those activities, and Atlanta-based CDC staff actively collaborated in the research. Physician and nurse epidemiologists, along with CDC microbiologists, visited CHIP hospitals regularly and conducted studies to learn the epidemiology of HAIs. CHIP studies helped to define how HAIs could be identified and distinguished from community-acquired infections. Hospital staff and CDC epidemiologists explored what data were needed to improve practices and how those data should be analyzed and reported. That direct field epidemiology experience gave CDC important insights into the ways that community hospitals worked. The close interactions with the hospitals undoubtedly helped CDC develop unique recommendations that were credible to hospitals and practical for them to use.
CDC's decision to use community hospitals for some of its early research was a strategic one. Most hospital inpatients were---and still are---treated in community hospitals. Although CDC staff interacted closely and shared ideas with leading infectious disease experts in the United States and Europe, CDC's involvement with community hospitals made the resulting infection control models and techniques more likely to be appropriate for use in the kinds of institutions where most patients get hospital care.
Promoting the Public Health Model to All U.S. Hospitals
As the infection control community developed confidence in the value of infection control programs, the next task was to assist other hospitals to adopt them voluntarily. Two barriers were obvious. First, hospitals were not required to have such programs, so the value of the activities had to be promoted to hospital administrators and clinical staffs. Because they recognized such programs as advantageous to the hospital and its patients, many hospitals voluntarily adopted and paid for such programs.
The second problem posed a larger challenge. Because local and state health departments did not have the resources to place their personnel in every hospital needing an infection control program, where would the trained infection control specialists come from? Existing hospital personnel had to be recruited and trained to use entirely new public health and epidemiologic skills.
The new jobs were often filled by existing staff nurses and laboratorians who built new careers as infection control practitioners (ICPs). The ICPs usually were supervised by hospital epidemiologists---typically physicians selected from the existing medical staff, such as pathologists or infectious disease--trained physicians. These doctoral-level program directors often were hired to provide this service part time, and many volunteered to serve without pay. Both positions---ICP and hospital epidemiologist---were newly created positions, and at the time, few ICPs or hospital epidemiologists had more than cursory formal training in epidemiology or any other public health discipline.
Training for these new careers often took place informally, on the job, by networking with colleagues in other hospitals, and by taking brief training courses. Many of the pioneer infection control programs were staffed by practitioners who had either attended a week-long training course conducted at CDC or had been trained by another practitioner who had been trained at CDC. As a result, the knowledge and attitudes of the earliest infection control staff had considerable uniformity. Those pioneers soon became the leaders of their new fields and naturally became the teachers and consultants for new practitioners. The public health model became an unofficial standard of practice; it focused on active prospective surveillance, data analysis, and reporting, and it emphasized prevention programs that relied on the education of hospital staff about infection control techniques.
Although using existing hospital staff and retraining them for their new jobs provided many advantages, this practice also had unanticipated disadvantages. Few infection control pioneers brought investigative experience to their new positions. As a result, when problems were discovered by surveillance, instead of basing interventions on locally acquired epidemiological and laboratory evidence, often they were based merely on established guidelines and recommendations that seemed logically to make the most sense. The evidence base for many of those guidelines was not strong, however, because effectiveness studies of intervention programs had rarely been conducted.
Infection Control Becomes a Profession
The rapid growth and acceptance of infection control programs was undoubtedly stimulated by the new career possibilities offered by the emerging infection control field. Staff nurses, microbiologists, pathologists, and infectious disease clinicians were eager to become part of a field that provided new skills and offered new opportunities. The professionalization of infection control practice was strengthened when, in 1972, infection control practitioners formed a professional society, the Association of Practitioners in Infection Control (APIC, now the Association for Professionals in Infection Control and Epidemiology). APIC was formed to provide practitioners with continuing professional interaction, education, and growth. A certifying program based on practitioners' education, experience, and test scores followed in 1980, further establishing infection control as an attractive career.
The hospital epidemiologists followed soon afterwards in forming their own professional society, the Society of Hospital Epidemiologists of America (SHEA), now The Society for Healthcare Epidemiology of America. Its initial membership requirements allowed only physicians to join, and physician infectious disease subspecialists accounted for most of its early members. Only several years after its founding were nonphysician epidemiologists, sanitarians, microbiologists, and other doctoral-level practitioners able to join SHEA. The doctoral-level societies were also divided. Surgeons interested in hospital-acquired infections formed their own society: the Surgical Infection Society (SIS). SIS, like the other professional associations, has expanded membership to other categories of physicians, nurses, and others with an interest in surgical infections. SIS, SHEA, and APIC have not merged, although they have developed collegial working relationships and have important collaborations.
Although the development of trained professional cadres of infection control experts in every hospital seems to be an obvious benefit, it must be asked whether infection control would have been more innovative and might have advanced faster if the practitioners of the new careers had welcomed other disciplines and other kinds of expertise into the field earlier. Would that have promoted innovation? Would it have led to faster development of an evidence base for infection control? Perhaps so. Public health officials need also to consider this question as they develop and deploy new approaches to public health practice.
Transforming Infection Control from Movement to Mandate
By the late 1970s, the infection control field was well established. It had strong presences in hospitals across the country, organized work forces, a coherent model that guided the field's activities, and a rapidly expanding body of scientific publications. A decade earlier, during the late 1960s and early 1970s, however, that degree of success was not certain. During the early 1970s, the hospital infection control movement faced the same challenges as many other public health initiatives have before it: how to increase adoption by more communities and how to convert a good idea into a virtual mandate for action.
By the mid-1970s, HAIs were recognized as a major threat associated with medical care. Despite the increasing public and professional concern about HAIs, it became apparent during the mid-1970s that not all hospitals were adopting infection control programs. CDC had ready access to national professional societies, health-care trade associations, accrediting organizations, and regulatory agencies, but infection control programs, although encouraged, were not mandated. Some hospitals had no programs at all. Other hospitals had programs, but no requirement existed to ensure they were properly staffed, well structured, or effective. The absence of a requirement that hospitals have effective infection control programs to protect the public was due, in part, to the fact that the evidence for the effectiveness of the public health model for infection control programs was mostly only anecdotal. It had a compelling story; it seemed like a good thing to do; but it was not evidence based.
CDC determined that a rigorous scientific assessment of the effectiveness of infection control programs would be necessary to propel widespread adoption of hospital-based programs. That decision led to the Study on the Efficacy of Nosocomial Infection Control (SENIC), a rigorous assessment of infection control effectiveness that compared outcomes in hospitals with and without CDC-style infection control programs (6). The study was designed to determine whether infection control programs using CDC-recommended practices actually reduced the risks from HAIs. To conduct the study, 338 U.S. hospitals were randomly selected and were stratified by geography, inpatient bed capacity, and teaching status. Approximately half of the study hospitals had established infection surveillance and control programs. When that study showed that hospitals with infection control programs had significantly lower rates of HAIs than did hospitals without such programs (7), expectations for hospital programs changed. With strong scientific evidence supporting the value of such programs, accrediting organizations such as the Joint Commission on Accreditation of Hospitals (now The Joint Commission) mandated that accredited hospitals have infection control programs similar to those recommended by CDC and the professional organizations of hospital epidemiologists and infection control practitioners. The Joint Commission made this an accreditation requirement in 1976 (8).
The SENIC study converted a movement into a mandate. Although it is widely agreed that new treatment interventions for individual patients should be tested in rigorous clinical trials, such trials are much less common for large population-based interventions. The design and conduct of assessments for population-based interventions can be difficult scientifically, legally, and ethically. They also can be expensive, and often no commercial company is interested enough to sponsor such studies. As a result, SENIC-style studies are rarely conducted by public health agencies.
Beyond its revolutionary effect on infection control practices in hospitals, the SENIC study served as an example that rigorously conducted public health research can change the credibility and acceptability of public health interventions and can speed adoption of important programs. It established how, when a public health problem is important enough, a scientifically rigorous population-based assessment can be used to propel the implementation of effective programs. In the future, public health programs are likely to face ever-greater demands for proof of worth and more competition for support, and more SENIC-style studies may be needed.
Hospital Epidemiology in the New Century
CDC continues to play an important role in HAI prevention research. CDC's Division of Healthcare Quality Promotion (DHQP) has substantial expertise in HAI control, stemming in part from decades of experience in HAI epidemiologic investigations. That, along with its central role in the public health infrastructure, gives CDC a unique opportunity and responsibility to guide and support research that directly addresses the knowledge gaps most relevant to the public health.
In addition to the important research contributions that arise directly from the core activities of outbreak investigation, laboratory support, and HAI surveillance, CDC dedicates funds for innovative extramural HAI prevention research through its Prevention Epicenter Program. DHQP began the Prevention Epicenters Program in 1997 as a way to work directly with academic partners to address important scientific questions about the prevention of health-care--associated infections, antibiotic resistance, and other adverse events associated with health care. Through a collaborative funding mechanism, DHQP staff work closely with a network of academic centers to foster research on the epidemiology and prevention of HAI, with an emphasis on multicenter collaborative research projects. The program has provided a unique forum in which leaders in health-care epidemiology can collaborate with each other and with CDC to pursue innovative research endeavors that bring into alignment both academic and public health research goals and objectives and create important synergies that might not be possible for a single academic center or without the benefit of cross-fertilization of ideas between academic and public health experts.
Research conducted through the Epicenters program has produced valuable contributions to the field and to the mission of DHQP. The program has resulted in approximately 150 peer-reviewed publications that cover a broad array of topics relevant to HAI prevention, including the epidemiology of infections caused by multidrug-resistant organisms and Clostridium difficile; development and testing of novel prevention strategies, such as the use of chlorhexidine bathing to prevent bloodstream infections and pathogen transmission among intensive-care unit patients; and development of novel HAI surveillance strategies that are helping to shape the future of HAI surveillance through the National Healthcare Safety Network. CDC should seek to maintain an active participatory role in HAI research.
As CDC plans its research agenda, another lesson taught by the development of infection control as a public health discipline should be remembered: sometimes public health agencies need to actually conduct research, not just fund it. CDC's credibility obtained through its own research was an essential factor in its ability to promote infection control programs. Working in hospitals, collecting data, and conducting field studies alongside hospital workers gave CDC a unique understanding of the challenges that hospital-based infection control personnel face. As a result, CDC recommendations were more likely to be useful and appropriate than they would have been had CDC simply funded others to do its research. Learning the subtleties of what did not work or what was impractical to implement was perhaps more important than learning what did work, and this was learned best by the agency conducting the research itself.
The landscape of infection control and health-care epidemiology began another dramatic shift with the publication of the Institute of Medicine (IOM) report, To Err is Human, in 1999 (9). This report revealed that thousands of patients in U.S. hospitals were injured or died each year because of medical errors---many of which might have been preventable. HAIs were recognized as a leading cause of these preventable harms. This report was followed by an influential series of investigative articles on health-care--associated infections published by the Chicago Tribune. These reports underscored the findings of the IOM report on the major public health effects of HAIs and criticized hospitals for failing to prevent these infections and keeping secret the scope of the problem. The IOM report and Chicago Tribune articles touched off an active debate about HAI prevention and spurred action by consumers and legislatures. In 2002, four states (Illinois, Florida, Missouri, and Pennsylvania) passed laws to mandate that health-care facilities report HAIs to the public. Proponents of the legislation argued that health-care facilities would finally begin to take real steps toward preventing HAIs if they had to disclose them more openly.
Public interest in HAIs reached an important tipping point in 2005--2006 with the publication of two studies about the prevention of central line--associated bloodstream infections (CLABSIs). One study was a collaboration between CDC and the Pittsburgh Regional Healthcare Initiative and the other a collaboration between researchers at Johns Hopkins University Hospital and the Michigan Hospital Association (10,11). Both studies brought together staff from a large number of intensive-care units who collaborated to reduce CLABSIs by implementing a relatively simple set of interventions. The results of the studies were striking and consistent. In each, CLABSIs were reduced by roughly 65%.
Increasing awareness of the scope of the HAI problem, coupled with the recognition that a substantial portion of these infections could be prevented, galvanized even more consumers and policy makers to take action. Many other state legislatures began to debate and pass laws to mandate the public reporting of HAIs. In recognition of the growing interest in so-called public reporting, CDC worked with the Healthcare Infection Control Practices Advisory Committee to develop recommendations to help guide future legislation (12). These laws have now become widespread. Twenty-eight states have passed legislation that requires the public reporting of one or more HAIs, and legislation is pending in others. Federal lawmakers also have taken up the HAI issue. In 2008, as part of the larger deficit-reduction act, Congress mandated that the Centers for Medicare and Medicaid Services (CMS) stop giving hospitals increased payments for the care of patients with HAIs. CMS worked closely with CDC to identify HAIs that were "reasonably preventable" to support implementation of this requirement. In 2010, Congress incorporated HAI prevention into the Value Based Purchasing program of the Affordable Care Act. CMS has elected to implement the requirement by requiring national public reporting of HAIs, beginning with CLABSIs in 2011.
CDC is playing a central role in supporting legislative mandates on HAI reporting and prevention. Laws in 22 of the 28 states that require reporting of HAIs specifically stipulate that facilities use the CDC's NHSN as the platform for that reporting. Likewise, the new CMS mandate will require submission of data to NHSN. These requirements have led to a dramatic expansion in NHSN enrollment, from roughly 300 hospitals in 2006 to approximately 3,500 in 2010. Increasingly, state health departments, with support from CDC, are leading HAI prevention efforts. Their role in HAI prevention was recognized and greatly enhanced in 2009 with passage of the American Recovery and Reinvestment Act. That legislation included $50 million to support state-based HAI prevention efforts. American Recovery and Reinvestment Act funds were distributed through CDC's Epidemiology and Laboratory Capacity grant to support state efforts to build HAI infrastructure and expand surveillance and prevention efforts. CDC staff and experts are now supporting HAI prevention efforts in 49 funded states, the District of Columbia, and Puerto Rico. Specifically, CDC subject-matter experts are helping guide the expansion and validation of HAI surveillance data and the initiation and expansion of HAI prevention.
Efforts to prevent and control HAIs have led to profound changes in the ways that those infections are perceived and managed in the United States and abroad. Programs focused on preventing and controlling HAIs were rare in U.S. hospitals in the early 1970s; now, they are present in virtually every hospital in the nation and in many hospitals abroad.
Among the main factors that led to this success was, most importantly, CDC's decision to use a rigorous scientific study, the SENIC study, to demonstrate that infection control programs were effective. This evidence obtained from SENIC converted infection control programs from being something worth doing into programs that must be implemented to reduce illness and death. Before SENIC, the evidence for the effectiveness of infection control programs was insufficient to make these programs mandatory. With evidence from SENIC, it was virtually impossible for hospitals to avoid implementing them.
CDC's ability to work with others to design and refine infection control programs was almost certainly aided by CDC's direct field experience investigating epidemics. Perhaps even more important was CDC's experience working directly with hospitals over a long period to design and test surveillance and control techniques. That first-hand field epidemiology helped CDC to learn how hospitals function and to design infection control programs that were practical and could be implemented.
CDC and other pioneers helped to define a new field (hospital epidemiology) and new professional disciplines (infection control and hospital epidemiology). When no training courses or job descriptions existed for those essential hospital workers, CDC provided the key early training and job-development resources used by a large proportion of infection control pioneers. Because of CDC's early dominance in defining the work of these new disciplines, CDC profoundly affected knowledge base, work activities, and extent of the practitioners' responsibilities.
Finally, hospital epidemiology was, for many years, a misleading title for a field that mainly focused on HAIs. As the patient safety movement has vividly shown, the opportunities for strong public health skills in hospitals extend far beyond mere infection control. CDC has the capacity to continue to support that effort and thereby help prevent the range of errors, omissions, and other preventable mishaps that still plague the organizations that should heal, not harm.
- Semmelweis I. Etiology, concept and prophylaxis of childbed fever. Madison, WI: University of Wisconsin Press; 1983.
- Wise RI, Ossman EA, Littlefield DR. Personal reflections on nosocomial staphylococcal infections and the development of hospital surveillance. Med J Aust 1978;12:543--6.
- Finland M, McGowan JE Jr. Nosocomial infections in surgical patients. Observations on effects of prophylactic antibiotics. Arch Surg 1976;111:143--5.
- McGowan JE Jr, Barnes MW, Finland M. Bacteremia at Boston City Hospital: occurrence and mortality during 12 selected years (1935--1972), with special reference to hospital-acquired cases. J Infect Dis 1975;132:316--35.
- CDC. Nosocomial infections in community hospitals, report no. 4, July 1968--1969. Atlanta, GA: US Department of Health, Education, and Welfare, CDC; 1969.
- Haley RW, Quade D, Freeman HE, et al. The SENIC project: Study on the Efficacy of Nosocomial Infection Control (SENIC PROJECT): summary of study design. Am J Epidemiol 1980;111:472--85.
- Haley RW, Culver DH, White JW, et al. The efficacy of infection surveillance and control programs in preventing nosocomial infections in US hospitals. Am J Epidemiol 1985;121:182--205.
- Weinstein RA. Nosocomial infection update. Emerg Infect Dis 1998;4:416--20.
- Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academies Press; 2000.
- CDC. Reduction in central line--associated bloodstream infections among patients in intensive care units---Pennsylvania, April 2001--March 2005. MMWR 2005;54:1013--6.
- Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725--32.
- McKibben L, Horan T, Tokars JI, et al. Guidance on public reporting of healthcare-associated infections: recommendations of the Healthcare Infection Control Practices Advisory Committee. Am J Infect Control 2005;33:217--26.
Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.
Inside the Macintosh SE
By Gabriel Torres on February 6, 2013
Following the release of the original Macintosh (a.k.a. Macintosh 128K) and the Macintosh 512K in 1984, and the Macintosh Plus in 1986, Apple released the Macintosh SE in 1987. Several versions of this computer were released. Let’s discuss it in detail.
The Macintosh SE was based on the same processor (the Motorola 68000; the later Macintosh SE/30 used a Motorola 68030), was the same size, and used the same screen size and resolution (9-inch black-and-white, 512 x 342 pixels) as the previous Macintosh models. It inherited the external SCSI port, the 800 KB floppy disk drive (later upgraded to a 1.44 MB floppy, as we will discuss), and 1 MB of RAM using four SIMM-30 memory modules (allowing you to upgrade the memory to 4 MB) from the Macintosh Plus.
This computer, however, came with new features. The most important was the addition of an internal hard disk drive (20 MB or 40 MB) connected to a new internal SCSI port. This was the first Macintosh computer to feature an internal hard drive. (The dual drive model didn’t come with an internal hard drive.)
Another difference between the Macintosh SE and the previous models was the addition of a new bus for connection of peripherals such as keyboard and mouse, called ADB (Apple Desktop Bus).
The Macintosh SE was also the first Mac to feature an expansion slot, called the PDS (Processor Direct Slot). In fact, “SE” stands for “System Expansion.” Actually, the Macintosh II, which was released at the same time, also featured expansion slots. The Macintosh II was a full-sized desktop computer, unlike the SE and its predecessors, which were compact computers.
Another feature that was added with the SE was a cooling fan. Previous Apple computers didn’t feature a fan because Steve Jobs thought it was noisy and “inelegant.”
In Figure 1, you can see a Macintosh SE system with a keyboard and a mouse.
Nowadays, the first thing you will notice when looking at the Macintosh SE is how small it was. In Figure 2, we compare the Macintosh SE to a 21-inch LCD monitor.
Unlike the previous models, the computer didn’t come with a keyboard; you had to buy one separately. At the time, two choices were available: the Apple Keyboard (model M0116), which was cheaper and thus more common (the one shown in Figures 1, 2, and 3) and the Apple Extended Keyboard (model M0115), which was bigger and more expensive. We show this keyboard in Figure 4. Since the Macintosh SE used an ADB port, any keyboard based on this connection could be used. For example, you could use the Apple IIgs keyboard or buy a keyboard manufactured by a different company.
The mouse that came with the Macintosh SE was different from the one that came with the previous Macintosh models, as you can see in Figure 5 (models A9M0331 or G5431). As with the keyboard, you could use any mouse based on the ADB connection.
In Figures 6 and 7, you have an overall look at the Macintosh SE. Similarly to the previous Macintosh models, the Macintosh SE had a brightness adjustment button on its front panel, below the Apple logo.
The Macintosh SE is easy to identify, as it has “Macintosh SE” written on its front panel and on its rear panel as well. However, four different versions of the Macintosh SE were released. Below, we address their differences and how to identify the exact model you may have.
A minor difference between the Macintosh SE and the previous models was the kind and location of the battery in charge of the computer’s real time clock. While in the previous models this battery was accessible through the rear panel, on the Macintosh SE this battery was installed on the motherboard.
As with the previous models, the Macintosh SE allowed you to install an anti-theft device: a metallic tab for attaching a steel cable to deter theft, a feature that was highly desirable in public spaces such as schools.
In Figure 9, you can see the connectors available at the rear panel of the Macintosh SE. The first two connectors were the ADB connectors for you to install the mouse and the keyboard. Next, there was a port for the installation of an external floppy disk drive. Then there was an external 25-pin SCSI port, which allowed you to install external SCSI devices (such as an external hard drive) to the computer.
There were two serial ports: one for a printer and one for an external modem, using the same kind of connector introduced with the Macintosh Plus. (The Macintosh 128K and the Macintosh 512K also had two serial ports, but with the Macintosh Plus, Apple changed the connector type used for them from DE-9 to DIN-8, which became the standard for future Apple computers.)
The last connector was a 3.5 mm connector for an external speaker (the computer had an internal speaker).
The following models of the Macintosh SE were released.
This model, also known as M5010, came with two 800 KB floppy disk drives, but it didn’t come with an internal hard drive. As the motherboard was the same as the standard model, you could install an internal hard drive. However, there was not enough space for two floppy disk drives and a hard disk drive, so you had to remove one of the floppy drives to install a hard drive. It is possible, however, to create a homemade bracket to install the two floppy drives and a hard drive simultaneously.
You can identify this model by the presence of two floppy disk drives as well as the model number M5010 and the phrase “Two 800K Drives” on its rear label.
The standard model came with one 800 KB floppy disk drive and either a 20 MB or 40 MB hard disk drive. It used the model number “M5011,” which is the same as the FDHD and the SuperDrive models. You can differentiate it from these other two models as they have the names “FDHD” or “SuperDrive” written on their front panel.
The Macintosh SE FDHD (Floppy Drive High Density) was a Macintosh SE with a 1.44 MB floppy disk drive instead of an 800 KB floppy disk drive. Its model number is the same as the standard model, but you can tell the two apart by the presence of the letters “FDHD” on the front panel. See Figure 12.
After a while, Apple renamed the Macintosh SE FDHD to “SuperDrive.” So, the Macintosh SE FDHD and the Macintosh SE SuperDrive are the same computer but with a different name. This model has the word “SuperDrive” written on its front panel.
The Macintosh SE/30, released in 1989, used a different motherboard that was based on the Motorola 68030 microprocessor (hence its name), which was more powerful than the 68000 processor used on other models. The “30” in the name had nothing to do with the size of the hard drive that came with the computer.
This model had a different model number: M5119. It came with a 1.44 MB floppy disk drive and could come with either a 40 MB or an 80 MB hard disk drive. Furthermore, it had either 1 MB or 4 MB of memory, which could be expanded up to 128 MB.
Another difference between the Macintosh SE/30 and the other Macintosh SE models was the type of expansion connector with which it came. While the expansion connector was generically called PDS (Processor Direct Slot) in both systems, the connector on the Macintosh SE had 96 contacts (three rows with 32 contacts each) and was called PDS 68000, while on the Macintosh SE/30 this connector had 120 contacts (three rows with 40 contacts each) and was called PDS 68030.
You could easily identify this model by the name it had on its front panel, “Macintosh SE/30,” or by the model number written on the label available on its back part. Interestingly, while on the other models the name “Macintosh SE” was written on the right-hand side of the front panel, on the Macintosh SE/30 its name was written on the left-hand side, near the Apple logo.
To keep regular users from opening the Macintosh SE, Apple used Torx TT15 screws, a very unusual type of screw for computers (especially at the time), which required a special TT15 screwdriver at least 9 inches (230 mm) long. The same applied to the previous Macintosh models.
Inside the computer, you would find its most talked-about (and hidden) feature: the signatures of all the members of the team that designed the Macintosh, including, of course, Steve Jobs. See Figure 15. These signatures were also present on the previous Macintosh models. The Macintosh Plus and the Macintosh SE were released after Steve Jobs left Apple, but for some reason, his signature was kept inside the computer, even though he was not involved in the development of these computers – in particular, the Macintosh SE. Jobs would never have approved the addition of a cooling fan.
You will see several people selling old Macs on eBay saying “This Mac is so special that it has Steve Jobs’s signature” or “rare – signed by Steve Jobs.” Let’s make something clear. All early Macintoshes were signed by the whole team, so that is not a “special feature.” Since millions of these computers were sold, they are not rare.
In Figure 16, you can see how the Macintosh SE looked inside. It consisted of two printed circuit boards. One contained the power supply and the electronics for the monitor; the other was the motherboard. These boards were different from the ones used on previous Macintosh models.
On the previous Macintosh models, the analog board had the electronics for the video monitor and the power supply. On the Macintosh SE, however, the power supply was available as a separate unit, although screwed to the analog board. See Figure 17. In Figure 18, you can see the video monitor board (officially called “Macintosh SE Analog”), part number 820-0206 or 630-0147, with the power supply removed. In Figure 19, you can see the power supply by itself.
As mentioned, the Macintosh SE was based on the Motorola 68000, which was a 32-bit microprocessor using a 16-bit data bus and a 24-bit address bus, allowing it to access up to 16 MB of memory. The Macintosh SE/30 was based on a different processor, the Motorola 68030. Because of that, we will discuss the motherboard used on this computer later.
The motherboard of the Macintosh SE, part number 820-0176 or 630-4125, had four SIMM-30 memory sockets and originally came with four 256 KB memory modules installed. The four 256 KB modules could be replaced with two 1 MB modules for 2 MB of RAM, or with four 1 MB modules for 4 MB of RAM. Since the 68000 CPU used a 16-bit data bus and each SIMM-30 memory module is an eight-bit device, you need two or four memory modules installed; you can’t install one or three memory modules.
In order to install memory modules with more than 256 KB, you need to cut one of the legs of the R35 resistor (labeled “256K BIT”). See Figure 21. On some revised motherboards you need to simply move the position of a jumper, instead of having to cut a resistor. See Figure 22. In this case, you must move the jumper to the “2/4M” position to enable 2 MB of memory. However, in order to enable 4 MB of memory, you must remove the jumper from the motherboard (and not place it at the “2/4M” position, as it would be logical to assume).
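To make the pairing rule concrete, here is a minimal sketch (my own illustration, not Apple documentation) that checks whether a proposed set of SIMM-30 modules satisfies the constraints described above and reports the resulting total. Only the module sizes and the pairing rule come from the text; the function name and the idea of validating configurations in software are mine.

```python
# A minimal sketch (illustration only, not Apple documentation) of the SIMM-30
# pairing rule described above: the 68000 reads 16 bits at a time and each
# SIMM-30 supplies 8 bits, so modules must be installed in matched pairs.

def check_simm_config(modules_kb):
    """Return total RAM in KB for a usable setup, or raise ValueError.

    modules_kb -- sizes in KB of the installed modules, e.g. [256, 256, 256, 256]
    """
    if len(modules_kb) not in (2, 4):
        raise ValueError("Install exactly two or four SIMMs, never one or three.")
    # Each bank of two sockets must hold identical modules, so both 8-bit
    # halves of every 16-bit word have the same depth.
    for bank_start in range(0, len(modules_kb), 2):
        a, b = modules_kb[bank_start], modules_kb[bank_start + 1]
        if a != b:
            raise ValueError(f"Modules within a bank must match ({a} KB vs {b} KB).")
    return sum(modules_kb)

if __name__ == "__main__":
    for setup in ([256, 256, 256, 256],        # stock configuration: 1 MB
                  [1024, 1024],                # two 1 MB modules: 2 MB
                  [1024, 1024, 1024, 1024]):   # four 1 MB modules: 4 MB
        print(setup, "->", check_simm_config(setup) // 1024, "MB")
```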
The Macintosh SE used the same NCR 5380 SCSI controller as the Macintosh Plus. It also used a 65C22, which was an upgraded version of the 6522 “Versatile Interface Adapter” used on the previous Macintosh models and in charge of mouse and keyboard communications. It also used the Z8530 serial communications controller, in charge of the two serial ports; and the custom-made IWM (Integrated Woz Machine), in charge of controlling the floppy disk drives.
On the FDHD and on the SuperDrive models, the IWM chip was replaced with the new SWIM (Super Woz Integrated Machine) chip, to support 1.44 MB floppy disk drives. For this reason, you couldn’t install 1.44 MB floppy disk drives into the standard Macintosh SE and the dual floppy Macintosh SE, as their control circuits didn’t support this kind of drive.
The Macintosh SE had two floppy disk drive ports on the motherboard instead of only one.
Two new custom-made chips were added to the Macintosh SE, replacing the six PAL (Programmable Array Logic) chips available in previous Macintosh models. These chips were called BBU (Bob Bailey Unit, a big chip manufactured by VLSI) and GLU (General Logic Unit).
Other features that were inherited from previous Macintosh models were the reset and interrupt buttons (seen at the top right corner in Figure 20), which were targeted to programmers. These buttons were normally not accessible from outside the computer. However, as these buttons were located in front of the side ventilation slits of the computer, programmers could buy a special “programmer's switch” that could be attached to this vent (located on the left-side of the computer) and, therefore, access them.
As previously mentioned, one of the new features of the Macintosh SE was its expansion connector, called PDS (Processor Direct Slot). This expansion connector allowed you to install expansion cards, such as accelerator and networking cards. Accelerator cards replaced the 68000 microprocessor with another more powerful CPU, usually a 68020, and frequently added more RAM, allowing you to have more than 4 MB.
The Macintosh Portable, the Macintosh SE/30, and the Macintosh IIfx also had a PDS connector. Although the PDS connector of the Macintosh Portable had the same number of contacts as the PDS slot of the Macintosh SE, it was not compatible with the connector available on the Macintosh SE. The Macintosh SE/30 and the Macintosh IIfx used a different model of the PDS connector, called PDS 68030, which was not compatible with the PDS connector available on the Macintosh SE.
To install an expansion card, you had to open the computer and remove the motherboard – a rather complicated process.
In Figure 24, you can see a Radius SE accelerator card installed on the Macintosh SE motherboard, which replaced the 68000 microprocessor with a 68020. In Figure 25, we show a different accelerator card that was available for the Macintosh SE, the Prodigy SE from Levco, which also replaced the microprocessor with a 68020 model and added 16 MB of RAM.
In Figures 26 and 27, you can see two different models of Ethernet cards.
The motherboard for the Macintosh SE/30 was completely different from the one used on the other models of the Macintosh SE.
The Macintosh SE/30 used a different microprocessor, the 68030, running at 16 MHz instead of the 7.8 MHz of the previous models and more powerful than their 68000. It had an external 32-bit data bus (the 68000 used an external 16-bit data bus, so combining the higher clock rate with the wider memory path gave the 68030 roughly four times the memory bandwidth of the 68000 in previous models) and a 32-bit address bus, allowing the CPU to address up to 4 GB of RAM (in theory). It also had a 256-byte instruction cache and a 256-byte data cache, features not available in the 68000.
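As a rough sanity check on that “four times the memory bandwidth” figure, the following back-of-envelope sketch (my own arithmetic, not a benchmark) multiplies each machine’s clock rate by its data-bus width. It deliberately ignores how many clock cycles a real bus access takes, since that overhead is broadly similar on both machines and cancels out of the ratio.

```python
# Back-of-envelope check (my own arithmetic, not a benchmark) of the claim that
# the SE/30 had roughly four times the memory bandwidth of the SE. Peak rates
# only: real accesses take several clock cycles per transfer, but that factor
# is similar on both machines and cancels out of the ratio.

def peak_bytes_per_second(clock_mhz, data_bus_bits):
    """Upper bound assuming one full-width transfer per clock cycle."""
    return clock_mhz * 1_000_000 * (data_bus_bits / 8)

se_peak   = peak_bytes_per_second(7.8, 16)    # Macintosh SE: 68000
se30_peak = peak_bytes_per_second(16.0, 32)   # Macintosh SE/30: 68030

print(f"SE    peak: {se_peak / 1e6:5.1f} MB/s")
print(f"SE/30 peak: {se30_peak / 1e6:5.1f} MB/s")
print(f"ratio: {se30_peak / se_peak:.1f}x")   # prints about 4.1x
```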
The SE/30 used a Motorola 68882 math co-processor. Previous Macintosh models didn’t have a math co-processor.
The motherboard of the Macintosh SE/30 had eight SIMM-30 sockets, allowing you to install up to 128 MB of RAM if eight 16 MB modules were used. The ROM of the computer was available in a SIMM-72 module.
Some of the chips used on the Macintosh SE/30 were upgraded versions of those used on the previous Macintosh models. The SCSI controller was the 53C80, an upgraded version of the 5380 used on previous models. For controlling the two serial ports available (“Printer” and “Modem”), a Z8530 serial communications controller was still used, but in a PLCC package instead of DIP, which occupies less space on the printed circuit board. For controlling the keyboard and mouse, it used two 65C22 chips (instead of only one as in the regular Macintosh SE), again in PLCC packaging instead of DIP. The SE/30 used the SWIM (Super Woz Integrated Machine) chip to control the floppy disk drive, supporting 1.44 MB drives. Unlike the regular SE, the SE/30 had only one port for a single floppy disk drive.
The Macintosh SE/30 uses a custom-made chip called GLUE (General Logic Unit), manufactured by VLSI, instead of the BBU chip present on the Macintosh SE or the PALs present on previous models.
There were two 256-kbit chips for the video memory, providing 64 KB of dedicated video memory. On previous Macintosh models, part of the main RAM had to be used as video memory.
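A quick calculation (again my own, for illustration) shows why 64 KB of dedicated video RAM is comfortable headroom for the 512 x 342 black-and-white (one-bit-per-pixel) screen described earlier in the article.

```python
# Quick arithmetic (my own, for illustration): the one-bit 512 x 342 screen
# needs far less than the 64 KB of dedicated video RAM mentioned above.

WIDTH, HEIGHT, BITS_PER_PIXEL = 512, 342, 1   # 9-inch black-and-white display
VRAM_CHIPS, CHIP_KBITS = 2, 256               # two 256-kbit chips, as stated above

framebuffer_bytes = WIDTH * HEIGHT * BITS_PER_PIXEL // 8
vram_bytes = VRAM_CHIPS * CHIP_KBITS * 1024 // 8

print(f"Frame buffer  : {framebuffer_bytes} bytes (~{framebuffer_bytes / 1024:.1f} KB)")
print(f"Dedicated VRAM: {vram_bytes} bytes ({vram_bytes // 1024} KB)")
```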
Audio was upgraded from the previous models, with the addition of a custom-chip called ASC (Apple Sound Chip) and two sound processing chips from Sony, providing stereo sound with up to four simultaneous voices. Previous models used a technique called PWM (Pulse Width Modulation) to generate audio, and only mono audio was available. Also, on previous models, the PWM circuit also controlled the speed for 400 KB floppy disk drives.
As with the Macintosh SE, the Macintosh SE/30 had a PDS expansion connector, called PDS 68030. This connector, however, was not the same one used on the Macintosh SE; therefore, you couldn’t install expansion cards developed for the Macintosh SE on the Macintosh SE/30.
Taking notes and measurements in the field
As anyone even remotely connected to the field of archaeology can tell you, we record EVERYTHING. Note-taking and record-keeping are just as much a part of archaeology as the iconic trowel, perhaps even more so! Archaeologists must keep track of and record as much as possible at the dig site, everything from location, maps and diagrams, weather, time, spatial distribution, artifacts found, soil types, color, and stratigraphy (and even this list is nowhere near exhaustive). All of this seemingly excessive record-keeping is an effort by archaeologists to preserve what we are excavating as best as possible. Archaeology is a destructive discipline: as we excavate, we destroy the very archaeological record we are seeking to understand. Because of that, it is absolutely crucial that we record as much as possible so that we can recreate and study the dig site after excavation. Good note-keeping is also very helpful to anyone looking at and potentially working with a project in the future.
I spent much of the last semester learning the basics of GIS (Geographic Information Systems) as a volunteer with the Campus Archaeology Program. It was my job to go through field notebooks from past projects and field schools and enter all of the data into the GIS. Where the project took place, what was done (shovel test pits or excavation units), who was on the team, when the project happened, and whether or not artifacts were found all go into the GIS, and my work rested entirely on the notes of past Campus Archaeologists, Field School assistants and attendees, and volunteers. Trying to match hand-drawn maps to a physical location on a satellite image of campus takes some practice, and it can be even further complicated when two different maps from two separate people working on the same project contradict each other. Differences in the field journals of individuals all working on the same project made gathering a complete picture of the project and what went on very difficult at times. Oftentimes, though, I had to deal with a lack of recorded data: missing dates, STPs on the maps that had no data associated with them, and no indication of who was excavating. That resulted in a scramble through many additional notebooks from Field School students in hopes of finding the missing data. Piecing together past archaeological projects for present-day digitization is a lot like detective work and, again, relies on the record-keeping of those involved in the project.
This summer, as part of the CAP survey team, I am again in charge of entering all of our projects into the GIS, and I can tell you first-hand that doing it immediately after a project you just participated in is a whole different story. Not only do you have a memory of what went on and where, but being present also gives you some control over the record-keeping for the project, especially knowing that later it has to be entered into the computer. My task became so much easier working from projects that I had worked on within the few weeks prior. After seeing just how troublesome even a couple of small discrepancies in field notebooks can be, I definitely understand how important note-taking is in the field, and that was just from doing GIS work. I can hardly imagine trying to study a past archaeological project that was the victim of poor record-keeping!
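To give a sense of what “entering a project into the GIS” involves, here is a minimal sketch of the kind of attribute record that could back each shovel test pit. The class name, field names, and example values are my own invention for illustration; the Campus Archaeology Program’s actual database schema may well differ.

```python
# A minimal sketch of an attribute record for one shovel test pit (STP).
# Class name, field names, and example values are invented for illustration;
# the Campus Archaeology Program's actual GIS schema may differ.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ShovelTestPit:
    project: str                       # survey or field school the STP belongs to
    stp_id: str                        # identifier matching the field notebook map
    date: Optional[str] = None         # left None when the notebook omitted it
    excavators: List[str] = field(default_factory=list)
    latitude: Optional[float] = None   # matched to the satellite image, if possible
    longitude: Optional[float] = None
    artifacts_found: bool = False
    notes: str = ""                    # discrepancies between notebooks go here

# Example entry pieced together from hypothetical notebook pages:
stp = ShovelTestPit(
    project="West Circle survey",
    stp_id="STP-7",
    excavators=["field school student"],
    artifacts_found=True,
    notes="Two notebooks disagree on map location; flagged for review.",
)
print(stp)
```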
So for those aspiring to be archaeologists, I have one piece of advice for you: develop good and consistent note taking skills!
Michigan State University’s landscape is constantly changing. The area north of the Museum and west of Linton Hall, known as the sacred space, is a great example of this. Although no buildings have been built within this space, the rerouting of the roads from inside the space to outside it was one of the major changes altering the size and appearance of campus. This change, which is suspected to have occurred in the late 1920s, is the focus of one of Campus Archaeology’s current investigations. What we are looking for is how the original road was laid within the sacred space in front of Williams Hall, one of the first dorms.
Photo from the late 19th century of Williams Hall and the fountain; the road and sidewalk can be seen in their old positions. Via MSU Masterplan
Preliminary investigations involved comparing archival data such as pictures and maps. We sought to pin down the location of the road relative to two structures: the fountain between Linton Hall and the Museum, and the Museum itself, which is believed to stand directly on top of the old Williams Hall. You can see in the image below that the road was to the right of the fountain and the sidewalk to the left. Today the sidewalk sits to the right of the fountain.
It became clear that the road followed a curve from the west entrance of Linton Hall to the north side of the old Williams Hall via the north side of the fountain. This is drastically different from the roads and sidewalks we see today.
The old roadways of MSU; the road ran within the Sacred Space among the buildings, whereas today it runs outside the buildings. Via MSU Masterplan
To investigate the location of the road, a test pit was dug in the green space 7 meters north of the northeast edge of the Museum. Recovered from this pit were multiple layers of road materials: a gravel layer, followed by a layer of large river rocks, and then a layer of granite chunks (about 15 cm x 6 cm) and clay. As this was the expected location of the road, the layers of road materials confirmed it. Now we ask the broader questions: “What did this road look like?”, “How wide was it?”, “Where did it curve?”, and “What was it made of?”
To investigate further, we went back to the archives searching for pictures of the road to help identify its composition. Archival research showed that in the past a process called macadam was used, in which “crushed stone surfaces, 6 to 10 inches thick, were merely bound by dirt and clay” (ASCE, 2013). As this older technique was widely used, it is very possible that the lowest granite and clay layer is campus’s old road.
Men laying a macadam road, via Highway Online
Today we open up a section to explore the layering of this area in hopes of answering these questions. If we find that this layer of granite and clay extends farther out, we will be able to confirm that this is the old macadam road, and we will dig further test pits to trace its boundaries.
American Society of Civil Engineers. 2013. “Macadam Roads”. http://www.asce-sf.org/index.php?option=com_content&task=view&id=252&Itemid=79 accessed 5/20/13
Morrill Hall on fire, Fire department attempts to stop it, photo by Bethany Slon
Casual Wednesday night, I was sitting at my friend’s house scrolling through twitter on my iPhone (can you say 21st century girl?) when I saw that the State News had tweeted that Morrill Hall was on fire. I was out of the door and on my way to Morrill faster than you can say that’s-my-favorite-building-on-campus. By the time I had gotten there the flames had been extinguished, but the top of the building continued to smoke, and the caution tape surrounding the area was enough to paper a whole library (that is, if books were made from caution tape). Luckily, as the building has been being prepped for demolition, no one was in the building when the fire started around 7pm, and no one was injured. The roof of the four-story building had fallen through all the way to the first floor, and as of the writing of this blog, the cause of the fire is yet to be determined.
Morrill Hall is, in fact, my favorite building on campus, and since doing research on it as an intern for Campus Archaeology in the fall of 2012, I’ve grown quite attached to it. If you’ve read any of my other blogs, you’ll already know that Morrill Hall was built in 1900 and was then called the Women’s Building. It was the first dormitory for the women of Michigan Agricultural College (later to become Michigan State University), and it included everything from bedrooms to culinary and woodshop classrooms to even a two-story gym. Eventually the number of women enrolled in the college far exceeded the number of dorm rooms available in the Women’s Building, and the name of the building was changed to Morrill Hall and the rooms were converted into offices and classrooms, most recently those of the English and History departments. The women were placed in other dorms, most in what are now called the West Circle dorms, but Morrill Hall continued to thrive through use by professors and students.
Demolition at Morrill Hall beginning with the stairs, Photo by Katy Meyers
Unfortunately, Morrill is no longer what it used to be – one hundred and thirteen years has really taken its toll on the building. The floors are sagging, and before the departments were moved to other locations on campus, professors had to line the walls with books to ensure that there was equalizing weight on the floors. In 1990 the ceiling of the first floor collapsed into the basement, and that was only the beginning of the building’s unfortunate demise. The ceilings leak, the ventilation is extremely poor, asbestos can be found around every corner, and the number of bats that fly around the building when the sun goes down is enough to give anyone a spook. As far as the university could see, there was no other solution than to demolish the building, which was scheduled to happen early this June.
However, since the fire, demolition has been postponed. Until officials know what caused the flames, the building is being treated as a crime scene, meaning our work with Campus Archaeology on the Morrill Hall front is also postponed. Personally, I think it’s sort of fitting that some of Morrill’s last moments were spent on fire. Many of the other original buildings and dorms on campus have also gone up in flames: Williams Hall in 1919, Saint’s Rest in 1876, Old Botany Building in 1892, the original Engineering building in 1916, and the original Wells Hall in 1905.
Engineering Building on fire in 1916, via MSU Archives and Historical Records
Don’t get me wrong, it was incredibly sad to stand there and watch the beauty that is Morrill Hall smoke for over an hour, but it’s almost as if there was something bigger going on last night – a pattern of some sort that the campus continues to uphold. I realize that sounds sort of morbid and strange, but hey, there’s some truth in it.
If you’re in East Lansing for the summer, be sure to take a walk by the red-bricked building and say your last goodbyes. Morrill – you’ve held up strong for over a century, and we’ll sure be sad to see you go.
We’ve been out doing our first two weeks of excavation at Jenison Field House and within West Circle Drive. So far we’ve found a number of interesting artifacts, including an old gin bottle from Brooklyn and a layer of burnt bricks possibly related to the Old Williams Hall. Before we get too far into the season, here are some introductions to our summer team!
Bethany, Josh, Katie and Marie from right to left at Jenison Field House (Katy out of the frame because she was taking the photo!)
Katy Meyers: I have been the Campus Archaeologist for two years, and this will be my last summer in this position. Over the past two years heading up the CAP teams I have excavated across the campus, gotten to do a dig at the first dormitory at MSU (Saints Rest) and excavated the Morrill Boiler Building found under East Circle Drive. In addition to this, I am currently a 3rd year PhD graduate student in Anthropology at MSU, and my research focus is on bi-ritual cemeteries in the UK. I got my start in archaeology through video games like Tomb Raider, and summer trips to my parent’s cabin where I got the chance to run up and down a gully finding fossils and early 20th century artifacts from the early cabins in the area. While my research does focus on cemeteries and funerary processes, I have done work on a number of historic and prehistoric sites throughout the Midwest and Northeast. I have truly loved being part of Campus Archaeology because it allows me to add to the history of MSU, and help create connections between the current and past campus.
Katie Scharra: I am a recent graduate of Michigan State University. Originally, I began a program in Microbiology. After travelling to Europe during my sophomore and junior years and exploring different cultures, I had a change of interests. I wanted to look for an academic program that took my interest in science and applied it more culturally. This brought me into the Anthropology department, where I began to study mortuary archaeology. In the future, I would like to apply both my microbiology and anthropology degrees with a PhD in Bioarchaeology. In order to gain experience in field methods and to keep up my archaeology skills during my current gap year, I joined the Campus Archaeology team. Over the past year, I have worked on a few digs across campus and worked with the artifacts. In the spring I was involved with cleaning and interpreting the artifacts recovered from the October 2012 excavation of Saint’s Rest, the first dormitory on campus. During this project, a partner and I organized the finds into a classification based on use (i.e. home goods, school items, building materials). This allowed us a more realistic look into the lives of the first Spartans. We presented our findings at the 2013 University Undergraduate Research and Arts Forum. This summer I am looking forward to continuing investigation into the changing landscapes and lifestyles of campus.
Bethany Slon: I am an undergraduate student majoring in Anthropology, and this fall I will be starting my senior year at Michigan State University, anticipating graduation in December. I started working with Campus Archaeology in the summer of 2012 as a volunteer, and the following fall semester I began work as an intern under the direction of Dr. Goldstein and Katy Meyers. My research involved looking at the early years of the Women’s Building (now called Morrill Hall) and gathering information about the first female students who lived in this dorm. The MSU archives was very useful with my study; they provided me with scrapbooks made by the female residents of the Women’s Building, in addition to maps, photos, and plenty of other information. I eventually presented this information at the University Undergraduate Research and Arts Forum, linking it to Campus Archaeology and what the demolition of Morrill Hall means to us. This summer I am working again with Campus Archaeology, this time to monitor construction and make sure nothing of historical or archeological value is destroyed or missed. I eventually want to become a bioarchaeologist, specializing in Central American locations. I’ll be attending MSU’s Dr. Wrobel’s field school this summer in Belize, where I will be doing research on caries of the ancient Mayan population that used to live there, giving me both experience and knowledge I’ll need for the future. Graduate school is also in the plans for me, though where I’ll be going is yet to be decided. Archaeology has always been a passion of mine, and I am lucky to have found this experience with Campus Archaeology, both to broaden my skills as an archaeologist and to do what I love.
Josh Schnell: I am a freshman here at MSU, majoring in Anthropology and Religious Studies, with a specialization in Latin American Studies. I have been working with Campus Archaeology since February of 2013 when I began an internship learning how to use Geographic Information Systems (GIS) software in an archaeological context. This summer, as a member of the Campus Archaeology Survey Team I will be digging during and monitoring various construction projects to ensure our campus’ cultural heritage is not lost. I am an aspiring bioarchaeologist with a strong interest in mortuary practices, and I also volunteer in MSU’s bioarchaeology lab. A strong fascination with ancient cultures is what first drew me to archaeology as a potential career in middle school, and ever since then I have been dedicated to protecting, investigating, and educating others about our past. As President and Webmaster of the Undergraduate Anthropology Club at MSU, I have a strong interest in building a social foundation and creating an environment where other anthropology students can learn, collaborate, and help each other. I hope that through working with the Campus Archaeology Program this summer I will gain experience in conducting Cultural Resource Management work in the field, as well as expand upon general archaeological field skills.
Marie Schaefer: I come to the Campus Archaeology Program from a more cultural anthropology background. However, I have always thought that to be a good anthropologist you need to have at least a basic understanding of all the subfields of anthropology (cultural, archaeological, linguistic, biological). This is especially true if you are going to be working with any Native American tribes or conducting any applied anthropological projects in which you might be working with anthropologists and others from all different backgrounds. As a result, I have searched out opportunities to gain an understanding of the different perspectives of anthropology. After graduating from Eastern Michigan University with a BS in anthropology, I went to Northern Arizona University for my master’s, where I had the opportunity to conduct a needs and asset assessment with Hopi women for the Hopi Cultural Preservation Office on why Hopi women’s traditional knowledge is not being passed down to the next generation and suggestions on how to stem the tide of this knowledge loss. Currently I am in the PhD program in anthropology at Michigan State University with a very applied focus; my work examines how indigenous knowledge and Western scientific knowledge can be integrated in order to assist in the creation of sustainable futures for indigenous people. The CAP program offers me a unique opportunity to not only learn more about the amazing history of a land-grant university but also to gain a deeper understanding of the work of archaeologists in order to serve as a bridge between tribes and archaeologists.
This academic year has been enlightening and challenging for me. I dove into continuing a specific project that explores the heart of campus at MSU. I used archival evidence to glean the social, structural and spatial landscape of campus throughout the four time periods of the first 100 years of campus. Using scrapbooks, administration correspondence, and annual reports, I analyzed the changes in campus over time, how different buildings were used, and how these buildings represent where MSU was developmentally and in social context with the rest of the state and country. For each time period, the spaces selected to represent the center of campus were: 1855-1870, College Hall and Saints’ Rest; 1870-1900, the Sacred Space; 1900-1925, the Red Cedar River; and 1925-1955, Beaumont Tower.
Sabrina presenting the poster she created with Katy at the Graduate Academic conference in February 2013, via Katy Meyers
I was able to work with Katy Meyers to create a poster that outlined the archaeological and archival evidence for these choices and presented it at the Graduate Academic Conference here at MSU. The poster gathered attention and praise from various graduate students and visitors, and was judged very highly. It was a great way to allow others to visualize the expansion of campus over time and what events propagated the growth. It also invited viewers to chime in on where they experience the heart of campus today, which gleaned a variety of results, perhaps demonstrating that the diversity on campus may allow for several “hearts” of campus.
My next task is to sort through all of the data I submerged myself in and try to make sense of what these spaces say about MSU in general and how they indicate who we are today and where we are going. I will continue working on my final report that supplements a previous paper written by a CAP student and will expand on the poster we presented. I will also input all of the scrapbook data into our database, which will hopefully allow for future CAP fellows to easily survey the types of evidence housed in the archives.
Participating in Science Festival was another big project this year; I was able to be an ambassador for the archaeology program and Campus Archaeology to some young and ambitious junior high school students. It was invaluable to utilize that avenue to reach the community and inform them of all the things CAP is involved in on our historic campus. Hopefully events like this will draw more involvement from future students and community organizations in all of the important work we do.
I was lucky to be able to participate in surveys on campus, something I may never have been able to take part in. These really grounded me in a sense of place on campus which often times felt enormous and contributed to my analysis in my project.
Though I am not an archaeologist, this year provided me with diverse experiences and methodology that I can perhaps utilize in my future research, and all of the projects enriched my learning and graduate experience. I want to thank Dr. Goldstein for her guidance and vision as well as Katy for all of her support and ideas. It was a pleasure to work with you and each of the CAP fellows this year.
I spent my year working on the sustainability project with a specific focus on using University Archives materials to understand food and transportation on the historic campus. Through pamphlets, diaries, newspaper clippings, photos, reports, and ledgers, I pieced together information about early student experience in MSU’s beginning years. Much of the archives research required locating documents that were tangentially related to the project in order to track changes over time.
Two male students in dorm at Old Wells Hall, 1900, via MSU Archives and Historical Records
For instance, I looked through years of brochures from the early 1900s advertising the annual state farmers’ meetings held on campus. In each of these, food, housing, and transportation options for visitors would be listed. As the years went by, food options on campus expanded to include mentions of restaurants on Grand River Avenue. Boarding choices in the earlier years were limited to home stays or college dorms, whereas later years referenced hotels available on the trolley route from Lansing to East Lansing. Transportation prices rose slightly to accommodate, presumably, the growing dependence on trolley cars. From ledgers kept by the agriculture and dairy departments, it is possible to document changes in food prices (and demand for food types) through time. Fortunately, Dr. Manly Miles kept a thorough ledger noting all sales and expenditures for the agricultural college from 1867-1877.
I believe the most interesting finding was in the local and state reaction to the college in the early years. Since the university is so entrenched in the community now (and because I only have the experience of a modern student), I assumed that the college had always been supported by the local and state population. Through diaries and personal accounts, I learned that state farmers and government leaders had been quite wary of the institution, even at times hoping for and predicting its eventual downfall. The hard work of the early students and professors who split their time between academics and manual labor ensured the success of the college. As wars took their toll on college-aged men, the university adapted to national needs and supported the war effort.
The sustainability project has allowed me to pursue many leads at the University Archives, sometimes resulting in an exponential growth of research questions. As I try to rein it all in, I have found that the most relevant source materials are personal accounts. Reading handwritten documents from MSU’s first students has been a thrill, and I look forward to continuing this project.
Day of DH Logo, via MATRIX
The Day of DH is a national celebration of the range and variety of people, projects, and groups involved in digital humanities (DH). This year the event is hosted by MSU’s own DH center, MATRIX: The Center for the Digital Humanities & Social Sciences. It is a community-sourced online publication and project to bring together scholars interested in DH. This year, Day of DH is taking place today, April 8th. Participants answer questions about what digital humanists do and how they work together, and the event provides them a chance to document their activity on this one day.
You can follow along today by visiting dayofdh2013.matrix.msu.edu or through Twitter with the hashtag #dayofdh.
Campus Archaeology is going to be participating through the blog, facebook, and twitter, so make sure to follow us on our day of DH! We use a number of digital resources to aid in our research and surveys, but also to communicate with the broader public.
Today, in celebration of the Day of DH, we are going to be working on two projects that will aid with preserving the archaeological heritage of MSU. Our intern Josh is working on adding all our archaeological surveys and excavations to a geographic information system, and I will be working on updating our OMEKA museum website. Stay tuned for updates and photos on Twitter and Facebook throughout the day!
Update from our #DayofDH
8:00am EST: Good morning!
8:15am EST: Working with Katie to get one of the undergrad posters done for the upcoming UURAF, a symposium for undergrads to show off their research. Their poster is on classifying the Saints’ Rest material we excavated in the Fall. It is interesting to see all the artifacts and what they’ve learned from them. When we make posters, we actually use powerpoint to design them, and then save them as large PDFs. It is an easy way to make posters because it is drag and drop, and can be set to the specific large size of the poster.
8:56am EST: Finished with draft #1 of the poster.
9:45am EST: Last year, we designed an OMEKA museum site for Campus Archaeology. We haven’t fully used this program- partially because there is so much to do and partially because I still have problems sometimes making it work properly.
Today my goal is to finally add spatial data to the artifacts. This means assigning a geolocator (longitude and latitude) to every artifact we have online. Shouldn’t be too hard since many are located in the same area! Check out the progress here at campusunearthed.matrix.msu.edu/.
11:00am EST: I was able to update a dozen or so of the items on the OMEKA with their geolocation, and it seems to be working pretty well! In addition to this, Katie has been able to get the next draft of her poster done and added in some sweet photos of the Saints Rest collection from the 2012 excavation. However, if you want to see that work you’ll have to head over to the undergrad symposium this friday at the MAC Union!
Thinking about DH and CAP: Here at Campus Archaeology, digital tools are integrated into every stage of our workflow – it is inescapable, but in a good way. At every stage of the work we do there is a strong digital and analog component. Any dig we begin starts with research online and in the archives. We investigate GIS maps, both our own and the one created by the university’s Physical Plant, so that we can prepare an excavation or survey plan. As we learn more about the work we are going to be doing, we share this information through social media and our blog. Once we are out in the field, we tweet and photograph everything that we are doing. When the dig is complete, we catalog every artifact into a database, add the new excavation data to our GIS, and write up everything for our blog. It is incredible that even as a discipline so concerned with the past, our methods and techniques are constantly being updated with new technology. But what does this have to do with the digital humanities? If you followed along with others on their Day of DH, you know that it is an inclusive and highly varied field that is loosely based on the intersection between digital technology and humanities-related disciplines. DH is exciting, not because it is finding new ways to display data or share information, but because it is based on values of innovation, engagement, and community interaction. We at Campus Archaeology are committed to these – we are always searching for ways to improve our research and better share our findings through new digital tools like OMEKA, we strive to engage with people on a number of both digital and analog levels through social media and engagement events, and we are dedicated to interacting with the MSU, East Lansing, and broader community interested in learning about and preserving history.
Carefully look at this map of MSU’s campus from the 1880s.
1880s Map of MSU, via MSU Archives and Historical Records
There is a dark black line running from East Grand River Road into the Sacred Space, and then it turns into a squiggly line that goes all the way into the Red Cedar River. That was once the brook that ran through the middle of campus. The dark line is a drainage system that was meant to aid in draining the swamps north of East Grand River Road. The little brook was important to keeping the swamp areas from flooding and also helped direct wastes into the Red Cedar River. Of course, today there is no brook running through the Sacred Space. So what happened to it? This is the question I’ve been trying to answer the past week. Thanks to Whitney from the MSU Archives and Historical Records I have a couple answers.
Bridge from Chemical Lab to Botany, 1884, via MSU Archives and Historical Records
Other than the brook being present on maps, it is mentioned in a few historical documents that help us determine where it was located and what happened. In Beal’s (1915) history of the Michigan Agricultural College, he notes that in 1877 the botanic gardens were created, and were located in a ravine northwest of the greenhouses (once located at the SW edge of the Library) and north of the Red Cedar on the banks of a brook. From the Michigan Board of Agriculture Report 1880, Beal reports that there was a footbridge that crossed this ravine from the Chemical Lab (which was located where the fountain in front of the Library currently is) to the Botany Lab (which was located just east of IM West). It was a fairly large bridge, 16 feet wide with five piers supporting it. Pictures of the bridge show that it was primarily meant for the ravine, since the brook is barely visible. In 1884, when Abbot Hall was constructed (now the location of the Music Practice Building), it was determined that this bridge wasn’t sturdy enough. The soil removed from the basement of Abbot Hall was used to fill in the ravine where the bridge was, and the brook was directed through cement drains. So now we know when the ravine was filled in by the roadways, but not when the brook vanished.
Small Bridge in the Botanical Gardens over a Brook, via MSU Archives and Historical Records
We know from both Beal (1915) and Darlington (1929) that the brook and river would often flood the gardens. From 1904 to 1910, Beal raised the level of the garden from four to five feet to prevent the high waters from destroying the garden. Beal (1915:254) wrote “Most perplexing of all, was the habit of the Cedar river in overflowing its banks and covering most of the garden with water, for three to seven days at a time and if this freshet occurred during the growing season, two or three hundred attractive plants are killed outright. To overcome this difficulty a section at a time during six years was raised from one foot to five feet or more.” Due to these alterations, “the brook now flows under ground through a cement tunnel for nearly four hundred feet” (Beal 1915:254). So we now know that the brook that once ran through the garden was still there, but was underground.
There are reports from 1874 and 1890 that sewage from North campus often flowed through this ravine into the river. As more of the brook was placed in culverts and drain pipes, it was increasingly used for sewage. In 1927, East Lansing determined that a proper sewer system needed to run through campus to prevent pollution. According to various newspaper clippings, alumni were up in arms, since the sewer plan involved the destruction of a portion of the Beal Gardens. A compromise was made, and it was decided that the new sewer system would run through the pipes of the old brook. By 1929, this plan was enacted, and the brook is no longer evident on campus maps or garden maps. According to Forsyth, however, there are drain covers still evident in the gardens, and during periods of snow melt the brook’s path can still be seen: the green strip of garden above the drain melts first.
In the upcoming summer, construction will begin on West Circle Drive along the area that once was the ravine and bridge. During this, we hope we will be able to document exactly what happened to the brook by examining the soil stratigraphy of this area!
Beal, WJ. 1915 History of the Michigan Agricultural College. MSU Archives UA 943. LD 3245.M28 B4
Darlington, HT. 1929 Letter to President Shaw Regarding the Beal Gardens. MSU Archives Beal Botanical Gardens 1925-1932. F 17. B 37. C UA 2.1.12
Thank you to MSU Archives for all their help!
I am still working on the sustainability project, which seems to have generated endless research questions. As I try to rein it all in, I have been writing about a category that I have blandly termed “Student Life” in my draft. This is the catch-all portion for the interesting factoids I come across in the University Archives. Somehow I will assimilate this information into a working draft, but for now I will share what I have learned below:
In the early days of the college, all students attending the college were required to split their days between labor and academics (T. Gunson, 1940). Through manual labor in the gardens and farms, as well as clearing land for buildings and roads, the student body effectively constructed the foundations of the institution while receiving their education.
In 1871, student Henry Haigh reported a fee of $29.95 for boarding at Saint’s Rest. Haigh journaled about the atmosphere in the dining halls which were structured by assigned seating. He mentioned the presence of women in the halls, though the ratio of men to women was still quite unequal at this time.
Engineering Lab on Fire in 1916, via MSU Archives
During October 1871, the year of the Great Chicago Fire, there were numerous raging fires in the woods around the new campus and across Michigan. Students were dispatched to fight the blazes along with seminal faculty members, Dr. Miles and Dr. Kedzie. Many people lost their lives and homes, especially in the thumb region of the state, but the college was spared due to the management of the students and their vigilance against the fires. Drs. Miles and Kedzie would divide students into groups to battle the blazes through the night, a task compounded by the water shortage from an ongoing drought. Classes were largely cancelled for a week while students joined with neighboring farmers to keep watch over the advancement of the fires. Haigh noted that many students knew how to combat fires and dense smoke, having experience with managing agricultural lands on their family properties. (Sidenote: if anyone has any information about the fire outbreaks during this time period, please share! I am curious as to why there were so many fires in Michigan at this time, though I presume it is due to dry environment).
Faced with declining enrollment numbers, President Snyder (1896-1915) personally corresponded with potential students and advocated the incorporation of promotional literature and calendars into the college’s recruitment plans. As a result, student enrollment increased during his presidency (though the onset of World War I drew students to combat soon after he stepped down). President Snyder encouraged the training of women at the college through a series of short-course programs. During his term, Snyder also helped initiate summer courses and railroad institutes. All of these programs lent the college credibility in the eyes of the state population, as MAC faculty members traveled to rural areas of Michigan to give lectures and perform demonstrations for farmers. In an effort to appear relevant and indispensable to the state, the college also enacted county extension programs.
Frank Kedzie, President of the college from 1916-1921 during the turbulent war years, resigned in the wake of weak post-war enrollment growth. A change in leadership was thought to be needed to reignite admissions, so leadership was passed to President Friday in 1921. Friday was an economist and agriculturalist hired to solve the issues stemming from the national war effort. State farmers were suffering during the post-WWI depression. During his administration, Friday endorsed more liberal education programs, allowing engineering students to pursue liberal arts courses in place of some more technical class requirements. President Friday spearheaded the effort to grant PhDs, with the first degree conferred in 1925.
You may have noticed that the area around Michigan Avenue from Harrison Road to East Grand River Road is completely covered with construction equipment, orange cones, and various people in neon yellow. Within a half-mile radius there are three different construction projects occurring, two of which will take place on portions of MSU’s campus. Over the next few months, Michigan Ave between Harrison and Grand River, the Beal Street Entrance to campus, and portions of West Circle Drive will be removed for various reasons. The construction began this week, and we were out there bright and early Monday morning to discuss the projects and monitor the initial progress.
Tomorrow we will begin to survey one portion of the Michigan Ave project; the green space and sidewalks around the Beal Street Entrance to campus. During the survey we will be digging shovel tests so we can get a sample of what the area is like, and determine if it requires further archaeological investigation.
In order to determine the historic significance and potential of discovering archaeological sites, we first look at maps to see what has been located in this area and how it has changed over MSU’s history. A map drawn in 1959, but based on historic sources, recreates what the campus would have looked like in 1857 when it was first opened. We can see the area under investigation was forested, and the road that was present at the time appears quite similar in direction and pathway to the current road.
Map of Campus in 1857, dating to 1959, via MSU Archives and Historical Records
However, a map from 1870 shows that there was no road in this area, and that it was simply forest. This could mean that there was no large main road allowing access, perhaps a smaller path that didn’t warrant placement on the map, or that the 1959 reconstruction map of 1857 was incorrect about accessibility in this area. By the 1890s, though, it is clear from maps that a road definitely existed in this area. More research needs to be done to determine what was actually in this area, how it has changed, and what we might possibly find. The survey will also help us determine what is in this area.
From historic sources, we know that this road would have led to Michigan Ave and Collegeville, a residential area founded in 1887 by Beal and Carpenter. As this area became more populated, the entrance under investigation would have been used more. By the 1920s Collegeville was full of inhabitants. However, it appears the Beal Street Entrance area itself has been fairly vacant throughout history.
Feel free to come out to the site and visit us tomorrow! It may be a little cold, but the sunshine should help. We will be working at the site from about 8 to 10am, and would love some visitors!
Tirunelveli – the biggest of the Dioceses that formed the Church of South India in 1947, is the consummation of the labours from time to time of Missionary Societies in the West – SPCK, CMS, SPG. For very long the name 'Tirunelveli' has been known all over Christendom as that of a field most congenial to the sowing of the Gospel and very responsive to the missionary effort. Surveying it in 1857, Dr. Caldwell wrote with just elation, "there the eye and heart … are gladdened by the sight of the largest, the most thriving, and the most progressive Christian community in India."
Like the rest of the country, this region came to be exposed to Christianity, which accompanied the European powers who came to trade with India. The earliest were the Portuguese. St. Francis Xavier, though he came with royal authority and as the Papal Nuncio to India, chose to work his own way into the hearts of the Indians. He did very effective work among the fisher folk on the East Coast of Tirunelveli around Tuticorin. He was followed by Robert Nobili, John de Britto and Joseph Beschi. Unfortunately, the great Jesuit order was suppressed in 1775 by Pope Clement, leaving the good work done by it high and dry.
Moreover, Catholic Spain and Portugal lost their supremacy at sea and Protestant countries like Britain, Holland and later Germany emerged as the great maritime nations of the earth.
The British East India Company was formed in 1600, followed by the Dutch East India Company in 1602. Even while competing for the Indian market, they both carried there with them all the venom with which their countries fought at home the Catholic Spain and Portugal. In 1658 the Dutch captured Tuticorin from the Portuguese, expelled the Catholic Fathers from the Fishery Coast, and tried re-converting their adherents. While the Dutch turned many Roman Churches to warehouses, they built an extremely plain but massive Church at Tuticorin, which still stands as the sole relic of an interesting though passing phase in the history of the Tirunelveli Church. Over the porch of this solid structure is inscribed the monogram of the Dutch East India Company (VOC) with the date MDCCL (1750). This is therefore the oldest Protestant Christian Church in Tirunelveli District. With the fluctuations of political fortune it changed hands between the Dutch and the British until it was ceded peacefully by the Dutch to the British on 1st June 1825, with one condition that it should not be named after any Christian Saint. It was named “Church of Holy Trinity” as late as in 1959.
The Dutch Mission in Tirunelveli was rather a passing phase. But of a more abiding and effective character was the thrust into Tirunelveli by the Danish missionaries already stationed at Tranquebar. The earliest among them – Ziegenbalg and Plutschau – had landed in 1706 and had been doing excellent work all over South India. Their Journals, reading very much like chapters taken out of the Acts of the Apostles, appeared translated into English; and soon they aroused great interest among the English people. With timely reinforcement by SPCK funds, they were able to extend their missions to places like Trichy, Thanjavur, Cuddalore, Nagapattinam, Madras and then to Tirunelveli. Some of Schwartz's able SPCK catechists (Savarimuthu, Rayappan, Gnanapragasam and Savarirayan) frequently visited Tirunelveli and prepared the ground. It was during his first visit to Tirunelveli (1778) that Schwartz baptized Clarinda, a Maratha Brahmin attached to the Tanjore royal family, married to an English officer from whom she learnt of Christ. Her name heads the list of names in the Tirunelveli Church Register. Before Schwartz's second visit to Tirunelveli in 1785, Clarinda, mostly at her own expense, had erected a small but substantial church that still stands in Palayamkottai. Schwartz dedicated it in 1785 and appointed the ablest of his catechists, Sathianathan (to be ordained later in 1790), to be in charge of the new congregation.
Rev. J.D. Jaenicke was the first Tranquebar missionary to reside in Palayamkottai and supervise the work done in Tirunelveli from 1791. In 1799 was formed the first purely Christian settlement in the district with 28 Christians – Mudalur (first village), on the initiative of David, the first convert of the place, and with the financial assistance of Captain Everett, a friend of the SPCK in Palayamkottai. The first small church built by the early Christians was burnt by the non-Christians in 1803. This resulted in a church being built with brick and mortar in 1816, which was renewed and extended by Rev. Norman in 1883. Its magnificent tower (202 ft high and the highest among the church towers of the Diocese) was added in 1929.
Gericke and Kohloff were in nominal charge of SPCK mission in Tirunelveli after Schwartz and Jaenicke. The field work was left to the 30 SPCK catechists. The terrible famine of 1810-1811 took a heavy toll of the Christians, and the opposition to Christians stiffened everywhere. It was then that Rev. James Hough, who became the Government Chaplain in Tirunelveli in 1816, held the breach so valiantly.
The two mainstreams
"I planted, Apollos watered; But God gave the increase." (I Corinthians 3:6)
What St. Paul said of the early Church is remarkably true of Tirunelveli as well. Schwartz planted it and left it to the care of Sathianathan and his band of catechists. The sapling Church was watered by the SPCK missionaries operating from Tranquebar and Tanjore, and at a time of drought saved by Rev. James Hough – providentially posted then as Army Chaplain at Palayamkottai. From the twenties of the 19th century there flowed in two main streams to water this promising field. One was the Church Missionary Society (CMS) which rushed its men to Tirunelveli in 1820 in response to a SOS from Hough. The other was the Society for the Propagation of Gospel (SPG) to which SPCK transferred its field in 1825. Together they took charge of all further expansion of the Tirunelveli Church unit until it blossomed into a Bishopric in 1896, and the two missions themselves merged their fields into a single Diocese in 1924.
Of the two missions CMS had an early start. On the day its first missionary, C.T.E.Rhenius (5.11. 1790), set foot in Tirunelveli (7th July 1820) The Church in Tirunelveli might be said to have come into its own. Acquiring for CMS the valuable property which Hough had purchased from Vengu Mudaliar in 1818 to the south of the main road in Palayamkottai, Rhenius and his assistant Schmid, soon got entrenched in a strategic complex from where they began to operate their mission. The first CMS congregation in Palayamkottai (Murugankurichi) came into existence on 10th March 1822, and the earlier SPCK congregations gradually got merged with the CMS congregations. Even the ones like Nazareth were entrusted by the SPCK to the care of Rhenius until the SPG could find the manpower to take them over in 1829. In 1824 Rhenius purchased from his Hindu friend and philanthropist, Vengu Mudaliar, for a concessional price of Rs. 750, the valuable property to the north of the High Road in Palayamkottai. Shifting the Seminary across the road to the newly acquired campus, he planned and built on the land so released a church which, with its lofty steeple added by Pettitt in 1845 and its several extensions from time to time, stands today as the Holy Trinity Cathedral, an imposing landmark in the whole district.
Operating from Palayamkottai, Rhenius touched a number of villages all over the district and planted small congregations (Sattankulam 1823, Neduvilai / Megnanapuram 1825, Idayankulam 1827, Asirvathapuram 1828, Nallur 1832, Surandai 1833). Where the early Christians met with persecution, Rhenius helped to colonise them in safe Christian Settlements. Thus he colonized in 1827 the Christians of Puliakurichi in a village he purchased out of money donated by a devout Prussian gentleman, Count Dohna of Scholodin, and named it after him as Dohnavur.
It was just when Rhenius stood at the crest of his missionary career that there burst out an unfortunate schism in the Tirunelveli Mission of the CMS. His health began to fail under the tension and strain caused by it, and on 5th June 1838 at 7:30 pm the Apostle to Tirunelveli quietly entered into the presence of his Lord and Master. By intense and systematic work Rhenius had set up as many as 371 congregations in Tirunelveli all within 15 years, which made Dr. Wolf, the great Jewish missionary – who came and stayed with Rhenius for a week during September 1833 – regard him as the greatest missionary who had appeared since St. Paul. His grave in Adaikalapuram, just a few yards off the national highway, is being treasured as the resting place of the most restless of the missionaries who ever came to India.
The man chosen by the CMS to take the place of Rhenius was Rev. George Pettitt, who is best remembered as the builder of the stately steeple on the Holy Trinity Cathedral (1845), besides some solid churches in outstations like Alwarthirunagari (1846), Dohnavur (1847) and Pannaivilai (1847). He also founded the renowned Anglo-Vernacular School of Palayamkottai in 1844 with an eminent Eurasian educationist, William Cruikshank (who was blind from the age of ten), as its headmaster. This was the forerunner of the later C.M. High School and College and the model for every Christian boarding school to follow. Pettitt was also the first to set up a Theological Seminary, whose classes were held in an airy room of the newly built tower of the Holy Trinity Cathedral, Palayamkottai.
By this time CMS had wisely decided to adopt the "Station Missionary System". Its missionaries were to be located at strategic centers, each corresponding directly with the Madras Committee. Pettitt, as the senior-most among the new missionaries, was put in charge of the mission at the headquarters, Palayamkottai. Next in importance was Megnanapuram, which developed under the fostering care of Rev. John Thomas. He built the massive and imposing church, capped it with a stately spire (192 feet high) designed by the London architect Hussey, and dedicated it on 9.10.1868 in the presence of Lord Napier, Governor of Madras, who acclaimed the church as the noblest he had seen in India, surpassing in beauty even St. Paul's Cathedral, Calcutta. Rev. John Thomas also founded the Schools for Boys and Girls, which have flourished ever since and been models for similar rural boarding schools which followed at Dohnavur, Sattankulam, Pannaivilai, Nallur and Surandai. Operating from Megnanapuram, Rev. John Thomas spread the church to a network of villages around, the most important of them being Vellalanvilai, the native place of the renowned Bishop Azariah.
Yet another station so carefully cultivated by CMS was Pannaivilai which was out and out a product of Rev. J T Tucker, who laboured there for 20 years, baptized 3000 converts and built 60 simple churches. Somehow both CMS and SPG concentrated on the east of Tirunelveli district. CMS however, penetrated to the west, south and the north as well. Nallur and Surandai in the west, were developed by Schaffter, Hobbs and Barenbruck. Dohnavur in the south was another fruitful field of the labours of the indefatigable Walker, with whose cooperation Amy Carmichael founded the renowned Dohnavur Fellowship.
All the time Palayamkottai was being developed into a powerful CMS headquarters, with as many as 35 missionaries stationed there in 1892 (the highest for CMS centers in the world). As an educational agency CMS made greatest impact on the community. The CMS College for men was set up in 1880, and the Sarah Tucker College for women (the first college for women in Madras state) in 1896 with just 4 students on the roll. Apart from the traditional schools, CMS started in Palayamkottai two very unique and pioneer institutions – the School for the Blind (1890) and the Florence Swainson School for the Deaf (1887). As adjuncts to its educational effort, CMS founded in 1847 its own Printing Press and CMS Book Depot in 1882. Both these ventures have survived and grown with the times.
Later day missionaries brought their own special gifts to the service of the Tirunelveli Church. Rev. Scott Price instituted in 1892 the Tirunelveli Children's Mission which has steadily grown into a huge movement moulding thousands of children in their impressionable age, by a systematic study of the Scriptures, to become committed Christian citizens. Walker introduced the Harvest Festivals as something congenial to Indian mind, corresponding in a way to the Hindu "melas" or festivals. The first Harvest Festival was held in 1891 at Sachiapuram. Several centers copied the Festival – Nallur in 1892, Palayamkottai in 1895, Pannaivilai and Surandai in 1896. In January 1880 CMS and SPG joined hands in celebrating the first Centenary of Tirunelveli Church.
The last of this long line of CMS missionaries and one who strode like a Colossus for over half a century was Edward Sargent. He came to Tirunelveli on 7-7-1835 as a 19-year old lad Lay Catechist to assist Pettitt. Except for a short spell of 4 years when he went for training in Islington, his life was devoted to missionary work in Tirunelveli, where he filled by turns every conceivable position until he was consecrated on 11.3.1877 as one of the two Assistant Bishops of Tirunelveli (along with his SPG counterpart, Bishop Caldwell) at St. Paul’s Cathedral, Calcutta. For another 13 years Bishop Sargent led the Tirunelveli Church by his wise counsel and gentle but firm guidance. Straining himself in failing health, he died in 1890. The void felt by his passing away, as also that of Bishop Caldwell in 1891, hastened the move for a separate Bishopric for Tirunelveli. It took some time, however for the legal hurdles to be overcome and the long cherished dream could materialize in 1896.
SPG (1829 – 1896)
The Society for Propagation of the Gospel in Foreign Parts had spearheaded its operation to India by 1820, established the Bishop's College at Calcutta for the training of its personnel (15.12.1820) and chosen strategic stations for work. The SPCK on the 7th June 1825 resolved to hand over its South India Mission to the SPG, Tirunelveli being one of the 'transferred congregations' of the Tanjore Mission. Accordingly, Rev. David Rosen, then at Cuddalore, came to Palayamkottai on 6th November 1829 and took charge of the SPCK mission till then looked after by Rhenius on behalf of the Tanjore Mission. He began well, reviving the former SPCK stations of Nazareth and Tuticorin, and set to work vigorously, appointing some inspecting Catechists on the model of Rhenius in CMS. He was given two missionaries by SPG to assist him – Rev. J L Irion and Rev. Charles Hubbard. Neither of them stayed on in the work, and Rosen himself left the country in 1838, leaving the SPG Mission in a sorry plight.
It was then that Rev. Caemmerer (then at Madras) stepped in and by his devoted labours for two decades raised Nazareth to its pre-eminence in SPG Mission. The Church rapidly extended to dozens of neighbouring villages. Together they provide a splendid pattern of well knit Christian community. The visitor approaching Nazareth today is greeted by a heart-warming sight of spire rising over spire even in some humble villages, and by the chiming of Church bells in unison, inviting their faithful to worship. Pillayanmanai, Agapaikulam, Valaiyadi, Mookuperi, Pragasapuram, Oyangudi have all got some very imposing churches to show worthy monuments to the depth of the Christian life and witness among the rural folk of this area.
Among the successors of Caemmerer mention must be made of Rev. Dr. Strachan (1870 – 1876) a brilliant Doctor of Medicine and a Gold Medalist of Edinburgh University. He was the founder of the SPG Medical Mission in Tirunelveli and the first Dispensary he opened in Nazareth drew patients from places 40 to 50 miles away.
Already developed into a model Christian settlement, Nazareth was to attain still greater eminence during the long and dedicated stewardship of Rev. Canon Arthur Margoschis (1876 – 1908), the maker of modern Nazareth. He was a young man of 24 when he came to Nazareth, and he plunged into his work, strengthening the existing mission and adding new dimensions to its work. He initiated the Nazareth congregation into several time-honoured habits of exemplary Churchmanship, still so faithfully retained. It was, however, the various institutions that he founded and perfected in his time that have endeared his memory to posterity. He erected St. Luke's Hospital (1892) to carry on the medical mission begun by Dr. Strachan. The Art Industrial School and its Orphanage, opened by him in 1878 to absorb usefully the large number of orphans left by the great famine of 1877, was something unique in the whole State of Madras. A Training School for Women (1877), a High School for Girls (1886) – the first of its kind in the State of Madras – and a High School for Boys (1889) followed one another in quick succession, so eminently catering to the needs of the compact Christian community in and around Nazareth.
Next in importance only to Nazareth were two other centers in and around which SPG developed its mission in Tirunelveli. They were Sawyerpuram and Idaiyangudi. The first was a settlement of persecuted Christians on land provided by one Mr. Sawyer, an Anglo-Indian layman in the employ of East India Company, who was very friendly with SPCK missionaries. The village thus formed in 1814 was gratefully named after him as Sawyerpuram. The Christian settlers quickly organized themselves, and as early as in 1838 had built for themselves a small church and a school attended by 10 children. It was, however, with the advent of intrepid young Dr. G U Pope in 1842 that Sawyerpuram shot into prominence in the annals of missionary history. He established in 1844 the renowned "Sawyerpuram Seminary", which for a long time was the nursery of hundreds of Indian clergymen, teachers and catechists. The esteem in which this reputable center of learning was held can be seen from the fact that the Oxford University contributed to the formation of a suitable library within its walls.
Dr. G U Pope’s efforts were equally directed to the extension of the Church. He carried the light of the Gospel into every neighbouring village, and stationed catechists trained by himself in Christian doctrine to minister to the needs of the congregations. He built the All Saints Church at Subramaniapuram enduring extreme hostility and insult. The lovely red-brick Holy Trinity Church at Sawyerpuram was built by Rev. Huxtable and Rev. Sharrock and dedicated on 11th November 1877 by the Most Rev. Johnson, Metropolitan of India.
Sawyerpuram was also the venue of the SPG's first experiment in "Medical Evangelism". From the small beginning of a clinic set up in 1854 there sprang up St. Raphael's Hospital, which became increasingly popular and did signal service during the outbreak of epidemics following the famine of 1877-1879. The hospital came under Indian leadership when Dr. A Joseph assumed charge of it and served faithfully till 1896.
It is significant that Dr. Pope's Seminary blossomed into a College and was affiliated to the University of Madras in 1880. Rev. Sharrock was its first Principal. Bishop Caldwell thought it good to shift the College along with the High School to Tuticorin, leaving Sawyerpuram with a Middle School in 1883.
Based at Sawyerpuram, Dr. Pope directed his labours to Pudukottai and Puthiamputhur in the north. It is interesting that the first Church at Puthiamputhur was erected in 1844 out of funds contributed by the Sawyerpuram Church Building Society. From Puthiamputhur, the Church spread north to Nagalapuram, which has since been a significant Christian outpost in a predominantly backward and troubled area.
The other SPG stronghold, Idaiyangudi (the shepherd's dwelling) in the extreme south of Tirunelveli district, was so entirely a product of the labours of Dr. Caldwell. The village had earlier come under the influence of Gericke and Sathianathan. But the early converts, with no adequate supervision, had relapsed into Hinduism. It was among the wreck of those once Christian congregations that Caldwell was sent in 1841 to labour, to gather up the fragments that remained and restore what was lost. With such devotion and wisdom did Rev. Caldwell apply himself to his task that his rewards were phenomenal. Entire villages accepted Christ, and churches and schools sprouted up so fast that Idayangudi soon became a model Christian settlement. The Holy Trinity Church, built under Caldwell's personal supervision and even with his own labour during a period of 33 years, was consecrated by him after he became Assistant Bishop of Tirunelveli in 1880. The chiming bells were a gift from Lord Napier, then Governor of Madras. On becoming Assistant Bishop, Dr. Caldwell moved out to Tuticorin, which was his headquarters from 1883. After Bishop Sargent's death he had for a few months the Episcopal oversight of the CMS field as well. He died while at Kodaikanal on 28-8-1891, and his body was brought to be buried so fittingly beneath the altar of the Church at Idayangudi which he built and ministered in. His passing away brought to an end the age of the mission, with the stage well set for the merging of the two fields of CMS and SPG into a single Bishopric in 1896.
The mingling of the waters (1896 – 1924)
For three quarters of a century the two great streams of CMS and SPG had been watering the Tirunelveli Church. The smallness of the field made it inevitable that these streams should ultimately coalesce. The years 1896 – 1924 may be regarded as an eventful period during which the waters of these two streams began mingling, until they could issue in one single torrent as the Tirunelveli Diocese.
The birth of the Bishopric
Until 1896 Tirunelveli was part of the jurisdiction of the Bishop of Madras, to assist whom Dr. Sargent and Dr. Caldwell had been consecrated as Assistant Bishops in 1877. The system of "Society Bishops" was not a very satisfactory one; and with the passing away of these two stalwarts in 1890 and 1891, the SPG in particular, urged the creation of a separate bishopric for Tirunelveli, for whose endowment it generously voted in 1891 a sum of £ 5000. It took 5 years to overcome certain legal difficulties in the way; and on 28th October 1896 Rev. Samuel Morley was consecrated at Madras as Bishop of Tirunelveli and Madurai. A suitable Bishopstowe was built at Palayamkottai to provide residence for the Bishop and house his office. A lovely chapel was added to it by Bishop Waller.
The Jubilee interlude
Before any tangible advance could be made towards the merging of the missions, both the Societies had good reasons for pausing awhile to celebrate significant landmarks in their histories. CMS was born in London on 12th April 1799. Its first Centenary was fittingly celebrated at Palayamkottai in 1899 amidst scenes of great enthusiasm. To commemorate the occasion was built the CMS Centenary Hall in Tirunelveli – easily one of the biggest halls in South India with a capacity to accommodate 3000 people. In addition to its being used for children’s services and special missions, it is being used extensively as a public hall for many deserving cultural purposes.
Two years later was celebrated at Sawyerpuram, in February 1901, the Bi-centenary of the SPG, in which the leaders of CMS congregations also took a full share. Bishop Williams set the ball rolling for the "Diocesanisation" when he convened on 28th January 1908 a meeting of all clergymen in his Bishopric and lay representatives of the CMS and the SPG congregations. Many such dialogues followed, in which differences were ironed out one by one. A common magazine came to be issued and a Common Prayer Book came to be used. A Constitution for the new Diocese was hammered out.
The Tinnevely Diocesan Trust Association was constituted to administer the properties of CMS and SPG. The "Diocesanisation" was almost an accomplished fact before its most redoubtable champion, Bishop Waller, was translated to Madras in 1922. It now remained only to set the formal seal of approval to the Diocesan Constitution. That was left to Bishop Tubbs.
A historic session of the Diocesan Council was held on 11th March 1924 to formally usher into birth the Diocese of Tirunelveli. Both the CMS and the SPG had sent their blessings to the new Diocese, ultimately the product of their joint enterprise and effort. A Central Diocesan Office was established, and for administrative convenience 3 Church Councils (North, Central and South) were set up in 1925.
<urn:uuid:3a8a9381-48db-4427-9f13-1432cbc25bf1> | [a] NAVEDTRA 10371, Aerographer�s Mate 2, Vol. 2
[b] NAVAIR AE-CVATC-OPM-000, Carrier Air Traffic Control Handbook
[c] NAVEDTRA 12701, Photography (Advanced)
[d] NAVAIR 00-80T-105, CV NATOPS Manual
103.1 Explain the effects of the following weather phenomena on flight operations:
a. Lightning and electrostatic discharge - Lightning strikes and electrostatic discharges are two of the leading causes of reported weather-related aircraft accidents and incidents. All types of aircraft are susceptible to lightning strikes and electrostatic discharges. Aircraft have been struck by lightning or have experienced electrostatic discharges on the ground or at altitudes ranging to at least 43,000 feet Lightning strikes can cause severe structural damage to aircraft. Damage to aircraft electrical systems, instruments, avionics, and radar is also possible. Transient voltages and currents induced in the aircraft electrical systems, as well as direct lightning strikes, have caused bomb doors to open, activated wind-folding motors, and made the accuracy of electronic flight-control navigational systems questionable. Pilots and crew are not immune to the effects of lightning strikes either. Flash blindness can last up to 30 seconds, and the shock wave can cause some temporary hearing loss if headphones or some form of hearing-loss-protection gear is not worn. Some aircrews have even experienced a mild electric shock and minor burns. A charge also may build up on an aircraft after it has been flying through clouds and precipitation, including snow as well as rain, or solid particles such as dust, haze, or ice. The larger the aircraft and the faster it flies, the more particles it impacts, generating a greater charge on the aircraft. The electrical field of the aircraft may interact with the cloud, and an electrostatic discharge may then occur. Electrostatic discharges usually cause only minor physical damage and indirect effects, such as electrical circuit upsets. Lightning occurs at all levels in a thunderstorm. The majority of lightning discharges never strike the ground but occur between clouds or within the same cloud. However, aircraft flying several miles from a thunderstorm can still be struck by the proverbial �bolt out of the blue.� Electrical activity generated by a thunderstorm may continue to exist even after the thunderstorm itself has decayed. This electrical activity may drift downstream and is usually found within the cirrus deck that at one time was connected to the thunderstorm cell. b. Hail - Hail is regarded as one of the worst hazards of thunderstorm flying. As a rule, the larger the storm, the more likely it is to have hail. Hail has been encountered as high as 45,000 feet in completely clear air and may be carried up to 10 miles downwind from the storm core. Hail can occur anywhere in a thunderstorm, but it is usually found beneath the anvil of a large cumulonimbus. Hailstones larger than � to � inch can cause significant aircraft damage in a few seconds. c. Icing - The formation of ice on lift-producing airfoils (wings, propellers, helo rotors, and control surfaces) disrupts the smooth flow of air over these surfaces. The result is decreased lift, increased drag, and increased stall speed of fixed-wing aircraft. Most aircraft that are normally loaded can fly with icing conditions ongoing and, under normal circumstances, the danger is not too great. When aircraft are critically loaded, however, icing is extremely important. The formation of ice on some structural parts of an aircraft can cause vibration and place added stress on those parts. For example, vibration caused by a small amount of ice unevenly distributed on a delicately balanced rotor or propeller can create dangerous stress on the system, transmission, and engine mounts. d. 
d. Turbulence - Turbulence is defined as "any irregularity or disturbed flow in the atmosphere that produces wind gusts or wind eddies." Any sudden change in wind direction, speed, or general flow can be called turbulence and can cause problems for aircraft. Turbulence can be manmade or occur naturally. Aircraft in motion generate turbulence in their wake (called, appropriately enough, "wake turbulence"), which can present a serious hazard to other aircraft flying through this wake. This section is concerned with turbulence associated with thunderstorms. Storm clouds are the visible portions of a turbulent weather system, whose updrafts and downdrafts often extend outside the storm proper. Hazardous turbulence is present in all thunderstorms, and in a severe thunderstorm it can cause serious injury to passengers and crew. Outside the cloud, shear turbulence has been encountered several thousand feet above and 20 miles laterally from a severe storm. Severe turbulence can be encountered in the anvil of a thunderstorm 15 to 30 miles downwind. Any air operations (especially launch and recovery) must take into account the presence of turbulent systems near the carrier, along intended flight routes, and at possible divert fields. e. Fog/stratus - Fog is a layer of suspended water droplets adjacent to the Earth's surface. Stratus is fog that has been lifted or has formed some distance above ground. Stratus clouds and fog occur at or near the surface of the earth and can seriously restrict visibility at low levels. Therefore, they are a very important consideration in aircraft operations, particularly in connection with landings and takeoffs. Fog is especially significant to the pilot who limits his flying to visual flight rules, because ceilings under stratus clouds often are very low, and visibility in fog often is not sufficient to permit navigation by visual reference.
103.2 State the weather criteria for the following launch/recovery conditions:
a. Case I - When it is anticipated that flights will not encounter instrument conditions during daytime departures and recoveries, and the ceiling and visibility in the carrier control zone are no lower than 3,000 feet and 5 nm, respectively. b. Case II - When it is anticipated that flights may encounter instrument conditions during a daytime departure or recovery, and the ceiling and visibility in the carrier control zone are no lower than 1,000 feet and 5 nm, respectively. c. Case III - When it is anticipated that flights will encounter instrument conditions during a departure or recovery because the ceiling or visibility in the carrier control zone is below 1,000 feet or 5 nm, respectively; or during any nighttime departure or recovery (one-half hour after sunset to one-half hour before sunrise).
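Read as a decision rule, the criteria above reduce to a few threshold checks on ceiling, visibility, and time of day. The C sketch below is only an illustration of that logic; the function, type, and parameter names are invented for the example and do not come from any NATOPS or air operations reference.

```c
#include <stdbool.h>

typedef enum { CASE_I, CASE_II, CASE_III } launch_case;

/* Hypothetical helper: classify the launch/recovery case from the criteria
 * above. ceiling_ft and visibility_nm describe the carrier control zone;
 * is_night is true from one-half hour after sunset to one-half hour before
 * sunrise; imc_expected is true when flights may encounter instrument
 * conditions. */
launch_case classify_case(double ceiling_ft, double visibility_nm,
                          bool is_night, bool imc_expected)
{
    /* Case III: night, or conditions below 1,000 feet / 5 nm. */
    if (is_night || ceiling_ft < 1000.0 || visibility_nm < 5.0)
        return CASE_III;

    /* Case II: daytime, minima met, but instrument conditions possible. */
    if (imc_expected)
        return CASE_II;

    /* Case I: daytime, no instrument conditions expected, and at least a
     * 3,000-foot ceiling with 5 nm visibility; otherwise treat as Case II. */
    if (ceiling_ft >= 3000.0 && visibility_nm >= 5.0)
        return CASE_I;

    return CASE_II;
}
```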
103.3 Explain the function of the plane guard helicopter. - During flight operations, a plane guard (helicopter) mission is scheduled on each departure and recovery for the purpose of rescuing aircraft crew members who may go down during the operations.
103.4 Discuss the following: - On a carrier, two spaces are responsible for the control of airborne aircraft: the Carrier Air Traffic Control Center (CATCC) and the Combat Direction Center (CDC). CATCC (pronounced "KAT-SEE") is responsible for the control of aircraft operating within the Carrier Control Area (a circular airspace within a radius of 50 nm around the carrier). It is organized into Air Operations (AirOps), Carrier Controlled Approach (CCA), and the Air Transfer Office (ATO). CCA is responsible for operational control of aircraft departing the ship and recovery of inbound aircraft after a mission is complete. It is roughly equivalent to the Approach Control branch of an ashore Air Traffic Control (ATC) facility. Air traffic control is provided by the following positions in CCA: Departure Control, Marshal Control, Approach Control and Final Control. Each of these four areas has "control" of aircraft at different times and during different phases of aircraft flight. a. Departure control - Departure Control is responsible for the control of departing aircraft during Case I, II and III departures. Departure control is provided between initial radar contact with aircraft and transfer of control to CDC. This position is also responsible for monitoring the location and package status of tanker aircraft; the location of low-state aircraft and their fuel requirements; and may provide positive control during rendezvous between a tanker and low-state aircraft. b. Marshal control - Marshal Control is responsible for the control of inbound aircraft during Case I, II and III. Control is provided between initial contact normally commencing with the pilot's check-in report and transfer of control to either PriFly during Case I operations or to Approach Control during Case II and III operations. Marshal Control provides arrival information, establishes the initial interval between aircraft, and monitors the commencement of the approach until a handoff has been completed. Note: Positive control is provided only upon commencement and radar contact unless under non-radar control. c. Approach control - Approach Control is responsible for the control of aircraft on approach during Case II and III. Control is provided between handoff from Marshal and transfer of control to PriFly during Case II. Control is transferred to Final Control during Case III operations but Approach Control retains responsibility for aircraft separation. Approach Control tasks include making holes for bolter traffic, maintaining the appropriate interval and ensuring the first aircraft makes the ramp time. d. Final control - Final Control is responsible for the control of aircraft on final approach during Case III to ensure optimum alignment until transfer of control to the LSO or the aircraft reaches approach weather minimums. Final Control is primarily responsible for the control of aircraft glide slope and lineup performance and secondarily responsible for aircraft separation.
103.5 Discuss the following evolutions as they pertain to air traffic control:
a. Cyclic operations - Normal flight operations are conducted in cycles. In cyclic operations, aircraft are launched and recovered in groups. These groups of aircraft are referred to as events, and are assigned a numeric designator based upon their launch order, i.e., Event 1, Event 2, Event 3, etc. Each aircraft in an event is referred to as a sortie. A sortie is the flight of one aircraft from launch to recovery. In cyclic operations, the launch of each event is followed immediately by the recovery of the preceding event. b. Carrier Qualifications (CQ) - Carrier Qualification (CQ) operations, also referred to as CARQUALS, are conducted by carriers to qualify newly designated pilots in carrier flight operations and to requalify previously qualified pilots. CQ operations differ from cyclic operations in that launch and recovery operations are conducted concurrently (i.e., as each aircraft is recovered, it is taxied to the catapult area and launched, referred to as a hot spin). This process is interrupted only for aircraft refueling and the switching of pilots (during CQ operations, more than one pilot will qualify in the same aircraft). To expedite CQ operations, aircraft refueling and the switching of pilots are often performed with the aircraft engines running, referred to as hot pump and hot switch, respectively. Special recovery condition requirements are imposed upon CQ in terms of approach weather minimums, carrier deck motion, divert fields, air traffic control procedures, etc. The requirements are more stringent than those for cyclic operations. Also the shorter cyclic interval enables aircraft to be recovered immediately after their fuel and/or weapons are expended, i.e., after one, two or three cyclic intervals. c. Flex deck - Flex Deck is a special type of flight operation in which the flight deck is kept ready (flexible) to launch and recover aircraft at short and irregular intervals of time. The operations are performed when there is a calculable and significant threat of attack to the carrier. The normal cyclic interval of 90 minutes is typically reduced to between 40 and 60 minutes. The shorter cyclic interval enhances the capability of the carrier to respond to the escalated threat of attack by increasing the opportunities for launching, recovering, refueling, rearming and reconfiguring aircraft.
103.6 Define the term ramp time. - During cyclic operations, launch times are fixed but recovery times are not. Recovery times are estimates calculated by the Air Boss and are referred to as Charlie time for Case I recoveries, break time for Case II recoveries and ramp time for Case III recoveries.
103.7 State the responsibilities of the Landing Signal Officer (LSO). - The Landing Signal Officer (LSO), under supervision of the air officer, is responsible for the visual control of aircraft in the terminal phase of the final approach and landing. The LSO assumes control of aircraft when they are approximately 3/4 nm from the carrier, giving radio directions to the pilot if necessary. If the pilot fails to respond or if the approach continues to deteriorate, the LSO will command a waveoff. For aircraft that are waved off or fail to make an arrested landing, the LSO is responsible for ensuring that pilot and aircraft performance is satisfactory during the initial climb out. The LSO's primary responsibility is the safe recovery of fixed-wing aircraft aboard ship. The LSO shall inform the Commanding Officer, through the Air Boss, of any conditions which might interfere with the recovery (e.g., equipment malfunctions, improper deck configuration, adverse weather, wind or sea conditions). In addition, the LSO must constantly monitor pilot performance, schedule and conduct necessary ground training, counsel and debrief individual pilots, certify their carrier readiness qualification, and maintain records of each carrier landing.
103.8 Describe the following systems: [ref. b]
a. Bullseye - A term used in pilot/controller communications to refer to the Independent Landing Monitor (ILM). The ILM components are the AN/SPN-41 (shipboard) or AN/TRN-28 (shore based) and the AN/ARA-63 or AN/ARN-138 (airborne). The SPN-41 radar measures azimuth and elevation of the approaching aircraft and relays the data to a display within the aircraft, giving the pilot an indication of where the aircraft is (high, low, left, right) in relation to the proper glide slope required to land on the carrier. The display is similar to the "needles" display covered below (in "PALS") and gives the pilot a visual "bullseye" of sorts to aim for. The Bullseye system (SPN-41) only provides this information to the approaching aircraft (not the controllers onboard the carrier). Bullseye is normally used in conjunction with PALS. b. Precision Approach and Landing System (PALS) - PALS enables carrier pilots to perform instrument approaches under either manual or automatic control. The system consists of a precision tracking radar coupled to a computer and data link that provides continuous azimuth and elevation information to aircraft and shipboard controllers. PALS is also referred to as "easy rider." The following terms are associated with PALS: APC - Approach Power Compensator. An aircraft component that automatically controls engine thrust to maintain the appropriate angle-of-attack during PALS approaches. AFCS - Automatic Flight Control System. An autopilot used to automatically control aircraft on approach to the carrier. It controls aircraft pitch and bank attitude from commands furnished through the data link (see above). DRO - Data Readout. A component of the PALS system that provides the PALS operator (Final Controller) with aircraft address and range, final bearing, and the status of the data link. Needles - Term used in pilot/controller communications to refer to the PALS display of azimuth and elevation error signals (i.e., how far left or right, and how high or low, the aircraft on approach is relative to its proper glide slope).
103.9 State the effects of Emissions Control (EMCON) on aviation. - Electronic Emission Control (EMCON) imposes restrictions on the use of electronic systems to deny the enemy information that could be used to determine the location of the carrier. When imposed, radio transmissions between pilots, and between pilots and carrier control agencies, are held to the minimum necessary for safety of flight. During EMCON, restrictions can be imposed on the use of all electronic systems, including the radar and radio systems used by CATCC to control aircraft. As a result, CATCC provides monitor control during EMCON. However, CATCC is manned as required by the type of departure and recovery (Case) being conducted, and is prepared to assume control of aircraft if EMCON is terminated.
103.10 State the purpose of the Air Plan. - The Air Plan is an event by event listing of scheduled flight activity (see "cyclic ops" above), in visual form. It lays out in plain view what squadrons will be flying what types of missions, the type of aircraft, when they are scheduled to launch and recover, sunrise and sunset times (and moonrise and moonset times for night ops), all on a single sheet of paper. Also included on the reverse side are divert field information, fuel loads, ordnance loads (if any), and any other notes. The Air Plan is drafted by Strike Operations.
103.11 Define the acronym TARPS. - Tactical Air Reconnaissance Pod System. TARPS is a system of photographic cameras mounted in a "pod" and carried on properly configured F-14 aircraft. It gives the carrier battle group its only organic (ship-based) photoreconnaissance capability. The pod contains 2 film cameras and an infrared line scanner. A digital version that can transmit its images directly to the carrier in real time is being deployed to some units.
A floating point unit (FPU) is a part of a computer system specially designed to carry out operations on floating point numbers. Typical operations are addition, subtraction, multiplication, division, and square root. Some systems (particularly older, microcode-based architectures) can also perform various transcendental functions such as exponential or trigonometric calculations, though in most modern processors these are done with software library routines.
In most modern general-purpose computer architectures, one or more FPUs are integrated with the CPU; however, many embedded processors, especially older designs, do not have hardware support for floating point operations.
In the past, some systems have implemented floating point via a coprocessor rather than as an integrated unit; in the microcomputer era, this was generally a single microchip, while in older systems it could be an entire circuit board or a cabinet.
Not all computer architectures have a hardware FPU. In the absence of an FPU, many FPU functions can be emulated, which saves the added hardware cost of an FPU but is significantly slower. Emulation can be implemented on any of several levels - in the CPU as microcode, as an operating system function, or in user space code.
In most modern computer architectures, there is some division of floating point operations from integer operations. This division varies significantly by architecture; some, like the Intel x86, have dedicated floating point registers, while some take it as far as independent clocking schemes.
Floating point operations are often pipelined. In earlier superscalar architectures without general out-of-order execution, floating point operations were sometimes pipelined separately from integer operations. Today, many CPUs/architectures have more than one FPU, such as the PowerPC 970, and processors based on the NetBurst and AMD64 architectures (such as the Pentium 4 and Athlon 64, respectively).
When a CPU is executing a program that calls for a floating-point operation, there are three ways to carry it out: entirely in hardware by an integrated or add-on FPU, partly in hardware with the remaining operations emulated in software, or entirely in software.
Most modern computers have integrated FPU hardware.
Some floating-point hardware only supports the simplest operations -- addition, subtraction, and multiplication. But even the most complex floating-point hardware has a finite number of operations it can support -- for example, none of them directly support arbitrary-precision arithmetic.
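As a side note on what "arbitrary-precision" means when it is done in software, the toy C sketch below adds two multi-word integers one 32-bit limb at a time. The limb count, function name, and example values are invented for illustration; real libraries such as GMP use the same carry-propagation idea with far more sophisticated code.

```c
#include <stdint.h>
#include <stdio.h>

#define LIMBS 4   /* four 32-bit limbs = a 128-bit number, least significant first */

/* Schoolbook addition: add limb by limb and carry into the next position. */
static void big_add(const uint32_t a[LIMBS], const uint32_t b[LIMBS],
                    uint32_t sum[LIMBS])
{
    uint64_t carry = 0;
    for (int i = 0; i < LIMBS; i++) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;
        sum[i] = (uint32_t)t;   /* low 32 bits of the partial sum */
        carry  = t >> 32;       /* carry into the next limb */
    }
}

int main(void)
{
    /* (2^64 - 1) + 1 overflows a 64-bit integer but not this representation. */
    uint32_t a[LIMBS] = { 0xFFFFFFFFu, 0xFFFFFFFFu, 0, 0 };
    uint32_t b[LIMBS] = { 1, 0, 0, 0 };
    uint32_t s[LIMBS];
    big_add(a, b, s);
    printf("%08x %08x %08x %08x\n", s[3], s[2], s[1], s[0]);
    return 0;
}
```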
When a CPU is executing a program that calls for a floating-point operation not directly supported by the hardware, the CPU uses a series of simpler floating-point operations. In systems without any floating-point hardware, the CPU emulates it using a series of simpler fixed-point arithmetic operations that run on the integer arithmetic logic unit.
The software that lists the necessary series of operations to emulate floating point operations is often packaged in a floating-point library.
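A minimal sketch of the emulation idea, using nothing beyond the integer ALU: the C code below implements a toy Q16.16 fixed-point format rather than a true IEEE-754 soft-float library, so the format and helper names are ours, but it shows how fractional arithmetic can be reduced to integer additions, multiplications, and shifts.

```c
#include <stdint.h>
#include <stdio.h>

typedef int32_t q16_16;            /* 16 integer bits, 16 fraction bits */
#define Q_ONE (1 << 16)

static q16_16 q_from_double(double x) { return (q16_16)(x * Q_ONE); }
static double q_to_double(q16_16 x)   { return (double)x / Q_ONE; }

/* Addition is a plain integer addition. */
static q16_16 q_add(q16_16 a, q16_16 b) { return a + b; }

/* Multiplication needs a wider intermediate, then a shift to rescale. */
static q16_16 q_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * (int64_t)b) >> 16);
}

/* Division rescales the numerator before the integer divide. */
static q16_16 q_div(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a << 16) / b);
}

int main(void)
{
    q16_16 a = q_from_double(3.25), b = q_from_double(1.5);
    printf("3.25 + 1.5 = %f\n", q_to_double(q_add(a, b)));  /* 4.750000 */
    printf("3.25 * 1.5 = %f\n", q_to_double(q_mul(a, b)));  /* 4.875000 */
    printf("3.25 / 1.5 = %f\n", q_to_double(q_div(a, b)));  /* ~2.166667 */
    return 0;
}
```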
In some cases, FPUs may be specialized and divided between simpler floating point operations (mainly addition and multiplication) and more complicated operations, like division. Sometimes only the simple operations may be implemented in hardware, while the more complex operations are emulated in software.
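To illustrate how a more complex operation can be built from simpler ones, the C sketch below approximates division using only multiplication and subtraction, refining a reciprocal estimate with Newton-Raphson iteration (each step roughly doubles the number of correct digits). The crude starting guess and fixed iteration count are arbitrary choices for the example; real hardware and library implementations seed the iteration from a small lookup table.

```c
#include <stdio.h>

/* Approximate a / b using only multiply and subtract:
 *   r_next = r * (2 - b * r)   converges to 1 / b,  then  a / b ~= a * r. */
static float approx_divide(float a, float b, int iterations)
{
    float r = 0.1f;                      /* crude initial guess for 1/b */
    for (int i = 0; i < iterations; i++)
        r = r * (2.0f - b * r);          /* multiply and subtract only */
    return a * r;
}

int main(void)
{
    printf("7 / 3 ~= %f\n", approx_divide(7.0f, 3.0f, 8));
    printf("exact  = %f\n", 7.0f / 3.0f);
    return 0;
}
```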
In some current architectures, the FPU functionality is combined with units to perform SIMD computation; an example of this is the replacement of the x87 instruction set with the SSE instruction set in the x86-64 architecture used in newer Intel and AMD processors.
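The practical difference is visible from C through compiler intrinsics: _mm_add_ps is a real SSE intrinsic that adds four single-precision floats in one packed operation, while the preceding loop does the same work one float at a time. The array contents here are arbitrary example data.

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics on x86/x86-64 compilers */

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float scalar[4], simd[4];

    /* Scalar: four separate floating point additions. */
    for (int i = 0; i < 4; i++)
        scalar[i] = a[i] + b[i];

    /* SIMD: one packed addition operating on four floats at once. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(simd, _mm_add_ps(va, vb));

    for (int i = 0; i < 4; i++)
        printf("%f %f\n", scalar[i], simd[i]);
    return 0;
}
```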
In the 1980s, it was common in IBM PC/compatible microcomputers for the FPU to be entirely separate from the CPU, and typically sold as an optional add-on. It would only be purchased if needed to speed up or enable math-intensive programs.
The IBM PC, XT, and most compatibles based on the 8088 or 8086 had a socket for the optional 8087 coprocessor. The AT and 80286-based systems were generally socketed for the 80287, and 80386/80386SX based machines for the 80387 and 80387SX respectively, although early ones were socketed for the 80287, since the 80387 did not exist yet.
Starting with the 80486, the floating point unit in x86 chips was integrated with the CPU, something true for almost all later x86-architecture processors. One notable exception is the 80486SX; it was also unusual in that no actual coprocessor was available -- the 80487 was a full CPU with an integrated FPU; when installed, the original 80486SX would be disabled.
In addition to the Intel x87 series, several other companies manufactured coprocessors for the x86 series. These included Cyrix, which marketed its FasMath series as higher performance but fully x87 compatible, and Weitek, which offered a high-performance but not fully x87-compatible series of coprocessors.
In addition to the Intel architectures, FPUs as coprocessors were available for the Motorola 680x0 line. These FPUs, the 68881 and 68882, were common in 68020/68030-based workstations like the Sun 3 series. They were also commonly added to higher-end models of Apple Macintosh and Commodore Amiga series, but unlike IBM PC-compatible systems, sockets for adding the coprocessor were not as common in lower end systems. With the 68040, Motorola integrated the FPU and CPU, but like the x86 series, a lower cost 68LC040 without an integrated FPU was also available.
Also, there are add-on FPU coprocessor units for microcontroller units (MCUs/µCs) and single-board computers (SBCs), which serve to provide floating point arithmetic capability in systems that might not otherwise possess it. The difference in these types of FPU coprocessors, when compared to more traditional floating point coprocessors such as the 80x87 series, is that these add-on FPUs are host-processor independent, possess their own programming requirements, and are often provided with their own IDEs. A non-exhaustive listing of these types of FPUs:
News, Notables, Innovations
Focus on Footwear
Since before recorded history, humans have fashioned shoes in an almost infinite variety of styles and materials, optimized for thousands of specialized purposes. That breadth is captured in a new book of photos, 10,000 Years of Shoes (University of Oregon, 2011) by Pulitzer Prize–winning photojournalist Brian Lanker (1947–2011). Accompanying the images are three essays; in the one excerpted below, “Luther Cressman and the Fort Rock Sandals,” Thomas J. Connolly, MS ’80, PhD ’86, director of archaeological research at the UO Museum of Natural and Cultural History, tells the story of the world’s oldest known shoe.
Roughly 15,000 years ago, as the Ice Age waned, humans migrated into the Americas from northeast Asia. By land or by sea, this migration was impossible without clothing that protected people from the cold, including footwear. Some of the earliest evidence for humans in the Americas comes from the Great Basin—a region that spans parts of Oregon, Nevada, and California. People camped at Paisley Caves in central Oregon almost 14,500 years ago. To the north, Fort Rock Cave overlooks a broad basin that once held a huge lake where wave erosion carved numerous caves and rockshelters at the water’s edge. When the first Americans arrived on the continent, the massive lakes had shrunk to shallow marshes. Rich in edible plants and game animals, the area attracted people who sheltered in these caves.
Millennia later, these dry and dusty Great Basin caves also attracted archaeologists—and there they found some of the world’s oldest shoes. Archaeological excavations at Fort Rock Cave in 1938 by the University of Oregon’s Luther Cressman led to the discovery of nearly 100 sagebrush-bark sandals buried by wind-blown silt and later by Mazama ash from the cataclysmic volcanic explosion that created Crater Lake. At the time of their discovery, these sandals could not be precisely dated, but their position below Mazama ash suggested that they were truly ancient.
Luther Cressman was a professor of anthropology and founded the Museum of Natural and Cultural History. However, he could scarcely have predicted where his life and research would take him. Cressman grew up on a farm in Pennsylvania and attended Pennsylvania State University, where he earned a degree in English. After a brief army stint in World War I, he trained for the Anglican ministry and completed graduate studies in sociology at Columbia University. There he was mentored by the influential anthropologist Franz Boas and also met and married his first wife, Margaret Mead (1923–27), who later became a world-famous anthropologist. He later married Dorothy Cecilia Bloch and in 1929, at the age of thirty-two, moved west to take a position in sociology at the University of Oregon.
Cressman’s first step into archaeology was more by circumstance than planning. In 1930, he was invited to investigate several Indian burials exposed in a farmer’s field near Gold Hill in southwest Oregon. Awed by the opportunity to learn of “human beings and their works,” he acknowledged that the endeavor was neither the sociology nor cultural anthropology with which he was familiar. Recognizing his shortcomings in geology, botany, zoology, and other fields critical to interpreting archaeological sites, he sought the help of specialists in a multidisciplinary approach that marked his entire career.
Cressman followed his serendipitous entry into archaeology with a plan to systematically document Native rock art. He wrote to postmasters throughout Oregon to contact people who knew of petroglyphs and pictographs, followed by an extended field trip in 1932. Talking with ranchers and amateur historians, he learned of caves and rock shelters with the potential to hold a long record of human history.
Today, it is difficult to appreciate the obstacles Cressman faced in pursuing fieldwork in the 1930s. At that time, there were only two paved highways in eastern Oregon; a north-south route along the base of the Cascade Range from the Columbia River to the California border, and an east-west route from Bend to Boise. Neither provided access to the areas that drew his interest, where detailed maps were nonexistent.
In 1935, Cressman planned his first excavation at Catlow Cave, south of Burns. The site produced a wealth of twined basketry of a distinctive type now known as “Catlow Twine.” Three years later, he and his crew spent a week at Paisley Caves and another week at Fort Rock Cave and found artifacts well below Mazama ash. At the time, most archaeologists considered the Oregon cave materials to be less than 2,000 years old. However, Paisley Caves produced artifacts that appeared to be directly associated with the bones of extinct horses and camels, animals that disappeared from North America more than 10,000 years ago. These associations provided evidence that people were present in the region thousands of years earlier than previously thought.
Cressman’s colleagues remained skeptical until 1951, when the new method of radiocarbon (14C) dating vindicated him. When first found, the age of the nearly 100 sandals from beneath the Mazama ash layer at Fort Rock Cave could not be precisely known. Radiocarbon dating showed that the Mazama ash was deposited about 7,600 years ago. Cressman directly dated a sandal, buried deep beneath the ash, to more than 9,000 years ago. Subsequent 14C dates have shown that Fort Rock–style sandals were made between about 10,250 and 9,300 years ago—the oldest directly dated shoes in the world!
Cressman and his crew carefully plotted the position of many of the sandals as they found them. They were distributed in an arc around a living area, suggesting people threw them away. Excavations of nearby sites show that brush shelters were often built in caves to conserve heat and protect people from icy winds. Such a shelter may have been present in Fort Rock Cave.
The Fort Rock sandals may have been winter wear, since the Klamath Indians who still live in the area historically made shoes from tule reeds stuffed with dry grass that provided comfort even when walking in icy marsh waters. Most of the sandals from the cave are heavily worn, and many are fragmentary, supporting the idea that they were discarded rather than stored for later use. Seeing them as a group, it is impossible not to be moved by the people and community they represent. There are large adult shoes for men and women, child-sized shoes, and those for mothers and uncles, sons and daughters, cousins and grandparents—the extended family who made the cave their home 10,000 years ago. Some sandals are caked with mud, others are mud-free, illustrating the varied environments they visited for hunting, food harvesting, or play. Many sandals are worn through at the balls of feet or at the heels, allowing you to trace the toes and other features of the feet that occupied them. Looking at one pair with well-worn soles and tiny char marks on the toe flaps, one can visualize sparks rising from a crackling hearth fire, as their wearer added fuel or paced the floor as a grandmother might.
Found in caves throughout southeast Oregon and northern Nevada, Fort Rock–style sandals are stylistically distinctive. They disappear by about 9,300 years ago, after which other sandal forms take their place.
Luther Cressman's excavations unearthed shoes that tell us about the people who wore them, the environments they lived in, and the community that sheltered in Fort Rock Cave—providing future generations the opportunity to study, interpret, and admire the oldest shoes in the world.
When an expected three million vacationers roll into Yellowstone National Park later this year, they will see the geysers and grizzlies and other natural wonders for which the park has been renowned since its founding in 1872. And now, thanks to an extraordinary effort by a team of UO geographers, those visitors will also have access to the most comprehensive and data-rich compilation of information ever assembled about Yellowstone.
The Atlas of Yellowstone (University of California Press, 2012) includes more than 830 maps, charts, graphs, and photos in nearly 300 large-format pages. The project, which took eight years to complete, involved contributions from about 100 “topic experts” specializing in areas such as physical geography (the land and its attributes such as volcanoes, rainfall, rivers, and geology), plants and animals, and the early Native American inhabitants and ever more complex human interactions with the environment that have marked more recent times. These experts came mostly from Yellowstone and Grand Teton National Parks, the U.S. Geological Survey, the University of Oregon, Montana State University, the University of Wyoming, the Museum of the Rockies, the Buffalo Bill Historical Center, Headwaters Economics, and the Yellowstone Ecological Research Center.
Shaping this wealth of information into an atlas was the work of the UO’s Department of Geography and its InfoGraphics Laboratory, which also produced the award-winning Atlas of Oregon (UO Press, 2001). Lab director James E. Meacham ’84, MA ’92, managed a team of fifteen paid geographers and cartographers—most of them students—aided by another five cartographers at Allan Cartography in Medford. Overseeing the entire effort was UO professor of geography W. Andrew Marcus, who says of the epic project, “It was like coordinating a small army.”
UO GEOGRAPHY DEPARTMENT–INFOGRAPHICS LABORATORY
Much of the two-page spread on the topic of income (reduced in size to fit here)—a good example of the many layers of information presented in the Atlas of Yellowstone.
Girls Will Be Boys and Boys Will Be Girls
Americans have long cherished romantic images of the West and its colorful cast of characters. According to Peter Boag, PhD ’88, who holds the Columbia Chair in the History of the American West at Washington State University, that cast of cowpokes and saloon keeps, farmers and ranchers, sheriffs and “soiled doves” might well also include cross-dressers, male and female. In his extensively researched Re-Dressing America’s Frontier Past (University of California Press, 2011), Boag explores the historical and cultural setting of the time, recounts the stories of many of these western denizens, and discusses why they have largely faded from our collective memory of those much-celebrated days. One of these stories is excerpted below.
Most sources suggest that Joe Monahan turned fifty-three in 1903. By then he had made his home for almost four decades in and about the Owyhee Mountains of extreme southwestern Idaho. The last twenty or so of those years he resided on Succor Creek, a small stream that tumbles westward, down from the Owyhees, before it meanders out into the deserts of neighboring southeastern Oregon. In the last days of 1903, just as late autumn turned to early winter, Monahan contracted some unspecified malady. As he led an otherwise solitary existence, his enfeebled condition led him to seek refuge at the home of Barney and Kate Malloy, who lived just down a spot, on the Oregon side of the state line. . . . Unfortunately, as the new year arrived at the Malloy ranch, Monahan’s sickness only worsened. A virulent coughing fit overcame him during the evening of 5 January 1904. Sometime later that night, Monahan’s otherwise obscure life slipped away.
Similar stories—the sad passing of a weakened and relatively aged pioneer—were stuff of the everyday in the West by the turn of the twentieth century. But this tale turned out to be among the more newsworthy: when Monahan’s neighbors began to prepare his remains for burial, they discovered that their pioneer friend had the body of a woman. Troubled by exactly what to do, they administered a rather perfunctory funeral. A local from nearby Rockville, Idaho, who had for some time known Monahan, later wrote in dismay to a Boise newspaper when he learned about how Monahan had been treated in death. “Not a word was spoken, not a word read, not a prayer offered,” the concerned man lamented. And yet, in his mind, “‘Little Joe’ never did anyone harm . . . so far as is known her life was pure, although disguised as a man. . . . And who can say they never sinned more than ‘Little Joe,’ and who knows the cause that made her do as she did? A cause that might have made [any] one of us a vagabond, a drunkard or a criminal. So let us pray that ‘Little Joe’s’ soul has been received at the ‘Pearly Gates’ as we would wish our’s to be received.”
As this Rockville correspondent’s words evince, despite the fact that Joe Monahan had resided for many years in this remote corner of the Idaho-Oregon borderlands, few there knew a great deal about him. What seems certain about this Idaho pioneer, in fact, composes a rather short list. Monahan shows up in southwestern Idaho as early as the 1870 federal census. He was born about 1850; the census over the years varies somewhat on the exact year. Most sources identify his birthplace as New York . . . Monahan voted in the Republican primary on 28 August 1880 some sixteen years before women in Idaho received suffrage rights. When he died, his estate included about one hundred head of cattle. . . .
Over the days following the deathbed mystery of Monahan, locals in the Idaho-Oregon border country began to relate to the press additional bits of information that they claimed to have learned over the years about their secretive neighbor whose national celebrity was now growing. These stories pretty much held to 1867 as the year that Monahan originally showed up in Silver City. They explain that he began working there first in a livery, followed by a stint in a sawmill. He struck it big in mining, accumulating upward of $3,000, but he had the misjudgment of entrusting the sum to a shady mining superintendent to invest in the business’s stock. Instead, the rascal departed the country, absconding with Monahan’s life savings. Doggedly starting anew, Monahan began selling milk from a cow and eggs from a few chickens he still retained and worked odd jobs here and there until he had accumulated somewhere between $800 and $1,000. He held on to his money this time, taking it with him when he left Silver City and moved across the divide to Succor Creek in about 1883. There he built a rather mean cabin, which some described as little more than a chicken coop while others likened the shack to a dugout. He fenced in forty acres and hired, at least for a short time, a Chinese laborer to help cut grass to feed the one cow and one horse he had brought with him to his new homestead. Over the years Monahan saw his stock increase, tending it about as carefully as he did his earnings. He became known as something of a miser, living sparingly in his cabin, dressing poorly, and often denying himself food, though availing himself of the hospitality that neighbors gladly and often provided. During these years, Monahan also took his civil duties seriously, reportedly voting in every election and serving several times on a jury. Locals also recalled that he could well handle a revolver and a Winchester rifle and that he had become an accomplished horseman.
As the news related these bits and pieces of Monahan’s life, papers farther afield described the revelation of his successful masquerade as causing a local sensation. An Olympia, Washington, publication, for example, explained with the certainty of an eyewitness that “when friendly neighbors were preparing the body for burial, the community was given a decided shock when it was announced that ‘Joe’ Monahan was a woman.” In reality, that Monahan turned out to be physically female caught hardly anyone in and about the Owyhees off-guard. . . . [Friend] William Schnabel . . . explained rather sensitively that “it was always surmised that Joe was a woman. . . . He was a small, beardless, little man with the hands, feet, stature and voice of a woman.”
The 1880 census lends credence to Schnabel’s story. That year, a local farmer and father of six by the name of Ezra Mills served as the census enumerator for District 29 Owyhee County, Idaho Territory, the very census tract in which both he and Monahan resided. . . . For Monahan’s sex, Mills recorded “M” (male) in the appropriate column but took the time to pencil in next to it the editorial comment “Doubtful Sex.” Clearly, for years locals had suspected that Monahan was a woman. But, as Schnabel explained, “no one could vouch for the truth of it. . . . He never would reveal his identity and all cowboys respected him. . . . He never told a word to his best friends who he was and what he was.”
. . . [R]esidents of the Owyhees, although they might have wondered for years and maybe even “surmised” that Joe was a woman, nevertheless had long accepted Monahan as a man, one who was deeply enmeshed in their community. Moreover, the cowboys of the area, to use Schnabel’s words, “treated him with the greatest respect, and he was always welcome to eat and sleep at their camp.”
Expanded web version of Bookshelf, with selected new books written by UO faculty members and alumni and received at the Oregon Quarterly office. Quoted remarks are from publishers’ notes or reviews.
Along the Trail to Thunder Hawk (CreateSpace, 2011) by Sharon Rasmussen ’67 and George Gilland. “A novel based on the true adventures of a young man trying to embrace his White and Lakota (Sioux) heritage while finding a place in the rapidly changing America of the early twentieth century.”
The Archaeology of North Pacific Fisheries (University of Alaska Press, 2011) edited by professor of anthropology Madonna L. Moss and Aubrey Cannon. “Covering Alaska, British Columbia, and the Puget Sound, The Archaeology of North Pacific Fisheries illustrates how the archaeological record reveals new information about ancient ways of life and the histories of key species.”
An Archaeology of Desperation: Exploring the Donner Party’s Alder Creek Camp (University of Oklahoma Press, 2011) by Julie M. Schablitsky, senior research archaeologist at the Museum of Natural and Cultural History, Kelly J. Dixon, and Shannon A. Novak. “Centered on archaeological investigations in the summers of 2003 and 2004, this book includes detailed analyses of artifacts and bones that suggest what life was like in the survival camp.”
Becoming Who We Are: Temperament and Personality in Development (Guilford Press, 2011) by Mary K. Rothbart, professor emerita of psychology. In her latest work, Rothbart “not only explains basic and advanced concepts of temperament, but also beautifully shows how a temperament framework can enrich understanding of social development.”
Countercultural Conservatives: American Evangelicalism from the Postwar Revival to the New Christian Right (University of Wisconsin Press, 2011) by Axel R. Schäfer, MA ’89. “Carefully examining evangelicalism’s internal dynamics, fissures, and coalitions, this book offers an intriguing reinterpretation of the most important development in American religion and politics since World War II.”
Evensong (Truman State University Press, 2011) by Ingrid Wendt, MFA ’68. A book of poems that showcases this classically trained musician-turned-poet’s talent for “making poems sing.”
Evolutionaries: Transformational Leadership: The Missing Link in Your Organization (Inkwater Press, 2011) by Randy Harrington, PhD ’92, and Carmen E. Voillequé ’97. Evolutionaries “describes the characteristics of evolutionary leaders—people who lead organizations through transformational change.”
Fascinating Mathematical People: Interviews and Memoirs (Princeton University Press, 2011) by Gerald L. Alexanderson ’55 and Donald J. Albers. Including informal interviews and memoirs with sixteen leading members of the mathematical community, this book illustrates the unifying power of math.
Hidden History of Civil War Oregon (History Press, 2011) by Randol B. Fletcher ’80. While “many Oregonians think of the Civil War as a faraway event or something that happens when the Ducks and the Beavers tangle,” Fletcher “explores the tales behind the monuments and graves that dot” the state’s landscape.
In the Shadow of Melting Glaciers: Climate Change and Andean Society (Oxford University Press, 2010) by Mark Carey, Robert D. Clark Honors College assistant professor of history. Winner of the Elinor Melville Prize for Latin American Environmental History, this book explores Peru’s Cordillera Blanca mountain range where global climate change has resulted in “severe environmental, economic, and social impacts,” killing 25,000 people since 1941.
Joe Rochefort's War: The Odyssey of the Codebreaker Who Outwitted Yamamoto at Midway (Naval Institute Press, 2011) by Elliot Carlson '61. This biography is "the first to be written about the officer who headed Station Hypo" at Pearl Harbor. It's the book that, critics say, "all who are interested in the Battle of Midway have literally been waiting decades to read."
Northwest Coast: Archaeology as Deep History (Society for American Archaeology Press, 2011) by Madonna L. Moss, professor of anthropology. An overview of archeology along North America’s northwest coast, this book offers the argument that the area’s hunter-gatherers were complex food producers worthy of further modern study.
Oregon Archaeology (Oregon State University Press, 2011) by C. Melvin Aikens, professor emeritus of anthropology; Thomas J. Connolly, MS ’80, PhD ’86, director of archaeological research at the UO’s Museum of Natural and Cultural History; and Dennis L. Jenkins, PhD ’91, the museum’s senior research archaeologist. Called “an essential reference” for those interested in the field, Oregon Archaeology “incorporates new archaeological research, telling the story of Native American cultures in Oregon.”
Oregon Coast Bridges (North Left Coast Press, 2011) by Ray A. Allen ’65. With more than 200 pages, this book showcases black-and-white photography alongside historical anecdotes about forty of “the most interesting and significant bridges along the Oregon Coast Highway, from Astoria to the California border.”
Sanderlings (Tupelo Press, 2011) by Geri Doran, assistant professor of creative writing. Doran’s second collection of poems offers a “variety of expression” with pastoral, oracular poems that “don’t fix on a single way to regard our sense of living.”
Social Perspective: The Missing Element in Mental Health Practice (University of Toronto Press, 2011) by Richard U’Ren ’60. The author “shows the ways in which the organization and dynamics of society contribute to either personal well-being or distress,” concentrating on the relationship between mental health and social class.
10,000 Years of Shoes: The Photographs of Brian Lanker Edited by Jon Erlandson and Sarah McClure (University of Oregon, 2011). The book is available at the Museum of Natural and Cultural History and the campus Duck Store for $34.99. To order by mail, contact Ashley Robinson at the museum, 541-346-5331.
Atlas of Yellowstone by W. Andrew Marcus, James E. Meacham, Ann W. Rodman, Alethea Y. Steingisser (University of California Press, 2012)
Re-Dressing America’s Frontier Past by Peter Boag (University of California Press, 2011)
News, Notables, Innovations
UO psychology professor Helen Neville and her research team take human, sheep, and dog brains to the Oregon Country Fair and Eugene Ems baseball games. They encourage the public to examine them up close and ask questions. They promote their DVD on brain development and hand out sponge brains to anyone who wants one. Their take-home message is this: Environment and experience can change the brain. Biology is not destiny.
Neville is the director of the Brain Development Laboratory. Experiments carried out by her team of postdoctoral researchers, PhD and MS candidates, and undergraduate research assistants show that children from relatively well-off families have better-developed brains than children from poor families. A critical consequence is that underprivileged kids have difficulty in focusing their attention on important information and tuning out distractions. Without the ability to concentrate, they frequently endure a lifetime of diminished literacy, numeracy, attention span, and emotional development.
Recent U.S. census statistics show that the number of people living in poverty has increased during the past four years and is now at a historical high. More than nine million American families—with nearly sixteen million children—are affected.
“Eighty-three percent of kids living below the poverty line don’t graduate from high school,” Neville says, adding that “there are crucial quality of life issues for individuals and great economic costs to society,” increasing the likelihood that “they don’t get jobs and commit crimes.”
Not satisfied with simply observing low cognitive function in poor kids, Neville’s team has devised strategies to overcome it. They teach at-risk kids how to focus their attention on classroom tasks. They also teach parents how to help their kids at home. “After only eight weeks of intervention, underprivileged children show brain function for attention similar to that found in peers from higher-income parents,” says postdoctoral research associate Eric Pakulak ’90, MA ’97, ’01, MS ’02, PhD ’08.
Neville says her group is one of a handful studying how to help children develop their brainpower to stop the poverty cycle. University-based cognitive neuroscientists often ignore this segment of society, finding research subjects by offering academic credit or cash to undergraduate psychology majors. “These are high socioeconomic status kids,” Neville says. “It’s not accurate or scientific to characterize the brain based on this small proportion of the population.”
Many children suffer from chronic stress brought on by living in poverty. This stress, asserts Pakulak, is a major culprit behind their cognitive disabilities. “It’s toxic,” he says. “It shrinks the part of the brain associated with learning, long-term memory, long-term planning, evaluating choices, and inhibiting bad ones.”
Neville began recruiting three-to-five-year-old children for neurological assessment in 2004 through Lane County’s Head Start preschool program, which promotes intellectual and emotional development in at-risk children. At the start of the fall, winter, and spring terms, Neville and her team meet with parents, explain their work, and encourage them to participate.
For the children who are signed up, the brain development staff makes sure they have a fun time while in the lab. A research team member escorts parents and their children into a room decorated with Winnie- the-Pooh stickers. A toy box filled with books, puzzles, blocks, and puppets awaits exploration. The kids are encouraged to play and munch on Goldfish crackers.
As a child who has come for testing settles down, a research assistant slips a perforated swim cap over his head. The cap bristles with thirty-two electrodes, which shoot out in all directions. Once the child is accustomed to the hat, he is escorted into a second room and seated in a cushy chair that faces a video screen. Speakers sit at ear level on shelves to the right and left of the chair. The researcher gathers the wires leading from the cap’s electrodes and plugs them into an amplifier. “We tell the kids it’s like a stethoscope, but we’re listening to their brain-beat, not their heartbeat,” says research associate Courtney Stevens, MS ’03, PhD ’07.
The child is then asked to watch a cartoon and focus on the narration coming from the right speaker. Simultaneously the subject hears a different story, which has no connection with the cartoon, coming from the left speaker. By examining specific brainwaves recorded by the electrodes, researchers can distinguish how well the child can tune out the distraction coming from the left speaker and tune into the story line coming from the right speaker.
Typically, kids from higher socioeconomic families suppress distractions better. Kids from low-socioeconomic backgrounds struggle. “It’s basically impossible for a child to learn in a classroom if she can’t tune into what her teachers are saying and tune out other students’ disruptive behavior,” Stevens says. “That’s why we believe selective attention is so important. Learn to focus your attention, and you are then prepared to learn anything.”
Back at Head Start, children continue their normal curriculum during the day. But one night per week for eight weeks, they return for two-hour enhanced-learning sessions accompanied by their parents. Scott Klein, a Brain Development Lab research assistant with ten years of grade-school teaching experience, coaches parents on communication skills. He advises them to create routines. Daily rituals allow kids to predict what will happen and how to respond, Klein says. Their stress levels go down. And when children cooperate, parents’ stress levels also go down.
Klein teaches parents to give their kids choices: Do you want to put on your pajamas first or brush your teeth first? Do you want to pick up your toys before or after dinner? “This is a huge first step in gaining kids’ attention,” Klein says. “Choices engage the thinking process.”
Klein also developed the "Brain Train," the method used to increase kids' concentration skills. Initially, the children sit with crayons and color. Then, they learn what distractions are, usually through puppet shows and role-playing. At the next level, one group of kids will color while others stand at the edges of the classroom and play with balloons. Head Start teachers encourage the children to keep coloring. The groups switch their roles. At the end of the eight weeks, kids are able to color while others are standing right next to them bouncing balloons in their hands. Neville attributes children's increased ability to remain focused not only to these exercises but also to the change in their parents' behavior.
Single parent Matt Dillender says the intervention provided him with group support from other parents and taught him new ways to communicate with his child. “Over time I have seen a clear, positive impact on my son’s emotional health as well as my own.”
The Brain Development Lab is a busy place. Every eight weeks, new recruits come in for initial brain-wave measurements. Children who have been tested previously and completed the intervention return for follow-up measurements. The Neville team has worked with about 400 families so far. They made a DVD, called Changing Brains, which explains their research. Neville hopes the program can continue and help more kids.
The Institute of Education Sciences recently awarded her grants to follow Head Start kids long term and translate the entire research project into Spanish. She’s previously received funding from the National Institutes of Health. However, Neville now calls the funding organization “broken” because grants are increasingly difficult to come by. She admits that the Brain Development Lab may not attract sufficient funding to continue supporting its thirty full-time employees. Neville says she is frustrated with the situation.
Forty or fifty years ago, scientists thought that brains were fixed, unchangeable. Researchers in the Brain Development Lab have not only helped to dispel that myth, but they are now exploiting the organ’s changeability to level the playing field for disadvantaged children. “Our research is being used to make a difference in the world,” Neville says.
—Michele Taylor MS ’03, ’10
WEB EXTRA: To see the Brain Development Lab’s video production Changing Brains, visit OregonQuarterly.com
An international team of scholars is racing against time to digitize thousands of Arabic-language texts from Yemen—some dating to the eleventh century—and make them available online before the original manuscripts either fall to pieces or are confiscated or destroyed. The texts—law, history, literature, and grammar—reflect a tradition in Islam practiced by the Zaydi sect of Shi’ites. Some of the manuscripts exist nowhere in the Muslim world outside this Arabian Peninsula country, where the Zaydi tradition has been out of favor since Yemen’s current borders were established in the 1960s.
For two of the scholars, Ahmed Ishaq and Abdul Rahman Alneamy of the Imam Zaid bin Ali Cultural Foundation, located in Yemen’s capital, Sana’a, it’s nothing short of a mission to preserve their cultural heritage, page by digital page. They’ve been laboring for more than a decade, obtaining manuscripts by donation, loan, or purchase from mosques and private collections. Even so, 10,000 manuscripts have disappeared during that span of time before they could save them, according to another team member, David Hollenberg, UO assistant professor of Arabic language and literature. “I’ve interviewed families who have had their entire libraries seized” by religious extremists, he says.
Hollenberg regards Ishaq, Alneamy, and the entire staff at their foundation as heroes. He learned of their work in 2006, while he was in Yemen as a doctoral student to conduct his dissertation research. At the time, they were only working with a simple digital camera. Four years later, when Hollenberg was an assistant professor at James Madison University in Virginia, he teamed up with the digital collections specialists at Princeton University to obtain a $300,000 grant from the National Endowment for the Humanities to purchase state-of-the-art equipment for the foundation. The grant created the Yemeni Manuscript Digitization Initiative (YMDI), which Hollenberg directs from his current post at the UO, where he has taught since 2010.
The grant money enabled the foundation to buy an archive-quality digital camera, lighting equipment, hard drives, and a generator, and receive professional training in archiving and cataloging. Nevertheless, many circumstances in Yemen make their work difficult, if not flat-out dangerous. For one thing, electricity is unreliable, which severely limits the amount of time they can devote to scanning. (“I was very lucky today, we had power for two hours,” Ishaq wrote by e-mail from Sana’a when contacted for this story.) They have the generator for when the power goes out, but that requires fuel, another scarce resource. Last year, when the wave of popular uprisings in North Africa and the Middle East, known as the Arab Spring, swept through Yemen and ultimately toppled a president who had held power for three decades, their work came to a halt because their foundation was located in the heart of the civil unrest.
Whenever they do manage to fill a hard drive with digitized manuscripts, there’s still the logistical challenge of getting it to Princeton, whose library houses the YMDI collection. Direct commercial shipping is out of the question, following a November 2010 incident when Yemen-based terrorists attempted to ship parcel bombs to the United States via the courier company DHL, causing DHL and other freight companies to discontinue service. So the first hard drive had to be routed through Saudi Arabia. They’ve found other resourceful methods, too. Clifford Wulfman, Princeton’s coordinator of library digital initiatives, says the Yemen-based members of the team once gave a hard drive to a man in Yemen who happened to be on his way to New Jersey, and Wulfman drove to the volunteer courier’s home to pick it up. “Since then,” Wulfman says, “we’ve managed to make contacts with the diplomatic corps” through a former U.S. ambassador to Yemen who works at Princeton. The contact has enabled Ishaq to send material out via diplomatic pouch. (Diplomatic relations between the United States and Yemen have been strained since the DHL incident. Nevertheless, says Hollenberg, “the U.S. Embassy in Yemen has been very helpful to us.”)
When Wulfman receives the hard drives, he adds the images to Princeton’s already-extensive digital library of Islamic manuscripts, and catalogs their descriptive information (what digital archivists call metadata) so that search engines can find them. Here at the UO, Hollenberg teams up with the Wired Humanities program to convert them from static, read-only images into interactive documents that allow readers to click on sections of text, open pop-up windows, and log descriptive information such as the scribe’s name, when and where the manuscript was written, and who its various owners have been over time.
These technical operations may lack the intrigue and adventure animating the work of the people at the Imam Zaid bin Ali Cultural Foundation, who brave revolution and dodge the authorities to usher their culture into the digital age. But the YMDI is slowly building into a dynamic teaching resource that Hollenberg is thrilled to make available for students and scholars of medieval Islam. “This is like going back to grad school for me, because I’m reading everything again,” he says. This past winter term, students in his Arabic literature course began transcribing pages (in Arabic) to create word-searchable versions of the texts. During one class session, they had the unique opportunity to converse with a scholar in Yemen familiar with the manuscripts and their relevance in contemporary Zaydi society via a digital link provided by the U.S. Embassy in Sana’a.
Hollenberg’s fascination with these manuscripts lies in the Zaydi tradition of incorporating the commentaries of intervening generations of readers when interpreting the meaning and significance of texts. When he brings up some of the images on his desktop computer, this commentary tradition literally shows up in the form of notes handwritten on blank pages and crammed into margins at odd angles and patterns in every conceivable color of ink. They were written by the people who owned the manuscripts over the years—often centuries—since they were bound and published, and they offer interpretation and other commentary on text passages. Contemporary Zaydi adherents reading the manuscripts today, says Hollenberg, treat the commentaries as important insights that they have to know to understand the text. (Contrast this, he says, with a strict fundamentalism—in any religion—that insists on relying solely on the original text for meaning.)
Suppression of Zaydi texts in Yemen since the 1960s caused a decline in this commentary tradition, says Hollenberg. However, he adds, “a lot of young [Zaydi] scholars are picking up the mantle again,” and he believes YMDI has the potential to help this revival along. The means for doing this may cause academic purists to cringe, but Hollenberg is enthusiastic about the phenomenon of crowd-sourcing (à la Wikipedia and other public-input sites). He envisions the YMDI manuscript collection being accessed worldwide by readers of Arabic, who would be given free rein to provide online transcriptions. “Young Yemenis might be very interested in this,” he believes, especially those who are taking part in the Zaydi revival, and he plans to pass the word along if he ever gets another chance to visit Yemen. That could happen as soon as this summer if the situation there calms down, but friends have warned him that the current postrevolutionary atmosphere is too volatile to be considered safe.
That sounds like a much different David Hollenberg than the doctoral student who originally went to Yemen for some quiet research in a mosque library, and he admits to being inspired by the sense of mission driving his Yemeni colleagues. “This is an important moment in these peoples’ history, and I feel privileged to help with their aims,” he says. The manuscripts embody “a profound intellectual heritage,” he adds, and insists that “it would be a tremendous loss for everyone if these texts were to be lost.”
—Dana Magliari, MA ’98
WEB EXTRA: See a video of David Hollenberg discussing the Arab text preservation project at OregonQuarterly.com
The UO community was shocked in late November when the State Board of Higher Education voted unanimously to terminate the contract of UO President Richard Lariviere at the end of December, thirty months after he became the University’s sixteenth president. The action came a week after board chairman Matt Donegan told Lariviere that his contract would not be renewed, causing many UO faculty members, alumni, and students to rally against that decision and in support of Lariviere.
Robert Berdahl, a former professor and dean at the University of Oregon; an administrative leader at the University of California, University of Illinois, and University of Texas; and a national leader in higher education has been named interim president by the state board.
Lariviere had clashed with Oregon University System chancellor George Pernsteiner and the board on issues that won him great support among University faculty and staff members and alumni. He was the leading advocate for the combination of proposals known as the New Partnership that would have created a local governing board for the UO and established a new and potentially more stable system of funding the University (see newpartnership.uoregon.edu). He allowed the UO’s union employees to make up wages lost to state-mandated furloughs by working overtime. And he approved pay raises for faculty members and administrators without board approval.
“This turn of events is a result of the ongoing difference of opinion over the future of the UO,” Lariviere said in a message to the campus community.
The University Senate Executive Committee called an emergency session on November 23 and initiated a petition to support Lariviere that garnered more than 6,000 signatures in less than a week. A letter to the state board from the senate said, “The spontaneous and widespread outcry of support for President Lariviere . . . demonstrates that he inspires deep and passionate commitment among those who carry out and support [the] UO’s teaching and research mission. . . . The state board’s plan to remove President Lariviere without first consulting the [U]niversity community demonstrates a profound lack of understanding about [the] UO’s educational mission.”
“I am humbled by your support, but your cause should not be my employment status,” Lariviere said in a November 27 e-mail to students and faculty and staff members. “Your cause must be how institutions like the University of Oregon can be strong in a state with weak public resources.”
Last spring, the UO gave pay raises to 80 percent of tenure-track faculty members, 20 percent of nontenure-track faculty members, and 33 percent of administrators to address issues of equity and retention. That may have been the final straw for board members, who in June 2011—before the raises were announced—had already demonstrated frustration with Lariviere by adding conditions to his contract limiting his advocacy and requiring more participation with the state board.
Governor John Kitzhaber, who supported the board’s action, said of the pay raises, “[Lariviere’s] decision not only undermined the board, it undermined my own directive and the credibility of my administration with the other campuses that complied with the agreement” not to raise salaries. At the board hearing when Lariviere’s contract was formally terminated, board chairman Donegan spoke of a “deeply dysfunctional dynamic” between Lariviere and the board. “This has been brewing for so long,” Donegan said. “It’s horrific, like you are seeing a train wreck.”
Lariviere received three extended standing ovations during a brief appearance at an emergency meeting of the statutory faculty on November 30 at Mac Court attended by more than 1,000 members of the University community—as well as Chancellor Pernsteiner and state board member Lynda Ciuffetti, who fielded angry questions and comments from audience members. That assembly passed motions condemning the firing of Lariviere and calling for UO involvement in the search for a new president, for an independent governing board for the University, and for the UO Senate or its executive committee to recommend someone to serve as interim president.
Lariviere, who is a tenured faculty member at the UO, plans to return to teaching next fall.
Berdahl Steps In
Within days of the board’s decision to terminate Lariviere’s contract, Robert Berdahl emerged as the only candidate supported by the University Senate to serve as interim president. He was appointed by a unanimous vote of the state board on December 9, despite the fact that he had written a strongly worded criticism of Lariviere’s dismissal in The Register-Guard nine days earlier. “The chancellor and board have recklessly ignored the wishes of donors, alumni, faculty, and students,” he wrote. “They have signaled the academic community throughout the nation that innovative, courageous leadership will neither be sought nor tolerated.”
In a message to campus after his appointment, Berdahl vowed, “I am . . . moved to carry forward the important agenda President Lariviere has outlined for the campus.”
Berdahl was a history professor at the UO from 1967 to 1986 and served as dean of the College of Arts and Sciences from 1981 to 1986. He then spent seven years as vice chancellor for academic affairs at the University of Illinois at Urbana-Champaign, four years as president of the University of Texas at Austin, and seven years as chancellor at the University of California at Berkeley. He became president of the Association of American Universities in May 2006 and served until his retirement in June 2011. Last fall, Lariviere, who had worked with Berdahl at the University of Texas, persuaded him to come out of retirement to take a part-time advisory position at the UO.
In a state of the university address in January, Berdahl, who has agreed to serve only until September 2012, outlined the three priorities of his presidency: First, to assist in the process of hiring "top-notch" faculty members, building on momentum begun under Lariviere to "seize the moment to hire the very best." Second, to do all he can to ensure the hiring of a "strong, visionary leader" to be the next president. And third, to advance the project of gaining an independent governing board for the UO, which he said was essential to maintain morale, to attract the best faculty members, to make the most effective use of UO resources, and to recruit a strong new president.
A twenty-one person search committee for the new president was formed in early February. Headed by Allyn Ford, a member of the State Board of Higher Education and president of Roseburg Forest Products, it included three UO students and ten faculty or staff members.
As Oregon Quarterly went to press, Governor Kitzhaber and a committee of the state legislature were supporting a plan that could lead to consideration of local governing boards in the 2013 session.
—Guy Maynard ’84
Web Extra: Read UO Interim President Robert Berdahl’s complete state of the university address at OregonQuarterly.com.
A world away from the noisy, bustling food court of the Erb Memorial Union, the Taylor Lounge sits neatly tucked between a staircase to the EMU Ballroom and glass doors leading to the Mills International Center. The size of a classroom (with soft carpet underfoot), the lounge features tall windows that allow friendly sunlight to flow inside. Here, a student can slip away from the stressful world of academia and take a peaceful rest, without leaving the campus completely. When it’s cold and rainy outside, the room can be a refuge for an entire flock of wet, weary students.
Long ago, this space was known as the Leather Lounge, due to its collection of prim and proper leather chairs. But it was renamed to honor the memory of Thomas H. Taylor '41, who died while commanding a bombing raid in France in early 1943. His portrait now hangs in the lounge, kitty-corner to a patchwork quilt created collectively by members of the University's clubs. The lounge has grown cozier as the years have gone by, and it is now filled with plush couches and chairs, some so worn that the fuzz is now flat. Add in artificial plants, and the Taylor Lounge looks like a student's funky basement apartment—the perfect place to crash on a couch.
What’s amazing is how strongly the unspoken no-talking rule is enforced. If a cell phone goes off or people start chatting loudly, many dirty looks are thrown. There is no sign on the wall. There is no book of regulations for the Taylor Lounge, but if a group is loud while others are trying to work or sleep, a wave of narrowed eyes will fly their way.
The Taylor Lounge is a safe place, where students scrunch up their faces in concentration and even the most self-conscious “cool kid” feels free to sleep sprawled out, vulnerable to the world. When the weather turns cold, someone will occasionally light a fire in the fireplace, sealing the room against the rainy world outside.
“The Best . . .” is a series of student-written essays describing superlative aspects of campus. Brit McGinnis (napping above in the Taylor Lounge) is a senior psychology major.
Arthur R. Miller has had the sort of career that would take most people three lifetimes to achieve. A noted legal scholar and accomplished law professor, he is perhaps most widely known outside legal circles for his role as a celebrity television jurist. On April 13, the Oregon Law Review and the University’s law school will welcome Miller to the White Stag Block for a daylong symposium dedicated to Miller’s impact on the law and questions of access to civil courts. Law Review editor in chief Nadia Dahab writes, “[Miller’s] work, at bottom, is about empowering individuals in the civil justice system. The general theme of our symposium, therefore, is access to justice, and we will address that theme in the context of emergent rights particularly relevant to the Oregon civil law landscape. In Oregon, and at Oregon law, access to justice in our civil courts is especially resonant with the public interest and environmental causes that many of our lawyers undertake.”
Miller spent thirty-six years as the Bruce Bromley Professor of Law at Harvard Law School, where his fierce, dramatic teaching style and notoriously demanding course work made him the stuff of law school legend—an experience numerous Oregon law professors can attest to personally. He has written more than forty books, including the landmark Federal Practice and Procedure (with coauthor Charles Alan Wright). With his sharp wit, encyclopedic knowledge of civil procedure, and trademark red pocket squares, Miller spent two decades as on-air legal editor for Good Morning, America and has won numerous awards for his work as a moderator of seminars exploring public policy and legal issues on PBS and the BBC. He currently teaches at New York University and also serves as Special Counsel to Milberg LLP, a pioneer in the field of class-action lawsuits.
“Miller’s Courts: Media, Rules, Policy, and the Future of Access to Justice” will feature a broad roster of distinguished speakers and panelists from academic, legal, judicial, and media backgrounds. Students, alumni, and interested members of the public are encouraged to attend. For more information, and to register, visit law.uoregon.edu/org/olr/symposia.
—Mindy Moreland, MS ’08
Portland2012: A Biennial of Contemporary Art | March 31 – May 19, 2012
The White Box visual laboratory at the UO’s White Stag Block in Portland will be one of several exhibition spaces around the city showcasing contemporary art by local and regional artists (among them UO faculty members).
For details about Portland2012 visit www.disjecta.org
Li-soo-too. That’s how to say the name of the country where Yvonne Braun has based her research for the past fifteen years. Even though the southern African nation (spelled Lesotho) is more than 10,000 miles from the University of Oregon, Braun has no problem making a place smaller than Maryland interesting to her students.
“At the start of the term, they very often don’t know where Lesotho is or how to pronounce it,” she says. “They’re usually pretty intrigued by my work there because it’s got so many dimensions. It lends itself well to teaching.”
Specifically, Braun focuses on the local influence of the multibillion-dollar Lesotho Highlands Water Project, the biggest World Bank–funded dam in Africa. “People are being resettled and losing land and livelihoods,” she says of the project. “It’s radically reorganizing resources in the region.” These examples of literally life-and-death importance inspire energized and engaged conversation in the classroom when Braun presents her work from the frontlines of modern sociology.
“I find my students have a real desire to think in applied ways,” she says. “That can be really fun in terms of seeing them think about how to take abstract ideas and to design projects that allow them to see those issues in the world.”
A proponent of group work, Braun cites an example from 2009 as her most successful use of taking learning outside class. She and her Sociology of Africa students created an exhibit for African Cultural Night, an annual celebration hosted by the UO’s African Student Association that typically attracts 500 to 600 people.
“The students actually exhibited the group projects that they did for class,” Braun says. “They were at their stations and got to talk to all of these different people. They became part of the night.”
The response, she recalls, was amazing. Instead of simply focusing on the images of disease, poverty, and famine commonly associated with Africa, Braun had her students explore the continent’s positive changes such as education growth and economic development. The result: audience members walked away excited by how students “complicated the way in which Africa is represented.”
A memento from the evening, a promotional poster, still hangs in Braun’s office. After pointing it out on the wall, she explains why projects like African Cultural Night are important to her.
“Part of what I love about teaching is getting students excited thinking about the world,” Braun says. “Wherever they decide to focus their passions, I just want them to realize that they can be active in creating the kind of world that they want to have, whatever that looks like for them.”
Name: Yvonne Braun
Education: BA ’94, State University of New York at Geneseo; MA ’00, University of California at Irvine; PhD ’05, University of California at Irvine.
Teaching Experience: Joined the UO faculty in 2005.
Awards: Recipient of the 2010–11 Ersted Award for Distinguished Teaching.
Off-campus: The mother of a three-year-old, Braun volunteers at her daughter’s school and local nonprofits like Food for Lane County.
Last Word: “My goal is to get students thinking about being active in the world rather than simply seeing the world as something that they’re just in.”
Of Roses and Recruiting
About 500 prospective students from across Southern California attended an information session hosted by the UO Office of Enrollment Management before the UO Alumni Association's Rose Bowl pep rally at the Santa Monica Pier on New Year's Day. Here, Vice President for Student Affairs Robin Holmes (center) talks with potential future Duck Jerel Rogers of Murrieta, California.
In a first-ever honor for a student from the UO, Katie Dwyer ’10 has won a prestigious Mitchell Scholarship for academic excellence, leadership, and community involvement. Dwyer, a second-year master’s student in the UO School of Law’s conflict and dispute resolution program, will study international human rights law as a Mitchell Scholar in Ireland.
Seven Oregon student athletes have recently earned national Academic All-American standing, making a total of fifty-nine Ducks ever to achieve the honor.
The UO Institute of Neuroscience has received a $16 million grant from the National Human Genome Research Institute for five years of continued funding of ZFIN, the zebra fish model organism database, a vital resource for biomedical researchers worldwide.
Phyllis ’56 and Andy Berwick ’55 have committed $10 million in support of a planned renovation and expansion of the Erb Memorial Union and Student Recreation Center.
By earning $7.5 million in license income on $115.6 million in research expenditures, the UO was ranked sixteenth nationally among colleges and universities in “innovation yield” (the rate at which research is turned into revenue) by the Association of University Technology Managers.
The Historic Preservation Program in the UO School of Architecture and Allied Arts has received a $2.8 million gift from Art DeMuro, a Portland-based developer and salvager of neglected buildings. The gift will substantially advance preservation studies at the UO.
The UO is one of the 100 best values among more than 500 public colleges and universities because of “its high four-year graduation rate, low average student debt at graduation, abundant financial aid, a low sticker price, and overall great value,” according to the annual ranking by Kiplinger’s Personal Finance magazine.
Two UO professors, Michael Haley (chemistry) and Craig Young (marine biology), are among 539 fellows of the American Association for the Advancement of Science named this year. Chemistry associate professor Marina Guenza is among 238 scientists selected as 2011 fellows by the American Physical Society. The National Academy of Sciences has awarded UO professor emeritus of psychology Michael Posner the 2012 John J. Carty Award for the Advancement of Science for his contributions in the area of children's brain function and for his pioneering research on brain imaging.
A Guide to Photographing Sites
This document also provides some basic photography advice and instructions for contributing your digital images for web publication on the Perseus Project web site.
In writing this document we have taken into account the Dublin Core metadata standards.
Methods: shooting new photographs
General photography advice
Pay careful attention to light. Plan your day of shooting so that the sun will be in a helpful place for as much of the day as possible. For instance, when the sun is low in the eastern sky, in the morning, do not take views looking east at the west side of a building, which will be back-lit. Save those views for later in the day, when the sun will have moved to better illuminate it.
Low angled light at sunrise and sunset is good for showing textures of surfaces, for modeling the side of your subject, and for casting useful long shadows. Overhead light at midday can be overly contrasty, causing dark shadow areas and bright highlights. Use caution when shooting at midday; film, especially slide film, is not as good as the human eye in recording the large difference between bright and dark areas. Your camera's light meter will have trouble finding a good average exposure in this situation. Digital cameras in particular often have trouble registering the full range of detail in a highlight or a shadow area. Two ways of dealing with this are 1) narrow the tonal range in the picture, by reframing the scene to eliminate deep shadow areas or bright highlight areas, or 2) expose for the highlights in the picture.
Get close to the subject and isolate it, if possible. Fill your viewfinder, and keep the background simple. In many cases, an overview and then a close-up of a feature from the same point of view can be very helpful; for example, when you take a picture of a block with clamp holes, also take a detail picture of the holes.
Before you take the picture, check to be sure the horizon line in your picture is going to be level.
Take notes about the pictures you take, in sequence, and label your rolls of film with a permanent pen. The better your notes as you shoot, the easier it will be back home to caption the pictures and put them on line. Note down the direction you're shooting in, and the name of the structure, as well as any special features visible (e.g. White House, detail of columns at N entry, looking N).
Film travels well in plastic tupperware containers. Remove rolls from their canisters and stack them in the tupperware, to travel as light as possible. The tupperware will also seal out dirt or moisture.
When in doubt, take the picture. Film is cheap but plane tickets are not. If the light is hard to read or you're not sure about the exposure, then bracket: take 3 pictures, one 1/2 stop over, one 1/2 stop under, and one at the indicated reading. Yes, you can set your camera's f-stop ring in between 2 apertures, and it will work fine (in fact most aperture rings open and close continuously; the detentes are there for convenience but you can actually shoot in 1/2 stops or smaller increments). Example: if your meter says you should shoot at f11, you can bracket by shooting at f11.5, f11, and f8.5.
Use the same type of film for all your shooting. Buy it all at once, and check for the same batch number or expiration date on the box, to guarantee as much color consistency as possible. Avoid high-speed films with ratings above ISO 400, if possible. Use a tripod or monopod, or brace the camera against a wall, to minimize shake, especially with telephoto lenses.
Please direct all questions and comments regarding photography Standards and Practices to:
Methods: digitizing your slides and photographs
Converting slides or prints into digital pictures requires three essentials: a decision on file size, based on an understanding of the elements controlling the image's appearance, a consistent system of work flow, and a plan for the longevity of the images. It is also helpful to distinguish between the archival images you create and the delivery images people will see on-line; good scanning practice creates an archive of large images from which smaller images can be easily derived.
Some basic terms
Resolution refers to the number of pixels in the image. Sometimes expressed as a total number of pixels, other times listed as a ratio of height and width dimensions, the number of pixels determines the quality of the picture. In print terminology, resolution is expressed as dpi, dots per inch. Every film and flatbed scanner has a maximum resolution; choose your input device carefully by comparing these resolution numbers, along with other crucial features listed below.
Interlinked with resolution, file size and dimensions are other important ideas to keep in mind. A low-resolution version and a high-resolution version of a digital image might have the exact same file size; for example, an image that is about 2.91 mb in size could be displayed either as a 14-inch square picture at 72 dpi, or a 2.52-inch square picture at the much higher resolution of 400 dpi. In both cases, the file size is the same, because the amount of information is the same, it's just being deployed differently.
These factors become important when you consider how the picture will be seen by your audience. For a long time, monitor display resolution hovered around 640 x 480; now, a more typical monitor might be capable of 1024x768 display. In other words, images displayed on current monitor technology do not need to be very high resolution. If you need to print out your pictures, though, or want them to meet very long-term archival standards, scanning them at a high resolution, and creating very large files, will be a priority. For instance, to complete the example above, to get a 14-inch square picture at 400 dpi, you would need about 89.7 mb of storage space for a single image.
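To make the arithmetic above concrete, the following short Python sketch (not part of the original guide) reproduces the quoted figures from print dimensions and resolution, assuming uncompressed 24-bit RGB at 3 bytes per pixel; compressed files will of course be smaller.

# Approximate uncompressed file size from print dimensions and resolution,
# assuming 24-bit RGB (3 bytes per pixel).
def uncompressed_size_mb(width_inches, height_inches, dpi, bytes_per_pixel=3):
    width_px = round(width_inches * dpi)
    height_px = round(height_inches * dpi)
    return (width_px * height_px * bytes_per_pixel) / (1024 * 1024)

print(uncompressed_size_mb(14, 14, 72))       # ~2.9 MB: 14-inch square at 72 dpi
print(uncompressed_size_mb(2.52, 2.52, 400))  # ~2.9 MB: the same pixels at 400 dpi
print(uncompressed_size_mb(14, 14, 400))      # ~89.7 MB: 14-inch square at 400 dpi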
In some cases, a large file size is necessary; for instance, the Perseus Project scanned page images of Shakespeare's First Folio from the Brandeis University Library, and since the source was so rare and fragile, we decided to make 75 mb archival images of each page, calculating that this level of resolution would be sufficient to guarantee that the book need not be handled again soon. This magnitude of file size is still impractical for most projects, including Perseus for its regular archiving practice, because the cost of storage space, backup devices, and hardware for image manipulation becomes prohibitive. A reasonable approach is to factor in these costs with the ultimate destination of the images, and aim for the highest resolution possible within those limits. For example, if you know you will ultimately use the scanned images to create high-quality printed pictures, you will need more resolution. But if your goal is to create an on-line archive for teaching purposes, your archival resolution can be lower.
Color accuracy is another important factor. Monitors each have their own characteristic display colors, contrast, and brightness, and some scanners handle tonal range better than others. Make sure you understand the characteristics of your own hardware, and calibrate your monitor; one handy calibration control panel, called Gamma, comes with Adobe Photoshop. The Windows and Mac operating systems have different gamma, or contrast curve, settings for monitors. An image that looks lovely on your Mac may possibly look too contrasty or too dark on your Windows machine. Though most of the people who will view your images will have uncalibrated monitors, and probably all different operating systems, you can control your own production hardware and ensure you get good results for your archive. When you shop around for scanners, be sure to compare the results of the same image from one scanner to another. You will see obvious differences from product to product, in both the color reproduction and in the scanner's ability to capture full detail, most obviously in bright highlight areas and deep shadows.
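As a rough illustration of what a gamma, or contrast curve, setting does to pixel values, the short Python sketch below maps 8-bit values through a simple gamma curve. It is only an illustration, not the Photoshop Gamma control panel, and the 1.8 and 2.2 figures are the targets commonly associated with older Mac and Windows displays rather than values taken from this guide.

# Map an 8-bit pixel value (0-255) through a simple gamma curve.
def apply_gamma(value, gamma):
    normalized = value / 255.0
    return round(255 * normalized ** (1.0 / gamma))

# Compare a Mac-style 1.8 curve with a Windows-style 2.2 curve:
for v in (32, 128, 224):
    print(v, apply_gamma(v, 1.8), apply_gamma(v, 2.2))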
A variety of image formats are available for storing your images. A very commonly used format is TIFF, a format that can be stored either uncompressed or with lossless compression. Unlike TIFF, the JPEG format uses a compression scheme that is lossy: it throws away some of the image's information in order to store it in less space. It is best to choose a format with lossless compression, or a format that does not use compression at all, to store archival images, so that the original scans can be restored byte for byte.
Scanning Photographs
The process of scanning is unbearably boring, but fortunately, many scanners now come with bulk loading attachments for slides, and can be left to run unattended, once you set the parameters for the scanning. Flatbed scanners, which are like photocopiers, can handle reflective artwork, like photographs on paper, or drawings. Slide scanners usually can handle all images on a transparent base, both negatives and positives. Some flatbed scanners include attachments for scanning transparencies or slides, but generally these seem to produce worse results than dedicated slide scanners. All scanners come with software, frequently used as a plug-in to Photoshop, which allows you to set up the brightness, contrast, gamma, and scaling of your source images, as well as the final dimensions and resolution of your digital images. These settings, as mentioned above, will be determined by factors such as how much storage space you have for the archive, and what the source images look like.
The Perseus Project uses a Nikon CoolScan LS-1000 with a bulk loading attachment to scan slides. Now, we mostly scan in slides of sites, topography, and architecture, which we do not archive at as high a resolution as we archive art objects. In museums, we use a high-end digital camera and acquire 18 mb (3060x2036) files. In contrast, the current acquisition size for site images is c. 5 mb per image, or c. 1620x1080 pixels. It is important to have lots of local storage space on hand, in the form of a large hard drive, a SCSI drive, or a Jaz drive, for storing the images as they get scanned. If you use a bulk loading scanner, make sure you include time in your work flow to check the digital images that are being produced.
Naming
Each photograph needs to have a unique identifier, so it can be easily captioned and located. Perseus images are given a 3-part number in the form xxxx.xx.xxxx. The first four slots are for the year the image came to the Perseus archive; the second two are a unique code identifying a group of images, e.g. objects from the Museum of Fine Arts, Boston; the final 4 digits indicate a sequential number for the image. For example, 1999.03.0001 is the first picture numbered from the Roman art objects photographed at the Museum of Fine Arts, Boston, which we added to the archive in 1999.
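A numbering scheme like this is easy to script. The Python sketch below is only a hypothetical illustration of how identifiers in the xxxx.xx.xxxx form could be generated for a batch of scans; it is not Perseus software.

# Build a Perseus-style identifier: four-digit year, two-digit group code,
# four-digit sequence number.
def perseus_id(year, group_code, sequence):
    return f"{year:04d}.{group_code:02d}.{sequence:04d}"

print(perseus_id(1999, 3, 1))  # -> 1999.03.0001

# Number a batch of scans in order:
batch = [perseus_id(1999, 3, n) for n in range(1, 6)]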
Backup
It's a good idea to back up your digital images, twice, and the two sets of backups should be stored in different locations. Even if you have plenty of storage space on a server, or on hard drives, duplicate the archive and store redundant copies, in case of disaster. Perseus currently uses DAT tape backup and the Mac backup software Retrospect, by Dantz, for its image archive, but CD burners, Jaz drives, or other devices can also be used.
Images for Web Delivery
Converting an archive of digital images into versions for Web delivery is a straightforward operation. Scriptable software like Equilibrium's DeBabelizer or Adobe Photoshop can automate batch processing of images, handling alterations like scaling, cropping, renaming, and saving in a variety of formats. At present, Perseus archival images are scaled down to fit on a lowest common denominator 640 x 480 monitor; for vertical pictures, the maximum height is c. 400 pixels, and for horizontals, the maximum width is c. 600 pixels. The pictures are converted to JPEG format, using a high-quality compression option, so that users of the Web site will be able to see the pictures quickly. The converted pictures are saved with a .jpeg extension at the end of their names, then uploaded to the Perseus server. With compression, each one is under 100k. JPEG is the current format of choice for most internet-based photographs. The benefit to this system is that larger versions of the pictures can be generated easily in the future, as display and transmission technology improves. At this point, if you want to add a protective identifier to your images, you can use batch processing software to add digital watermarks, visible or invisible, to your images. One final point: the delivery images should be backed up, just as the archival versions are, in order to save time restoring your data in the event of a server disaster.
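For projects without DeBabelizer or Photoshop, the same batch step can be sketched in Python with the Pillow imaging library. This is only an illustrative sketch under assumptions not stated in the guide: the directory names are placeholders, and the quality value of 85 merely stands in for a "high-quality" JPEG setting. The 600 x 400 bounding box follows the delivery sizes described above.

# Batch-convert archival TIFF scans into small JPEGs for web delivery.
import os
from PIL import Image

ARCHIVE_DIR = "archive"    # placeholder: directory of archival TIFF scans
DELIVERY_DIR = "delivery"  # placeholder: directory for web-ready JPEGs
MAX_SIZE = (600, 400)      # maximum width and height in pixels

os.makedirs(DELIVERY_DIR, exist_ok=True)
for name in os.listdir(ARCHIVE_DIR):
    if not name.lower().endswith((".tif", ".tiff")):
        continue
    with Image.open(os.path.join(ARCHIVE_DIR, name)) as im:
        im = im.convert("RGB")   # JPEG stores 8-bit RGB without transparency
        im.thumbnail(MAX_SIZE)   # scale down in place, preserving aspect ratio
        out_name = os.path.splitext(name)[0] + ".jpeg"
        im.save(os.path.join(DELIVERY_DIR, out_name), "JPEG", quality=85)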
Other Resources
A useful overview of digital imaging is the Getty Information Institute's Introduction to Imaging. Though it was published in 1995 and is a little out of date, it contains good explanations and illustrations of key concepts in digital imaging. This document touches on some of the issues, and lists standards adopted by the Perseus Project, but for a more complete picture of digital imaging, please refer to the Getty site. Other useful on-line resources include the specifications published by AMICO, the Art Museum Image Consortium, and the Library of Congress site. Additional resources for technical recommendations, file formats and digital imaging projects in general can be found in the Graphics section of the Stoa Useful Links database.
Please direct all questions and comments regarding digitization Standards and Practices to:
Standards: Contributing Digital Images to the Perseus Digital Library
The Perseus Project is interested in receiving submissions of digital images, photographed and digitized according to the specifications outlined in this document, for inclusion in the Perseus Digital Library. Of particular interest are images of features and objects that are not already represented in the library. If you would like to contribute, we ask that you contact the Perseus Project. The remainder of this section outlines the type of documentation that must accompany all submissions of materials to the Perseus Project.
The Dublin Core Metadata Initiative provides an internationally recognized common core of semantics for resource description. For more information about the Dublin Core Initiative, please visit the Dublin Core Initiative web site.
The supplementary information falls into two categories: Project Level Information and Image Level Information.
Project Level Information
Please provide the following information about the images you are submitting. This data should be submitted as a separate ASCII file.
Record Level Information
Please provide the following information for each image, preferably in the form of a spreadsheet or a tab delimited file.
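Since the required fields are listed separately, the column names in the short Python sketch below are placeholders only; it simply illustrates one way to produce a tab-delimited metadata file with the standard csv module, using the caption style suggested earlier in the guide.

# Write record-level image metadata as a tab-delimited file.
import csv

records = [
    {"identifier": "1999.03.0001",
     "caption": "White House, detail of columns at N entry, looking N",
     "photographer": "placeholder name",
     "date": "1999-06-12"},
]

with open("image_metadata.txt", "w", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["identifier", "caption", "photographer", "date"],
        delimiter="\t",
    )
    writer.writeheader()
    writer.writerows(records)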
The Irish general election of 1918 was that part of the 1918 United Kingdom general election that took place in Ireland. It is seen as a key defining moment in modern Irish history. This is because it saw the overwhelming defeat of the moderate nationalist Irish Parliamentary Party (IPP), which had dominated the Irish political landscape since the 1880s, and a landslide victory for the radical Sinn Féin party, which had never previously enjoyed such significant electoral success.
The aftermath of the elections saw the convention of an extra-legal parliament, now known as the First Dáil, by the elected Sinn Féin candidates, and the outbreak of the Irish War of Independence.
In 1918 the whole of Ireland was a part of the United Kingdom of Great Britain and Ireland, and was represented in the British Parliament by one hundred and three MPs. Whereas in Great Britain most elected politicians were members of either the Liberal Party or the Conservative Party, from the early 1880s most Irish MPs were nationalists, who sat together in the British House of Commons as the Irish Parliamentary Party. The IPP strove for Home Rule, that is self-government for Ireland within the United Kingdom, and were supported by most Catholics in Ireland. Home Rule was opposed by most Protestants in Ireland, who formed a majority of the population in the northern province of Ulster and favoured maintenance of the Union with Great Britain (and were therefore called Unionists). The Unionists were supported by the Conservative Party, whereas from 1885 the Liberal Party was committed to enacting some form of Home Rule. Unionists eventually formed their own representation, first the Irish Unionist Party and then the Ulster Unionist Party. Home Rule was finally achieved with the passing of the Home Rule Act 1914. The implementation of the Act was, however, temporarily postponed with the outbreak of World War I, expected to be over in a year, but largely due to Ulster Unionists' resistance to the Act. As the war prolonged, the more radical Sinn Féin began to grow in strength.
Sinn Féin was founded by Arthur Griffith in 1905. He believed that Irish nationalists should emulate the Ausgleich of Hungarian nationalists who, in the 19th century under Ferenc Deák, had chosen to boycott the imperial parliament in Vienna and unilaterally established their own legislature in Budapest. Griffith had favoured a peaceful solution based on 'dual monarchy' with Britain, that is two separate states with a single head of state and a weak central government to control matters of common concern only. However by 1918, under its new leader Éamon de Valera, Sinn Féin had come to favour achieving separation from Britain by means of an armed uprising if necessary and the establishment of an independent republic. In the aftermath of the 1916 Easter Rising the party's ranks were swelled by participants and supporters of the rebellion as they were freed from British gaols and internment camps, and at its 1917 Ard Fheis (annual conference) de Valera was elected leader and the new, more radical policy adopted.
Prior to 1916, Sinn Féin had been a fringe movement with a limited cooperative alliance with William O'Brien's All-for-Ireland League, and it enjoyed little electoral success. However, between the Easter Rising of that year and the 1918 general election the party's popularity increased dramatically. This was due to the perceived failure to have Home Rule implemented when the IPP resisted the partition of Ireland demanded by Ulster Unionists in 1914, 1916 and 1917, but also to popular antagonism towards the British authorities created by the execution of most of the leaders of the 1916 rebels and by their botched attempt to introduce Home Rule linked with military conscription in Ireland (see Conscription Crisis of 1918).
Sinn Féin demonstrated its new electoral capability in three by-election successes in 1917 in which Count Plunkett, W. T. Cosgrave and De Valera were each elected, although it did not win all by-elections in that year and in at least one case there were allegations of electoral fraud.
The Irish electorate in 1918, as with the entire electorate throughout the United Kingdom, had changed in two major ways since the preceding general election. Firstly, because of the intervening Great War, which had been fought from 1914 to 1918, the British general election due in 1915 had not taken place. As a result, no election took place between 1910 and 1918, the longest such spell in modern British and Irish constitutional history. Thus the 1918 elections saw dramatic generational change.
Secondly, the franchise had been greatly extended by the Representation of the People Act 1918. This increased the Irish electorate from around 700,000 to about two million. All men over 21 and military servicemen over 19 gained a vote in parliamentary elections without property qualifications. It also granted voting rights to women (albeit only those over 30) for the first time.
Overall, a new generation of young voters, the disappearance of much of the oldest generation of voters, and the sudden influx of women over thirty, meant that vast numbers of new voters of unknown voter affiliation existed, changing dramatically the make-up of the Irish electorate.
Voting in most Irish constituencies occurred on 14 December 1918. While the rest of the United Kingdom fought the 'Khaki election' on other issues involving the British parties, in Ireland four major political parties had national appeal. These were the IPP, Sinn Féin, the Irish Unionist Party and the Irish Labour Party. The Labour Party, however, decided not to participate in the election, fearing that it would be caught in the political crossfire between the IPP and Sinn Féin; it thought it better to let the people make up their minds on the issue of Home Rule versus a Republic by having a clear two-way choice between the two nationalist parties. The Unionist Party favoured continuance of the union with Britain (along with its subordinate, the Ulster Unionist Labour Association, who fought as 'Labour Unionists'). A number of other small nationalist parties also took part.
In Ireland 105 MPs were elected from 103 constituencies. Ninety-nine seats were elected from single-seat geographical constituencies under the Single Member Plurality or 'first past the post' system. However, there were also two two-seat constituencies: the University of Dublin (Trinity College) elected two MPs under the Single Transferable Vote, and Cork City elected two MPs under the Bloc voting system.
In addition to ordinary geographical constituencies there were three university constituencies: the Queen's University of Belfast and the University of Dublin (which would return two Unionist MPs), and the National University of Ireland (which would elect a member of Sinn Féin).
Of the 105 seats in Ireland many were uncontested. In some cases this was clearly because there was a certain winner, and the rival parties decided against devoting their money and effort to unwinnable seats. British government propaganda formulated in Dublin Castle and circulated through a censored press alleged that republican militants had threatened potential candidates to discourage non-Sinn Féin candidates from running. For whatever reason, of the 73 constituencies in which Sinn Féin candidates were elected, 25 were returned unopposed, although the constituencies in which Sinn Féin won uncontested seats were those which subsequently showed high levels of support for republican candidates.
Sinn Féin candidates were elected in 73 constituencies, but four party candidates (Arthur Griffith, Éamon de Valera, Eoin MacNeill and Liam Mellows) were elected for two constituencies each, so the total number of individual Sinn Féin MPs elected was 69. Despite isolated allegations of intimidation and electoral fraud on the part of both Sinn Féin supporters and its Unionist opponents, the election was seen as a landslide victory for Sinn Féin.
The proportion of votes cast for Sinn Féin, namely 46.9% of votes for the 48 'first past the post' seats won in the 80 constituencies it contested, understates its support, because 25 of its candidates in some of its strongest support bases were returned unopposed; had those constituencies been polled, its share there might have approached 80%. This is close to the total level of support enjoyed by Sinn Féin's three major breakaway parties after partition.
The party returned with the second-largest number of seats was the Irish Unionist Party with 22 seats. The success of the Unionists was largely limited to Ulster, however, and in the rest of Ireland Southern Unionists were elected only in the constituencies of the University of Dublin and Rathmines.
The IPP suffered a catastrophic defeat and even its leader, John Dillon, failed to be re-elected. The IPP won six seats in Ireland, all but one of which were won in Ulster. The sole exception was Waterford City, the seat previously held by John Redmond, who had died earlier in the year, and retained by his son Captain William Archer Redmond. Four of their Irish seats were a part of the arrangement brokered by Cardinal Logue between Sinn Féin and the IPP to avoid unionist victories in Ulster, a deal which saved some seats for the party but may have cost it the support of Protestant voters elsewhere. The IPP came close to winning other seats in Louth and Wexford South, and in general their support held up better in the north and east of the country. The party was represented in Westminster by seven MPs because T. P. O'Connor won an election from emigrant votes in the English city of Liverpool. The IPP's losses were exaggerated by the "first-past-the-post" system, which gave it a share of seats far short of its rather larger share of the vote (21.7%) and the number of seats it would have won under a proportional representation ballot system. The remnants of the IPP then became the Nationalist Party (Northern Ireland) under the leadership of Joseph Devlin.
Irish (UK) General Election 1918

| Party | Leader | No. of Seats | % of Seats | No. of Votes | % of Votes |
|---|---|---|---|---|---|
| Sinn Féin | Éamon de Valera | 73 | 69.5 | 476,087 | 46.9 |
| Irish Unionist | Edward Carson | 22 | 20.9 | 257,314 | 25.3 |
| Irish Parliamentary | John Dillon | 6 | 5.7 | 220,837 | 21.7 |
| Labour Unionist | | 3 | 2.8 | 30,304 | 3.0 |
| Belfast Labour Party | | — | — | 12,164 | 1.2 |
| Independent Unionist | | 1 | 0.95 | 9,531 | 0.9 |
| Independent Nationalist | | — | — | 8,183 | 0.8 |
| Independent Labour | | — | — | 659 | 0.1 |
| Totals | | 105 | 100.0 | 1,015,079 | 100.0 |
After the election the elected Sinn Féin candidates, although entitled to sit as MPs in the British parliament, chose to boycott the Westminster body and instead assembled as a revolutionary parliament they called Dáil Éireann, the Irish for "Assembly of Ireland". However, Unionists and members of the IPP refused to recognise the Dáil. At its first meeting on 21 January 1919, attended by 27 deputies (others were still imprisoned or otherwise unable to attend), the Dáil issued a Declaration of Independence and proclaimed itself the parliament of a new state called the "Irish Republic".
On the same day, in unconnected circumstances, two local Irish members of the Royal Irish Constabulary guarding gelignite were ambushed and killed at Soloheadbeg, in Tipperary, by members of the Irish Volunteers. Although it had not ordered this incident, the course of events soon drove the Dáil to recognise the Volunteers as the army of the Irish Republic and the ambush as an act of war against Great Britain. The Volunteers therefore changed their name, in August, to the Irish Republican Army. In this way the 1918 elections led to the outbreak of the Anglo-Irish War.
The train of events set in motion by the elections would eventually bring about the first internationally recognised independent Irish state, the Irish Free State, established in 1922. Furthermore, the leading Sinn Féin candidates elected in 1918, such as de Valera, Michael Collins and W. T. Cosgrave, came to dominate Irish politics. De Valera, for example, held at least some form of elected office from his first election as an MP in a by-election in 1917 until 1973. The two major parties in the Republic of Ireland today, Fianna Fáil and Fine Gael, are both descendants of Sinn Féin, a party that first enjoyed substantial electoral success in 1918.
The correct interpretation of the results of the 1918 general election has been the subject of some controversy. This is because Sinn Féin treated the result as a unilateral mandate from the Irish people to set about immediately establishing an independent, all-Ireland state, and to initiate an undeclared war of separation from Great Britain, while totally ignoring the unresolved Ulster and Unionist situation. However, the party's Democratic Programme did not promise the electorate a war, just a 32-county Irish Republic. Further, its election manifesto sought a place for Ireland at the peace conference, which could hardly have been expected had it launched a new war.
In 1921, under the Government of Ireland Act 1920, or Fourth Home Rule Act, Ireland was divided into two separate jurisdictions: the six counties in the northeast became home-ruled Northern Ireland, while the rest of the country would eventually become the modern Republic of Ireland. 1918 was therefore the last occasion on which a general election occurred across the whole of Ireland, north and south, on the same day. For this reason many republicans have regarded the election as conferring a mandate for a united Ireland that remained unchanged over eighty years later. Indeed, the 1918 general election has become a potent symbol for militant republicans, who have argued that the elections conferred legitimacy both on the anti-Treaty faction in the Irish Civil War of 1922–1923 and on the violent campaigns of later groups, such as the Provisional IRA, that erupted many decades later. However, subsequent republican legitimatism is based on the members of the Second Dáil elected in 1921.
Critics of these interpretations make a number of arguments. Some question the legitimacy of the original mandate won by Sinn Féin. It is argued that Sinn Féin practiced widespread intimidation and electoral fraud and that this called the result into question. Some also argue that the use of the first-past-the-post electoral system and/or the large number of uncontested constituencies exaggerated the effect of the pro-Sinn Féin vote, so that, while the party won around 70% of the total number of Irish seats, its share of the vote may have been less than 50% and so may not have amounted to a majority. Turnout in contested seats was 68%, appreciable by any standard given that many were first-time voters and others were possibly unaware of their voting rights, even for such a crucial election in which virtually all Sinn Féin supporters would have voted.
Because of the large number of uncontested constituencies, it is impossible to know with certainty what share of the vote Sinn Féin would have won had all seats been contested, except that it would have increased. However, this has not stopped some historians attempting to speculate, for example by extrapolating from the vote counts in constituencies neighbouring those that were uncontested.
Unionists argue from a different perspective. They insist that, regardless of the result, no election result considered on an all-Ireland basis could justify the imposition of a united Ireland on the Unionist minority in the north-east. Some still point to the fact that Unionists won a majority share of the vote, in both the historical northern province of Ulster and in the six counties that would later become Northern Ireland, to argue that the 1918 election in fact established a mandate for the north-east, at least, to remain within the United Kingdom.
Other arguments, leaving aside the immediate politics of 1918, dispute the capacity of any 1918 mandate for a united Ireland to legitimise acts of violence committed then or later. Although the 1918 general election was the last held throughout the whole of Ireland on a single day, in every election held since 1921 candidates advocating violent resistance to the partition of Ireland have fallen far short of winning a majority in either part of Ireland.
In 1998 both Northern Ireland and the Republic of Ireland voted on the same day in referendums on the Belfast Agreement. Voters in both jurisdictions endorsed the agreement which, among many other provisions, enshrined the principle that a united Ireland should be brought about only by peaceful, constitutional means. Whether or not it should be prevented only by peaceful, constitutional means is still hotly debated in some circles.
Satellite based GPS
This is the free, radio-signal-based option. Satellites above our heads broadcast signals at a constant frequency that can be picked up by GPS (radio) receivers.
You need a device that actually "listens" to the GPS radio signals. Such devices carry chipsets combining RF components and software to correlate and process the signals into latitude, longitude and altitude plus time values.
Phone Carrier based Assistance to GPS signaling
With E911 requirements, carriers were obligated to provide progressively more accurate location information in emergencies to cell phone users.
There are basically two main radio technologies used to carry digital signals to cell phone receivers: GSM and CDMA.
CDMA radio signals, like those broadcast by the GPS satellites, carry a time-stamp signature. This is the key data for trilateration (often loosely called triangulation) from three or more known points to determine the location of a given receiver.
Assisted GPS combines the triangulation results a cell phone obtains from the time a signal takes to reach it from the cell towers with GPS data for known locations.
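To make the geometry concrete, here is a minimal 2-D trilateration sketch (not any carrier's actual PDE implementation): distances to three towers at known positions are estimated from signal travel times, and the resulting circle equations are reduced to a small linear system. The tower coordinates and timings are invented for illustration.

```python
# Minimal 2-D trilateration sketch (illustrative values only).
C = 299_792_458.0  # speed of light, m/s

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three anchors and the distance to each."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise gives two linear equations in x, y.
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x1), 2 * (y3 - y1)
    f = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a * e - b * d
    return (c * e - b * f) / det, (a * f - c * d) / det

towers = [(0.0, 0.0), (4000.0, 0.0), (0.0, 4000.0)]   # metres, local grid (made up)
travel_times = [4.717e-6, 1.055e-5, 1.055e-5]          # seconds (made up)
distances = [C * t for t in travel_times]
print(trilaterate_2d(towers[0], distances[0],
                     towers[1], distances[1],
                     towers[2], distances[2]))          # roughly (1000, 1000)
```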
Because CDMA is a time-based network, it allows more precise determination of a cell phone's position, and thus its user's. GSM provides much lower precision, and there are attempts to improve its resolution through methods such as Enhanced GPS.
Cell Tower ID databases
Another approach is to build and query a database of cell tower IDs mapped to their corresponding locations (latitude/longitude). Customers knowingly (or unknowingly) provide the data that seeds such a database.
Google Maps for Mobile uses this approach with its My Location feature.
Wi-Fi MAC address databases
In heavily populated areas, using data from wireless access points associated with their corresponding locations extends the cell tower database approach.
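The core of both the cell-tower and Wi-Fi approaches is a lookup keyed by radio identifiers. Here is a toy sketch of that idea; every identifier and coordinate below is invented, and real services hold millions of crowd-sourced entries and typically weight them by signal strength rather than taking a plain average.

```python
# Toy lookup table: radio identifier (cell tower ID or Wi-Fi BSSID) -> (lat, lon).
KNOWN_LOCATIONS = {
    "cell:310-410-12345-6789": (40.7128, -74.0060),
    "wifi:00:1a:2b:3c:4d:5e":  (40.7130, -74.0055),
    "wifi:66:77:88:99:aa:bb":  (40.7125, -74.0062),
}

def estimate_position(observed_ids):
    """Average the known coordinates of every identifier the device can hear."""
    hits = [KNOWN_LOCATIONS[i] for i in observed_ids if i in KNOWN_LOCATIONS]
    if not hits:
        return None  # nothing recognized -> fall back to GPS, IP address, etc.
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return lat, lon

print(estimate_position(["wifi:00:1a:2b:3c:4d:5e", "cell:310-410-12345-6789"]))
```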
A (static) IP address can also be used for coarse location.
There are also proposed standards and implementations of GPS data transport protocols, which open up many possibilities.
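As one concrete example of how position data is handed from a receiver to software, here is a minimal sketch that decodes the latitude/longitude fields of an NMEA 0183 GGA sentence, the plain-text format most GPS receivers emit over serial or Bluetooth. The sample sentence is a standard textbook example, and the sketch skips checksum validation for brevity.

```python
def dm_to_decimal(field: str, hemisphere: str) -> float:
    """Convert an NMEA (d)ddmm.mmmm field to signed decimal degrees."""
    raw = float(field)
    degrees = int(raw // 100)
    minutes = raw - degrees * 100
    value = degrees + minutes / 60.0
    return -value if hemisphere in ("S", "W") else value

def parse_gga(sentence: str):
    """Return (lat, lon, fix_quality, satellites) from a GGA sentence."""
    fields = sentence.strip().split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    lat = dm_to_decimal(fields[2], fields[3])
    lon = dm_to_decimal(fields[4], fields[5])
    return lat, lon, int(fields[6]), int(fields[7])

sample = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(sample))   # -> (48.1173, 11.5166..., 1, 8)
```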
What do you got?
So, based on this, you have GPS, or more precisely location data, depending on how manufacturers, developers and phone companies decide what is available to you as a customer.
In cell phones, some form of E911 support will be available. This can be implemented in several ways. For example, CDMA carriers use a Position Determination Entity, or PDE server, that keeps track of device locations.
Privacy concerns should abound here and in any other case where private data is kept. | 1 | 2 |
The paleolithic diet (abbreviated paleo diet or paleodiet), also popularly referred to as the caveman diet, Stone Age diet and hunter-gatherer diet, is a modern nutritional plan based on the presumed ancient diet of wild plants and animals that various hominid species habitually consumed during the Paleolithic era—a period of about 2.5 million years which ended around 10,000 years ago with the development of agriculture and grain-based diets. In common usage, such terms as "paleolithic diet" also refer to the actual ancestral human diet.
Centered on commonly available modern foods, the "contemporary" Paleolithic diet consists mainly of fish, grass-fed pasture raised meats, eggs, vegetables, fruit, fungi, roots, and nuts, and excludes grains, legumes, dairy products, potatoes, refined salt, refined sugar, and processed oils.
First popularized in the mid-1970s by gastroenterologist Walter L. Voegtlin, this nutritional concept has been promoted and adapted by a number of authors and researchers in several books and academic journals. A common theme in evolutionary medicine, Paleolithic nutrition is based on the premise that modern humans are genetically adapted to the diet of their Paleolithic ancestors and that human genetics have scarcely changed since the dawn of agriculture, and therefore that an ideal diet for human health and well-being is one that resembles this ancestral diet.
Proponents of this diet argue that modern human populations subsisting on traditional diets allegedly similar to those of Paleolithic hunter-gatherers are largely free of diseases of affluence, and that multiple studies of the Paleolithic diet in humans have shown improved health outcomes relative to other widely recommended diets. Supporters also point to several potentially therapeutic nutritional characteristics of preagricultural diets.
The paleolithic diet is a controversial topic amongst some dietitians and anthropologists, and an article on the United Kingdom National Health Service's Choices website refers to it as a fad diet. Critics have argued that to the extent that hunter-gatherer societies fail to suffer from "diseases of civilization", this may be due to reduced calories in their diet, shorter average lifespans, or a variety of other factors, rather than dietary composition. Some researchers have also taken issue with the accuracy of the diet's underlying evolutionary logic or suggested that the diet could potentially pose health risks.
CT scans of mummies dating back up to 5,000 years and spanning four populations (ancient Egyptian, ancient Peruvian, Ancestral Puebloan and Unangan), encompassing agrarian, forager-farmer and hunter-gatherer lifestyles, show clear and similar indications of atherosclerosis across all three lifestyle types and all four populations, rising in each case with age, suggesting that atherosclerosis is likely an inherent disorder of human aging. There is also evidence of cancer in these and other ancient populations. While the Egyptians, Peruvians and Puebloans may all have had an agricultural lifestyle, the Unangan of Alaska were hunter-gatherers. Given that mummies from all four of these cultures displayed the same level of atherosclerosis, a paleo diet would not seem to be protective against atherosclerosis and heart disease.
A 2011 ranking by U.S. News & World Report, involving a panel of 22 experts, ranked the Paleo diet lowest of the 20 diets evaluated based on factors including health, weight-loss and ease of following. These results were repeated in the 2012 survey, in which the diet tied with the Dukan diet for the lowest ranking out of 25 diets; U.S. News & World Report stated that their experts "took issue with the diet on every measure". However, one expert involved in the ranking stated that a "true Paleo diet might be a great option: very lean, pure meats, lots of wild plants. The modern approximations… are far from it." He added that "duplicating such a regimen in modern times would be difficult."
The U.S. News ranking assumed a low-carb version of the paleo diet, specifically containing only 23% carbohydrates. Higher carbohydrate versions of the paleo diet, which allow for significant consumption of root vegetables, were not a part of this ranking. Dr. Loren Cordain, a proponent of a low-carbohydrate Paleolithic diet, responded to the U.S. News ranking, stating that their "conclusions are erroneous and misleading" and pointing out that "five studies, four since 2007, have experimentally tested contemporary versions of ancestral human diets and have found them to be superior to Mediterranean diets, diabetic diets and typical western diets in regards to weight loss, cardiovascular disease risk factors and risk factors for type 2 diabetes." The editors of the U.S. News ranking replied that they had reviewed the five studies and found them to be "small and short, making strong conclusions difficult".
Gastroenterologist Walter L. Voegtlin was one of the first to suggest that following a diet similar to that of the Paleolithic era would improve a person's health. In 1975, he self-published The Stone Age Diet: Based on in-depth Studies of Human Ecology and the Diet of Man, in which he argued that humans are carnivorous animals and that the ancestral Paleolithic diet was that of a carnivore — chiefly fats and protein, with only small amounts of carbohydrates. His dietary prescriptions were based on his own medical treatments of various digestive problems, namely colitis, Crohn's disease, irritable bowel syndrome and indigestion.
In 1985, S. Boyd Eaton and Melvin Konner, both of Emory University, published a paper on Paleolithic nutrition in the New England Journal of Medicine which increased mainstream medical attention to the concept. Three years later, S. Boyd Eaton, Marjorie Shostak and Melvin Konner published a book about this nutritional approach, which was based on achieving the same proportions of nutrients (fat, protein, and carbohydrates, as well as vitamins and minerals) as were present in the diets of late Paleolithic people, not on excluding foods that were not available before the development of agriculture. As such, this nutritional approach included skimmed milk, whole-grain bread, brown rice, and potatoes prepared without fat, on the premise that such foods supported a diet with the same macronutrient composition as the Paleolithic diet. In 1989, these authors published a second book on Paleolithic nutrition.
Starting in 1989, Swedish medical doctor and scientist Staffan Lindeberg, now associate professor at Lund University, Sweden, led scientific surveys of the non-westernized population on Kitava, one of the Trobriand Islands of Papua New Guinea. These surveys, collectively referred to as the Kitava Study, found that this population apparently did not suffer from stroke, ischemic heart disease, diabetes, obesity or hypertension. Starting with the first publication in 1993, the Kitava Study has subsequently generated a number of scientific publications on the relationship between diet and western disease. In 2003, Lindeberg published a Swedish language medical textbook on the subject. In 2010, this book was wholly revised, updated and published for the first time in English.
The concepts framing the paleodiet developed significantly online from 1997, with the establishment of the Paleodiet listserv - a major cross-disciplinary enterprise. Staffan Lindeberg, Loren Cordain, Art de Vany, and others now seen as founders of the paleo diet and exercise schools, were regular contributors.
Since the end of the 1990s, a number of medical doctors and nutritionists have advocated a return to a so-called Paleolithic (preagricultural) diet. Proponents of this nutritional approach have published books and created websites to promote their dietary prescriptions. They have synthesized diets from modern foods that emulate nutritional characteristics of the ancient Paleolithic diet, some of which allow specific foods that would have been unavailable to pre-agricultural peoples, such as some animal products (i.e. dairy), processed oils, and beverages.
The paleolithic diet is a modern dietary regimen that seeks to mimic the diet of preagricultural hunter-gatherers, one that corresponds to what was available in any of the ecological niches of Paleolithic humans. Based upon commonly available modern foods, it includes cultivated plants and domesticated animal meat as an alternative to the wild sources of the original preagricultural diet. The ancestral human diet is inferred from historical and ethnographic studies of modern-day hunter-gatherers as well as archaeological finds, anthropological evidence and application of optimal foraging theory.
The Paleolithic diet consists of foods that can be hunted and fished, such as meat, offal and seafood, and can be gathered, such as eggs, insects, fruit, nuts, seeds, vegetables, mushrooms, herbs and spices. Some sources advise eating only lean cuts of meat, free of food additives, preferably wild game meats and grass-fed beef since they contain higher levels of omega-3 fats compared with grain-produced domestic meats. Food groups that advocates claim were rarely or never consumed by humans before the Neolithic (agricultural) revolution are excluded from the diet, mainly grains, legumes (e.g. beans and peanuts), dairy products, salt, refined sugar and processed oils, although some advocates consider the use of oils with low omega-6/omega-3 ratios, such as olive oil and canola oil, to be healthy and advisable.
On the Paleolithic diet, practitioners are permitted to drink mainly water, and some advocates recommend tea as a healthy drink, but alcoholic and fermented beverages are restricted from the diet. Furthermore, eating a wide variety of plant foods is recommended to avoid high intakes of potentially harmful bioactive substances, such as goitrogens, which are present in some roots, vegetables and seeds. Unlike raw food diets, all foods may be cooked, without restrictions. However, raw Paleolithic dieters exist who believe that humans have not adapted to cooked foods, and so they eat only foods which are both raw and Paleolithic.
Proponents recommend a diet high in protein (19–35% energy) and relatively low in carbohydrates (22–40% energy), with a fat intake (28–58% energy) similar to or higher than that found in Western diets. Furthermore, some proponents exclude from the diet foods which exhibit high glycemic indices, such as potatoes. Staffan Lindeberg advocates a Paleolithic diet, but does not recommend any particular proportions of plants versus meat or macronutrient ratios.
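The "% of energy" figures quoted above follow from simple arithmetic using the standard Atwater factors (roughly 4 kcal per gram of protein or carbohydrate and 9 kcal per gram of fat). A minimal sketch, with an invented day of intake chosen to fall inside the quoted ranges:

```python
# Atwater factors: approximate kcal per gram of each macronutrient.
KCAL_PER_G = {"protein": 4, "carbohydrate": 4, "fat": 9}

def energy_shares(grams):
    """Percent of total energy contributed by each macronutrient."""
    kcal = {m: g * KCAL_PER_G[m] for m, g in grams.items()}
    total = sum(kcal.values())
    return {m: round(100 * k / total, 1) for m, k in kcal.items()}

# Hypothetical intake (grams per day), not a recommendation from the sources above.
print(energy_shares({"protein": 150, "carbohydrate": 170, "fat": 100}))
# -> {'protein': 27.5, 'carbohydrate': 31.2, 'fat': 41.3}
```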
Rationale and evolutionary assumptions
According to S. Boyd Eaton, "we are the heirs of inherited characteristics accrued over millions of years; the vast majority of our biochemistry and physiology are tuned to life conditions that existed before the advent of agriculture some 10,000 years ago. Genetically our bodies are virtually the same as they were at the end of the Paleolithic era some 20,000 years ago."
Paleolithic nutrition has its roots in evolutionary biology and is a common theme in evolutionary medicine. The reasoning underlying this nutritional approach is that natural selection had sufficient time to genetically adapt the metabolism and physiology of Paleolithic humans to the varying dietary conditions of that era. But in the 10,000 years since the invention of agriculture and its consequent major change in the human diet, natural selection has had too little time to make the optimal genetic adaptations to the new diet. Physiological and metabolic maladaptations result from the suboptimal genetic adaptations to the contemporary human diet, which in turn contribute to many of the so-called diseases of civilization.
More than 70% of the total daily energy consumed by all people in the United States comes from foods such as dairy products, cereals, refined sugars, refined vegetable oils and alcohol, that advocates of the Paleolithic diet assert contributed little or none of the energy in the typical preagricultural hominin diet. Proponents of this diet argue that excessive consumption of these novel Neolithic and Industrial era foods is responsible for the current epidemic levels of obesity, cardiovascular disease, high blood pressure, type 2 diabetes, osteoporosis and cancer in the US and other contemporary Western populations.
Physical activity
The evolutionary rationale has also been applied by researchers into the paleolithic lifestyle to argue for high levels of physical activity in addition to dietary practices. It has been proposed that human genes "evolved with the expectation of requiring a certain threshold of physical activity" and that sedentary lifestyle results in abnormal gene expression. Compared to ancestral humans, modern humans often have increased body fat and substantially less lean muscle, which is a risk factor for insulin resistance. Human metabolic processes were evolved in the presence of physical activity-rest cycles, which regularly depleted skeletal muscles of their glycogen stores. To date it is unclear whether these activity cycles universally included prolonged endurance activity (e.g. persistence hunting) and/or shorter, higher intensity activity. S. Boyd Eaton estimated that ancestral humans spent one-third of their caloric intake on physical activity (1000 kcal/day out of the total caloric intake of 3000 kcal/day), and that the paleolithic lifestyle was well approximated by the WHO recommendation of the physical activity level of 1.75, or 60 minutes/day of moderate-intensity exercise. L. Cordain estimated that the optimal level of physical activity is on the order of 90 kcal/kg/week (900 kcal/day for a 70 kg human.)
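As a quick arithmetic check of the activity figures cited in this paragraph (the numbers below are exactly those quoted above; nothing new is assumed):

```python
# Eaton's estimate: ~1000 of ~3000 kcal/day spent on physical activity.
print(1000 / 3000)     # ~0.33, i.e. one-third of caloric intake

# Cordain's estimate: 90 kcal per kg per week, for a 70 kg person.
print(90 * 70 / 7)     # 900.0 kcal/day
```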
Opposing views
There have been criticisms of the accuracy of the science on which the diet is based. Dr. John McDougall, MD, has disputed the science used to reconstruct the paleolithic diet, proposing instead that the human diet of that era was based primarily on starches. The evolutionary assumptions underlying the Paleolithic diet have been disputed. According to Alexander Ströhle, Maike Wolters and Andreas Hahn, with the Department of Food Science at the University of Hanover, the statement that the human genome evolved during the Pleistocene (a period from 1,808,000 to 11,550 years ago) rests on the gene-centered view of evolution, which they believe to be controversial. They rely on Gray (2001) to argue that evolution of organisms cannot be reduced to the genetic level with reference to mutation and that there is no one-to-one relationship between genotype and phenotype.
They further question the notion that 10,000 years is an insufficient period of time to ensure an adequate adaptation to agrarian diets. For example, alleles conferring lactose tolerance increased to high frequencies in Europe just a few thousand years after animal husbandry was invented, and recent increases in the number of copies of the gene for salivary amylase, which digests starch, appear to be related to agriculture. Referring to Wilson (1994), Ströhle et al. argue that "the number of generations that a species existed in the old environment was irrelevant, and that the response to the change of the environment of a species would depend on the heritability of the traits, the intensity of selection and the number of generations that selection acts." They state that if the diet of Neolithic agriculturalists had been in discordance with their physiology, then this would have created a selection pressure for evolutionary change and modern humans, such as Europeans, whose ancestors have subsisted on agrarian diets for 400–500 generations should be somehow adequately adapted to it. In response to this argument, Wolfgang Kopp states that "we have to take into account that death from atherosclerosis and cardiovascular disease (CVD) occurs later during life, as a rule after the reproduction phase. Even a high mortality from CVD after the reproduction phase will create little selection pressure. Thus, it seems that a diet can be functional (it keeps us going) and dysfunctional (it causes health problems) at the same time." Moreover, S. Boyd Eaton and colleagues have indicated that "comparative genetic data provide compelling evidence against the contention that long exposure to agricultural and industrial circumstances has distanced us, genetically, from our Stone Age ancestors"; however, they mention exceptions such as increased lactose and gluten tolerance, which improve ability to digest dairy and grains, while other studies indicate that human adaptive evolution has accelerated since the Paleolithic.
Referencing Mahner et al. (2001) and Ströhle et al. (2006), Ströhle et al. state that "whatever is the fact, to think that a dietary factor is valuable (functional) to the organism only when there was ‘genetical adaptation’ and hence a new dietary factor is dysfunctional per se because there was no evolutionary adaptation to it, such a panselectionist misreading of biological evolution seems to be inspired by a naive adaptationistic view of life."
Katharine Milton, a professor of physical anthropology at the University of California, Berkeley, has also disputed the evolutionary logic upon which the Paleolithic diet is based. She questions the premise that the metabolism of modern humans must be genetically adapted to the dietary conditions of the Paleolithic. Relying on several of her previous publications, Milton states that "there is little evidence to suggest that human nutritional requirements or human digestive physiology were significantly affected by such diets at any point in human evolution."
Plant-to-animal ratio
The specific plant to animal food ratio in the Paleolithic diet is also a matter of some dispute. The average diet among modern hunter-gatherer societies is estimated to consist of 64–68% of animal calories and 32–36% of plant calories, with animal calories further divided between fished and hunted animals in varying proportions (most typically, with hunted animal food comprising 26–35% of the overall diet). As part of the Man the Hunter paradigm, this ratio was used as the basis of the earliest forms of the Paleolithic diet by Voegtlin, Eaton and others. To this day, many advocates of the Paleolithic diet consider high percentage of animal flesh to be one of the key features of the diet.
However, great disparities do exist, even between different modern hunter-gatherer societies. The animal-derived calorie percentage ranges from 25% in the Gwi people of southern Africa to 99% in Alaskan Nunamiut. The animal-derived percentage value is skewed upwards by polar hunter-gatherer societies, who have no choice but to eat animal food because of the inaccessibility of plant foods. Since those environments were only populated relatively recently (for example, Paleo-Indian ancestors of Nunamiut are thought to have arrived in Alaska no earlier than 30,000 years ago), such diets represent recent adaptations rather than conditions that shaped human evolution during much of the Paleolithic. More generally, hunting and fishing tend to provide a higher percentage of energy in forager societies living at higher latitudes. Excluding cold-climate and equestrian foragers results in a diet structure of 52% plant calories, 26% hunting calories, and 22% fishing calories. Furthermore, those numbers may still not be representative of a typical Stone Age diet, since fishing did not become common in many parts of the world until the Upper Paleolithic period 35-40 thousand years ago, and early humans' hunting abilities were relatively limited compared to modern hunter-gatherers as well (the oldest incontrovertible evidence for the existence of bows only dates to about 8000 BCE, and nets and traps were invented 20,000 to 29,000 years ago).
Another view is that, up until the Upper Paleolithic, humans were frugivores (fruit eaters), who supplemented their meals with carrion, eggs, and small prey such as baby birds and mussels, and, only on rare occasions, managed to kill and consume big game such as antelopes. This view is supported by the studies of higher apes, particularly chimpanzees. Chimpanzees are closest to humans genetically, sharing more than 98% of their DNA code with humans, and their digestive tract is functionally very similar to that of humans. Chimpanzees are primarily frugivores, but they could and would consume and digest animal flesh, given the opportunity. However, their actual diet in the wild is about 95% plant-based, with the remaining 5% filled with insects, eggs, and baby animals. Some comparative studies of human and higher primate digestive tracts do suggest that humans have evolved to obtain greater amounts of calories from sources such as animal foods, allowing them to shrink the size of the gastrointestinal tract, relative to body mass, and to increase the brain mass instead.
A difficulty with this point of view is that humans are known to conditionally require certain long-chain polyunsaturated fatty acids (LC-PUFAs), such as AA and DHA, from the diet. Human LC-PUFA requirements are much greater than chimpanzees' because of humans' larger brain mass, and humans' ability to synthesize them from other nutrients is poor, suggesting reliance on readily available external sources. Pregnant and lactating females require 100 mg of DHA per day. But LC-PUFAs are almost nonexistent in plants and in most tissues of warm-climate animals.
The main sources of DHA in the modern human diet are fish and the fatty organs of animals, such as brains, eyes and viscera; microalgae is a plant-based source. Despite the general shortage of evidence for extensive fishing, thought to require relatively sophisticated tools which have become available only in the last 30–50 thousand years, it has been argued that exploitation of coastal fauna somehow provided hominids with abundant LC-PUFAs. Alternatively, it has been proposed that early hominids frequently scavenged predators' kills and consumed parts which were left untouched by predators, most commonly the brain, which is very high in AA and DHA. Just 100 g of scavenged African ruminant brain matter provide more DHA than is consumed by a typical modern U.S. adult in the course of a week. Other authors suggested that human ability to convert alpha-Linolenic acid into DHA, while poor, is, nevertheless, adequate to prevent DHA deficiency in a plant-based diet.
Nutritional factors and health effects
Since the end of the Paleolithic period, several foods that humans rarely or never consumed during previous stages of their evolution have been introduced as staples in their diet. With the advent of agriculture and the beginning of animal domestication roughly 10,000 years ago, during the Neolithic Revolution, humans started consuming large amounts of dairy products, beans, cereals, alcohol and salt. In the late 18th and early 19th centuries, the Industrial revolution led to the large scale development of mechanized food processing techniques and intensive livestock farming methods, that enabled the production of refined cereals, refined sugars and refined vegetable oils, as well as fattier domestic meats, which have become major components of Western diets.
Such food staples have fundamentally altered several key nutritional characteristics of the human diet since the Paleolithic era, including glycemic load, fatty acid composition, macronutrient composition, micronutrient density, acid-base balance, sodium-potassium ratio, and fiber content.
These dietary compositional changes have been theorized as risk factors in the pathogenesis of many of the so-called "diseases of civilization" and other chronic illnesses that are widely prevalent in Western societies, including obesity, cardiovascular disease, high blood pressure, type 2 diabetes, osteoporosis, autoimmune diseases, colorectal cancer, myopia, acne, depression, and diseases related to vitamin and mineral deficiencies.
Macronutrient composition
Protein and carbohydrates
"The increased contribution of carbohydrate from grains to the human diet following the agricultural revolution has effectively diluted the protein content of the human diet." In modern hunter-gatherer diets, dietary protein is characteristically elevated (19–35% of energy) at the expense of carbohydrate (22–40% of energy). High-protein diets may have a cardiovascular protective effect and may represent an effective weight loss strategy for the overweight or obese. Furthermore, carbohydrate restriction may help prevent obesity and type 2 diabetes, as well as atherosclerosis. Carbohydrate deprivation to the point of ketosis has been argued both to have negative and positive effects on health.
The notion that preagricultural hunter-gatherers would have typically consumed a diet relatively low in carbohydrate and high in protein has been questioned. Critics argue that there is insufficient data to identify the relative proportions of plant and animal foods consumed on average by Paleolithic humans in general, and they stress the rich variety of ancient and modern hunter-gatherer diets. Furthermore, preagricultural hunter-gatherers may have generally consumed large quantities of carbohydrates in the form of carbohydrate-rich tubers (plant underground storage organs). According to Staffan Lindeberg, an advocate of the Paleolithic diet, a plant-based diet rich in carbohydrates is consistent with the human evolutionary past.
It has also been argued that relative freedom from degenerative diseases was, and still is, characteristic of all hunter-gatherer societies irrespective of the macronutrient characteristics of their diets. Marion Nestle, a professor in the Department of Nutrition and Food Studies at New York University, judging from research relating nutritional factors to chronic disease risks and to observations of exceptionally low chronic disease rates among people eating vegetarian, Mediterranean and Asian diets, has suggested that plant-based diets may be most associated with health and longevity.
Fatty acids
Hunter-gatherer diets have been argued to maintain relatively high levels of monounsaturated and polyunsaturated fats, moderate levels of saturated fats (10–15% of total food energy) as well as a low omega-6:omega-3 fatty acid ratio. Cows fed a grass-based diet provide significantly more omega-3 fatty acids than grain-fed animals, with less trans fat and saturated fat. The diet does include a significant amount of cholesterol due to the inclusion of lean meat. These nutritional factors may serve to inhibit the development of cardiovascular disease. This high ratio of polyunsaturated to saturated fats has been challenged: while a low saturated fat intake has been argued for, it has also been argued that hunter-gatherers would selectively hunt fatter animals and utilise the fattiest parts of the animals (such as bone marrow).
Energy density
The Paleolithic diet has lower energy density than the typical diet consumed by modern humans. This is especially true in primarily plant-based/vegetarian versions of the diet, but it still holds if substantial amounts of lean meat are included in calculations. For example, most fruits and berries contain 0.4 to 0.8 calories per gram, and vegetables can be even lower than that (cucumbers contain only 0.16 calories per gram). Lean game meat, such as cooked wild rabbit, is more energy-dense (up to 1.7 calories per gram), but it does not constitute the bulk of the diet by mass/volume at the recommended plant/animal ratios, and it does not reach the densities of many processed foods commonly consumed by modern humans: most McDonald's sandwiches such as the Big Mac average 2.4 to 2.8 calories/gram, and sweets such as cookies and chocolate bars commonly exceed 4 calories/gram.
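To show how the per-gram figures above translate into the energy density of a whole meal, here is a small sketch; the food values are the approximate ones quoted in this paragraph, and the plate composition is invented.

```python
# Approximate energy densities (kcal per gram) quoted above.
ENERGY_DENSITY = {
    "berries": 0.6,        # fruits and berries: ~0.4-0.8 kcal/g
    "cucumber": 0.16,
    "cooked_rabbit": 1.7,  # lean game meat, upper estimate
    "big_mac": 2.6,        # processed-food comparison: ~2.4-2.8 kcal/g
}

def meal_energy_density(portions_g):
    """Overall kcal/g of a meal, given portion sizes in grams."""
    total_kcal = sum(ENERGY_DENSITY[food] * g for food, g in portions_g.items())
    return total_kcal / sum(portions_g.values())

# An invented forager-style plate: mostly plants plus some lean game meat.
plate = {"berries": 300, "cucumber": 200, "cooked_rabbit": 150}
print(round(meal_energy_density(plate), 2))        # ~0.72 kcal/g
print(meal_energy_density({"big_mac": 220}))       # 2.6 kcal/g for the processed comparison
```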
There is substantial evidence that people consuming high energy-density diets are prone to overeating and they are at a greater risk of weight gain. Conversely, low caloric density diets tend to provide a greater satiety feeling at the same energy intake, and they have been shown effective at achieving weight loss in overweight individuals without explicit caloric restrictions.
Even some authors who may otherwise appear to be critical of the concept of Paleolithic diet have argued that high energy density of modern diets, as compared to ancestral/primate diets, contributes to the rate of diseases of affluence in the industrial world.
Micronutrient density
Fruits, vegetables, meat and organ meats, and seafood, which are staples of the hunter-gatherer diet, are more micronutrient-dense than refined sugars, grains, vegetable oils, and dairy products in relation to digestible energy. Consequently, the vitamin and mineral content of the diet is very high compared with a standard diet, in many cases a multiple of the RDA. Fish and seafood represent a particularly rich source of omega-3 fatty acids and other micronutrients, such as iodine, iron, zinc, copper, and selenium, that are crucial for proper brain function and development. Terrestrial animal foods, such as muscle, brain, bone marrow, thyroid gland, and other organs, also represent a primary source of these nutrients. Calcium-poor grains and legumes are excluded from the diet, although leafy greens such as kale and dandelion greens, as well as nuts such as almonds, are rich sources of calcium, and components in plants may make their more modest calcium content more easily absorbed than the calcium in high-calcium foods such as dairy. Two notable exceptions are calcium (see below) and vitamin D, both of which may be present in the diet in inadequate quantities. Modern humans require much more vitamin D than hunter-gatherers, because they do not get the same amount of exposure to sun. This need is commonly satisfied in developed countries by artificially fortifying dairy products with the vitamin. To avoid deficiency, a modern human on a hunter-gatherer diet would have to take artificial supplements of the vitamin, ensure adequate intake of some fatty fish, or increase the amount of exposure to sunlight (it has been estimated that 30 minutes of exposure to mid-day sun twice a week is adequate for most people).
Fiber content and glycemic load
Despite its relatively low carbohydrate content, the Paleolithic diet involves a substantial increase in consumption of fruit and vegetables, compared to the Western diet, potentially as high as 1.65 to 1.9 kg/day. This leads to fiber intake which is significantly larger than either current or recommended values. Hunter-gatherer diets, which rely on uncultivated, heavily fibrous fruit and vegetables, contain even more. Fiber intake in preagricultural diets is thought to have exceeded 100 g/day. This is dramatically higher than the actual current U.S. intake of 15 g/day.
Unrefined wild plant foods like those available to contemporary hunter-gatherers typically exhibit low glycemic indices. Moreover, dairy products, such as milk, have low glycemic indices, but are highly insulinotropic, with an insulin index similar to that of white bread. However, in fermented milk products, such as yogurt, the presence of organic acids may counteract the insulinotropic effect of milk in mixed meals. These dietary characteristics may lower risk of diabetes, obesity and other related syndrome X diseases by placing less stress on the pancreas to produce insulin due to staggered absorption of glucose, thus preventing insulin insensitivity.
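For reference, the glycemic load that such diets aim to keep low is conventionally calculated from the glycemic index and the available carbohydrate in a serving. A minimal sketch with illustrative (not source-derived) values:

```python
def glycemic_load(glycemic_index, available_carbs_g):
    """Conventional definition: GL = GI x available carbohydrate (g) / 100."""
    return glycemic_index * available_carbs_g / 100.0

# Illustrative values only; not measurements from the studies cited above.
print(glycemic_load(72, 50))   # refined, starchy food: GL = 36.0
print(glycemic_load(40, 15))   # a fruit serving:       GL = 6.0
```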
Sodium-potassium ratio
It has been estimated that people in the Paleolithic era consumed 11,000 mg of potassium and 700 mg of sodium daily. The modern Paleolithic diet includes neither processed foods (which often contains salt as a preservative) nor added salt as a condiment. The sodium intake of the diet (~726 mg) is lower than average U.S. values (3,271 mg) or recommended values (1,500 mg). Further, since potassium-rich fruits and vegetables compose ~30% of the daily energy, the potassium content (~9,062 mg) is nearly 3.5 times greater than average values (2,620 mg) in the U.S. diet.
Calcium and acid-base balance
Diets containing high amounts of animal products, animal protein, processed foods, and other foods that induce and sustain increased acidity of body fluid may contribute to the development of osteoporosis and renal stones, loss of muscle mass, and age-related renal insufficiency due to the body's use of calcium to buffer pH. The paleo diet may not contain the high levels of calcium recommended in the U.S. to prevent these effects. However, because of the absence of cereals and energy-dense, nutrient-poor foods in the ancestral hunter-gatherer diet—foods that displace base-yielding fruits and vegetables—that diet has been estimated to produce a net base load on the body, as opposed to a net acid load, which may reduce calcium excretion.
Bioactive substances and antinutrients
Furthermore, cereal grains, legumes and milk contain bioactive substances, such as gluten and casein, which have been implicated in the development of various health problems. Consumption of gluten, a component of certain grains, such as wheat, rye and barley, is known to have adverse health effects in individuals suffering from a range of gluten sensitivities, including celiac disease. Since the Paleolithic diet is devoid of cereal grains, it is free of gluten. The paleo diet is also casein-free. Casein, a protein found in milk and dairy products, may impair glucose tolerance in humans.
Compared to Paleolithic food groups, cereal grains and legumes contain high amounts of antinutrients, including alkylresorcinols, alpha-amylase inhibitors, protease inhibitors, lectins and phytates, substances known to interfere with the body's absorption of many key nutrients. Molecular-mimicking proteins, which are basically made up of strings of amino acids that closely resemble those of another totally different protein, are also found in grains and legumes, as well as milk and dairy products. Advocates of the Paleolithic diet have argued that these components of agrarian diets promote vitamin and mineral deficiencies and may explain the development of the "diseases of civilization" as well as a number of autoimmune-related diseases.
Archeological record
One line of evidence used to support the Stone Age diet is the decline in human health and body mass that occurred with the adoption of agriculture, at the end of the Paleolithic era. Associated with the introduction of domesticated and processed plant foods, such as cereal grains, in the human diet, there was, in many areas, a general decrease in body stature and dentition size, and an increase in dental caries rates. There is evidence of a general decline in health in some areas; whether the decline was caused by dietary change is debated academically.
Observational studies
Based on the subsistence patterns and biomarkers of hunter-gatherers studied in the last century, advocates argue that modern humans are well adapted to the diet of their Paleolithic ancestor. The diet of modern hunter-gatherer groups is believed to be representative of patterns for humans of fifty to twenty-five thousand years ago, and individuals from these and other technologically primitive societies, including those individuals who reach the age of 60 or beyond, seem to be largely free of the signs and symptoms of chronic disease (such as obesity, high blood pressure, nonobstructive coronary atherosclerosis, and insulin resistance) that universally afflict the elderly in western societies (with the exception of osteoarthritis, which afflicts both populations). Moreover, when these people adopt western diets, their health declines and they begin to exhibit signs and symptoms of "diseases of civilization". In one clinical study, stroke and ischaemic heart disease appeared to be absent in a population living on the island of Kitava, in Papua New Guinea, where a subsistence lifestyle, uninfluenced by western dietary habits, was still maintained.
One of the most frequent criticisms of the Paleolithic diet is that it is unlikely that preagricultural hunter-gatherers suffered from the diseases of modern civilization simply because they did not live long enough to develop these illnesses, which are typically associated with old age. According to S. Jay Olshansky and Bruce Carnes, "there is neither convincing evidence nor scientific logic to support the claim that adherence to a Paleolithic diet provides a longevity benefit." In response to this argument, advocates of the paleodiet state that while Paleolithic hunter-gatherers did have a short average life expectancy, modern human populations with lifestyles resembling that of our preagricultural ancestors have little or no diseases of affluence, despite sufficient numbers of elderly. In hunter-gatherer societies where demographic data is available, the elderly are present, but they tend to have high mortality rates and rarely survive past the age of 80, with causes of death (when known) ranging from injuries to measles and tuberculosis.
Critics further contend that food energy excess, rather than the consumption of specific novel foods, such as grains and dairy products, underlies the diseases of affluence. According to Geoffrey Cannon, science and health policy advisor to the World Cancer Research Fund, humans are designed to work hard physically to produce food for subsistence and to survive periods of acute food shortage, and are not adapted to a diet rich in energy-dense foods. Similarly, William R. Leonard, a professor of anthropology at Northwestern University, states that the health problems facing industrial societies stem not from deviations from a specific ancestral diet but from an imbalance between calories consumed and calories burned, a state of energy excess uncharacteristic of ancestral lifestyles.
Intervention studies
The first animal experiment on a Paleolithic diet suggested that this diet, as compared with a cereal-based diet, conferred higher insulin sensitivity, lower C-reactive protein and lower blood pressure in 24 domestic pigs. There was no difference in basal serum glucose. The first human clinical randomized controlled trial involved 29 people with glucose intolerance and ischemic heart disease, and it found that those on a Paleolithic diet had a greater improvement in glucose tolerance compared to those on a Mediterranean diet. Furthermore, the Paleolithic diet was found to be more satiating per calorie compared to the Mediterranean diet.
A clinical, randomized, controlled cross-over study in the primary care setting compared the Paleolithic diet with a commonly prescribed diet for type 2 diabetes. The Paleolithic diet resulted in lower mean values of HbA1c, triacylglycerol, diastolic blood pressure, body mass index, and waist circumference, and higher values of high-density lipoprotein, when compared to the diabetes diet. Glycemic control and other cardiovascular factors improved on both diets without significant differences. The Paleolithic diet was also lower in total energy, energy density, carbohydrate, dietary glycemic load and glycemic index, saturated fatty acids, and calcium, but higher in unsaturated fatty acids, dietary cholesterol, and some vitamins. Two clinical trials designed to test various physiological effects of the Paleolithic diet are currently underway, and the results of one completed trial have shown metabolic and physiologic improvements.

The European Journal of Clinical Nutrition published a study of a trial of the Paleolithic diet in 20 healthy volunteers. The study had no control group, and only 14 individuals completed the diet. Over three weeks there was an average weight reduction of 2.3 kg, an average reduction in waist circumference of 1.5 cm (about one-half inch), an average reduction in systolic blood pressure of 3 mm Hg, and a 72% reduction in plasminogen activator inhibitor-1 (which might translate into a reduced risk of heart attack and stroke). However, the NHS Knowledge Service pointed out that this study, like most human diet studies, relied on observational data, and concluded that the lack of a control group and the small sample size compromise its conclusions. With only 14 participants, the study lacks the statistical power to detect health improvements, and the simple fact that these individuals knew they were on a diet program may have made them more aware of their weight and exercise regimen, skewing the results.
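The statistical-power criticism can be made concrete with a quick calculation. The sketch below is not part of the studies described above; it estimates the two-sided power of a paired t-test with 14 participants, and the assumed standardized effect sizes and 5% significance level are illustrative choices rather than values reported by the trial.

```python
import numpy as np
from scipy import stats

def paired_t_power(effect_size, n, alpha=0.05):
    """Approximate two-sided power of a paired t-test using the
    noncentral t distribution."""
    df = n - 1
    ncp = effect_size * np.sqrt(n)           # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
    return (1 - stats.nct.cdf(t_crit, df, ncp)
            + stats.nct.cdf(-t_crit, df, ncp))

# Assumed standardized effect sizes, for illustration only
print(f"Power with n=14, d=0.5: {paired_t_power(0.5, 14):.2f}")
print(f"Power with n=14, d=0.8: {paired_t_power(0.8, 14):.2f}")
```

Under these assumptions, a medium-sized effect would be detected well under half the time, which is consistent with the NHS critique.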
See also
- Diabetic diet
- Inuit diet
- Mark Sisson
- Modern primitive
- Natural foods
- Nutritional genomics
- Paleolithic lifestyle
- Peter Ungar
- Prehistoric medicine
- Protein poisoning
- Raw feeding
- Ray Mears
- Roger MacDougall
- Vilhjalmur Stefansson
- Whole foods
- Lindeberg, Staffan (June 2005). "Palaeolithic diet ('stone age' diet)". Scandinavian Journal of Food & Nutrition 49 (2): 75–7. doi:10.1080/11026480510032043.
- Bryngelsson, Susanne; Asp, Nils-Georg (March 2005). "Popular diets, body weight and health: What is scientifically documented?". Scandinavian Journal of Food & Nutrition 49 (1): 15–20. doi:10.1080/11026480510031990.
- Cordain, Loren (Summer 2002). "The Nutritional Characteristics of a Contemporary Diet Based Upon Paleolithic Food Groups" (PDF). Journal of the American Nutraceutical Association 5 (5): 15–24.[unreliable medical source?]
- Lindeberg, Staffan; Cordain, Loren; Eaton, S. Boyd (September 2003). "Biological and Clinical Potential of a Palaeolithic Diet". Journal of Nutritional and Environmental Medicine 13 (3): 149–60. doi:10.1080/13590840310001619397.
- Voegtlin, Walter L. (1975). The stone age diet: Based on in-depth studies of human ecology and the diet of man. Vantage Press. ISBN 0-533-01314-3.[page needed]
- Smith, Emma (October 12, 2008). "The Ray Mears caveman diet". The Sunday Times. Retrieved November 1, 2008.
- Richards, Michael P. (December 2002). "A brief review of the archaeological evidence for Palaeolithic and Neolithic subsistence". European Journal of Clinical Nutrition 56 (12): 1270–78. doi:10.1038/sj.ejcn.1601646. PMID 12494313.
- Naugler, Christopher T. (September 1, 2008). "Evolutionary medicine: Update on the relevance to family practice". Canadian Family Physician 54 (9): 1265–9. PMC 2553465. PMID 18791103.
- Eaton, S.Boyd; Strassman, Beverly I; Nesse, Randolph M; Neel, James V; Ewald, Paul W; Williams, George C; Weder, Alan B; Eaton, Stanley B et al. (2002). "Evolutionary Health Promotion". Preventive Medicine 34 (2): 109–18. doi:10.1006/pmed.2001.0876. PMID 11817903.
- Cordain, Loren; Eaton, S Boyd; Sebastian, Anthony; Mann, Neil; Lindeberg, Staffan; Watkins, Bruce A; O’Keefe, James H; Brand-Miller, Janette (2005). "Origins and evolution of the Western diet: health implications for the 21st century". American Journal of Clinical Nutrition 81 (2): 341–54. PMID 15699220.
- Kligler, Benjamin & Lee, Roberta A. (eds.) (2004). "Paleolithic diet". Integrative medicine. McGraw-Hill Professional. pp. 139–40. ISBN 0-07-140239-X.
- Eaton, S.Boyd; Cordain, Loren; Lindeberg, Staffan (2002). "Evolutionary Health Promotion: A Consideration of Common Counterarguments". Preventive Medicine 34 (2): 119–23. doi:10.1006/pmed.2001.0966. PMID 11817904.
- Lindeberg S, Jönsson T, Granfeldt Y, Borgstrand E, Soffman J, Sjöström K, Ahrén B (September 2007). "A Palaeolithic diet improves glucose tolerance more than a Mediterranean-like diet in individuals with ischaemic heart disease" (PDF). Diabetologia 50 (9): 1795–807. doi:10.1007/s00125-007-0716-y. PMID 17583796.
- Frassetto, L A; Schloetter, M; Mietus-Synder, M; Morris, R C; Sebastian, A (2009). "Metabolic and physiologic improvements from consuming a paleolithic, hunter-gatherer type diet". European Journal of Clinical Nutrition 63 (8): 947–955. doi:10.1038/ejcn.2009.4. PMID 19209185.
- Eaton, S. Boyd (2007). "The ancestral human diet: What was it and should it be a paradigm for contemporary nutrition?". Proceedings of the Nutrition Society 65 (1): 1–6. doi:10.1079/PNS2005471. PMID 16441938.
- Cordain, Loren (June 15, 2011). "Dr. Cordain’s Rebuttal to U.S. News and World Report Top 20 Diets".[self-published source?]
- Nestle, Marion (May 1999). "Animal v. plant foods in human diets and health: is the historical record unequivocal?". Proceedings of the Nutrition Society 58 (2): 211–18. doi:10.1017/S0029665199000300. PMID 10466159.
- Cannon, Geoffrey (June 2006). "Out of the Box". Public Health Nutrition 9 (4): 411–14. doi:10.1079/PHN2006959.
- Milton, Katharine (2002). "Hunter-gatherer diets: wild foods signal relief from diseases of affluence (PDF)". In Ungar, Peter S. & Teaford, Mark F. Human Diet: Its Origins and Evolution. Westport, CT: Bergin and Garvey. pp. 111–22. ISBN 0-89789-736-6.
- "Caveman fad diet".
- Milton, Katharine (March 1, 2000). "Hunter-gatherer diets—A different perspective". The American Journal of Clinical Nutrition 71 (3): 665–67. PMID 10702155.
- Elton, Sarah (2008). "Environments, Adaptation, and Evolutionary Medicine: Should We Be Eating a Stone Age Diet". In Elton, Sarah; O'Higgins, Paul. Medicine and Evolution: Current Applications, Future Prospects. London: Taylor and Francis. pp. 9–34. ISBN 978-1-4200-5134-6.
- Ströhle, Alexander; Wolters, Maike; Hahn, Andreas (January 2007). "Carbohydrates and the diet–atherosclerosis connection—More between earth and heaven. Comment on the article 'The atherogenic potential of dietary carbohydrate'". Preventive Medicine 44 (1): 82–4. doi:10.1016/j.ypmed.2006.08.014. PMID 16997359.
- Nestle, Marion (March 2000). "Paleolithic diets: a sceptical view". Nutrition Bulletin 25 (1): 43–7. doi:10.1046/j.1467-3010.2000.00019.x.
- Thompson, Randall C., et al. (March 2013). "Atherosclerosis across 4000 years of human history: the Horus study of four ancient populations". The Lancet. doi:10.1016/S0140-6736(13)60598-X.
- Gorski, David (March 18, 2013). "It’s a part of my paleo fantasy, it’s a part of my paleo dream". Science Based Medicine. Retrieved March 20, 2013.
- "Best Diets Overall". U.S.News & World Report. 2012.
- "Paleo Diet". U.S.News & World Report. 2012.
- Lindeberg, Staffan (2010). Food and Western Disease: Health and Nutrition from an Evolutionary Perspective. Chichester, U.K.: Wiley-Blackwell. ISBN 1-4051-9771-4. OCLC 435728298.[page needed]
- Fallon, Sally; Enig, Mary G. (January 1, 2000). "Caveman Cuisine". Weston A. Price Foundation.
- "Functional and Structural Comparison of Man's Digestive Tract with that of a Dog and Sheep". Retrieved January 19, 2008.
- Audette, Ray V.; Gilchrist, Troy; Audette, Raymond V.; & Eades, Michael R. (November 23, 1999). NeanderThin: Eat Like a Caveman to Achieve a Lean, Strong, Healthy Body. New York: St. Martin's Paperbacks. ISBN 0-312-97591-0. Archived from the original on July 19, 2011. Retrieved December 15, 2011.[page needed]
- Eaton, S. Boyd; Konner, Melvin (1985). "Paleolithic Nutrition — A Consideration of Its Nature and Current Implications". The New England Journal of Medicine 312 (5): 283–9. doi:10.1056/NEJM198501313120505. PMID 2981409.
- Taylor, Mike (January 9, 2008). "Refined Food Bad! Caveman Diet Good!". TheStreet.com. Retrieved January 19, 2008.
- Eaton, S. Boyd; Shostak, Marjorie; & Konner, Melvin (1988). The Paleolithic Prescription: A Program of Diet & Exercise and a Design for Living. New York: Harper & Row. ISBN 0-06-015871-9.[non-primary source needed]
- Sirota, Lorraine Handler; Greenberg, George (1989). "Book reviews". Biofeedback and Self-Regulation 14 (4): 347–54. doi:10.1007/BF00999126.
- Gilman, Sander L.; Bauber, Joe (2007). "Paleolithic diet". In Gilman, Sander L. Diets and Dieting: A Cultural Encyclopedia. Routledge. pp. 209–11. ISBN 0-415-97420-8.
- Eaton, S. Boyd; Shostak, Marjorie; & Konner, Melvin (1989). Stone-Age Health Programme. Angus & Robertson Childrens. ISBN 0-207-16264-6.[non-primary source needed]
- Vines, Gail (August 26, 1989). "Palaeolithic recipe for the clean life / Review of 'The Stone-Age Health Programme' by S. Boyd Eaton, Marjorie Shostak and Melvin Konner". New Scientist. Retrieved January 19, 2008.
- Lindeberg, S; & Lundh, B (March 1993). "Apparent absence of stroke and ischaemic heart disease in a traditional Melanesian island: a clinical study in Kitava". Journal of Internal Medicine 233 (3): 269–75. doi:10.1111/j.1365-2796.1993.tb00986.x. PMID 8450295.
- "Kitava Study publications". PubMed, U.S. National Library of Medicine.[unreliable medical source?]
- Lindeberg, Staffan (2003). Maten och folksjukdomarna — ett evolutionsmedicinskt perspektiv (in Swedish). Lund: Studentlitteratur. ISBN 91-44-04167-5. OCLC 186108854.[non-primary source needed]
- Lindeberg, Staffan (2010). Food and Western Disease: Health and Nutrition from an Evolutionary Perspective. Chichester, U.K.: Wiley-Blackwell. ISBN 1-4051-9771-4. OCLC 435728298.
- Eades, Michael R. & Eades, Mary Dan (2000). The Protein Power Lifeplan. New York: Warner Books. ISBN 0-446-60824-6.[page needed]
- Atkins, Robert C. (1999). Dr Atkins' New Diet Revolution. Vermilion. ISBN 0-09-188948-0.[page needed]
- Worm, Nicolai (2002). Syndrom X oder ein Mammut auf den Teller. Mit Steinzeit-Diät aus det Wohl stands Falle (in German). Lünen: Systemed-Verlag. ISBN 3-927372-23-4.[page needed]
- Audette, Ray V.; Gilchrist, Troy; Audette, Raymond V.; & Eades, Michael R. (November 23, 1999). NeanderThin: Eat Like a Caveman to Achieve a Lean, Strong, Healthy Body. New York: St. Martin's Paperbacks. ISBN 0-312-97591-0. Archived from the original on July 19, 2011. Retrieved December 15, 2011.
- Cordain, Loren (2002). The Paleo Diet: Lose Weight and Get Healthy by Eating the Food You Were Designed to Eat. New York: Wiley. ISBN 0-471-26755-4.[page needed]
- Cordain, Loren & Friel, Joe (2005). The Paleo Diet for Athletes: A Nutritional Formula for Peak Athletic Performance. Rodale Books. ISBN 1-59486-089-0.[page needed]
- Wiss, Don. "Paleo Diet". The Paleolithic Diet Nutrition Page. Archived from the original on January 9, 1997. Retrieved December 15, 2011.
- Lindeberg, Staffan. "Home". Paleolithic Diet in Medical Nutrition. Retrieved January 19, 2008.
- Cordain, Loren. "The Science of Healthy Eating". The Paleo Diet. Retrieved January 19, 2008.
- James, Abel. "Is The Paleo Diet Too Extreme? What if I Don’t Want to be a Caveman?". The LeanBody Lifestyle. Retrieved March 1, 2012.
- Vogin, Gary (2000). "Eating Like a Caveman". WebMD. Retrieved August 3, 2008.
- Burfoot, Amby (February 11, 2005). "Should you be eating like a Caveman?". Runner's World. Retrieved January 19, 2008.
- Shreeve, Jimmy Lee (August 16, 2007). "The Stone Age Diet: Why I Eat Like a Caveman". Independent UK. Retrieved January 19, 2008.
- Tuttle, Erica (September 4, 2000). "Revolutionary Evolutionary Diets". FindArticles. Retrieved January 19, 2008.
- Mysterud, Iver (May 20, 2004). "Kosthold og evolusjon" [Diet and evolution]. Tidsskr Nor Lægeforen (in Norwegian) 124 (10): 1415.
- Cordain, Loren. "A Sample of Paleo Recipes". The Paleo Diet. Archived from the original on January 11, 2008. Retrieved January 19, 2008.
- Lindeberg, Staffan. "Frequently Asked Questions: What can I eat?". Paleolithic Diet in Medical Nutrition. Retrieved January 19, 2008.
- O'Keefe, James H.; & Cordain, Loren (January 2004). "Cardiovascular disease resulting from a diet and lifestyle at odds with our Paleolithic genome: how to become a 21st-century hunter-gatherer" (PDF). Mayo Clinic Proceedings 79 (1): 101–08. doi:10.4065/79.1.101. PMID 14708953.
- Lindeberg, Staffan (2009). "Modern human physiology with respect to evolutionary adaptations that relate to diet in the past". In Hublin, Jean-Jacques; & Richards, Michael P. The Evolution of Hominin Diets: Integrating Approaches to the Study of Palaeolithic Subsistence. Springer. ISBN 978-1-4020-9698-3.
- Cordain, Loren (2006). "Implications of Plio-Pleistocene Hominin Diets for Modern Humans (PDF)". In Ungar, Peter S. Evolution of the Human Diet: The Known, the Unknown, and the Unknowable. Oxford, USA: Oxford University Press. pp. 363–83. ISBN 0-19-518346-0.
- The Paleolithic/Paleo/Caveman/Primal Diet Defined
- Cordain L, Watkins BA, Florant GL, Kelher M, Rogers L, Li Y (March 2002). "Fatty acid analysis of wild ruminant tissues: evolutionary implications for reducing diet-related chronic disease". European Journal of Clinical Nutrition 56 (3): 181–91. doi:10.1038/sj.ejcn.1601307. PMID 11960292.
- Cordain L, Eaton SB, Sebastian A, Mann N, Lindeberg S, Watkins BA, O'Keefe JH, Brand-Miller J (1 August 2005). "Reply to SC Cunnane" (PDF). The American Journal of Clinical Nutrition 82 (2): 483–84. PMID 16087997.
- Eaton, S. Boyd (2006). "Preagricultural Diets and Evolutionary Health Promotion". In Peter Ungar. Evolution of the Human Diet: The Known, the Unknown, and the Unknowable. Oxford, USA: Oxford University Press. p. 400. ISBN 0-19-518346-0.
- Raw Paleolithic Diet & Lifestyle — Raw Paleo Lifestyle for Health
- Raw Paleo Diet - RVAF Systems Overview
- Cordain L, Miller JB, Eaton SB, Mann N, Holt SH, Speth JD (1 March 2000). "Plant-animal subsistence ratios and macronutrient energy estimations in worldwide hunter-gatherer diets". The American Journal of Clinical Nutrition 71 (3): 682–92. PMID 10702160.
- Cordain L, Eaton SB, Miller JB, Mann N, Hill K (March 2002). "The paradoxical nature of hunter-gatherer diets: meat based, yet non-atherogenic" (PDF). European Journal of Clinical Nutrition 56 (Suppl 1): S42–52. doi:10.1038/sj.ejcn.1601353. PMID 11965522.
- Eaton SB, Eaton SB 3rd, Konner MJ (1997). "Paleolithic nutrition revisited: a twelve-year retrospective on its nature and implications" (PDF). European Journal of Clinical Nutrition 51 (4): 207–16. doi:10.1038/sj.ejcn.1600389. PMID 9104571.
- Eaton SB, Cordain L, Eaton SB (2001). "An evolutionary foundation for health promotion" (PDF). World Review of Nutrition and Dietetics. World Review of Nutrition and Dietetics 90: 5–12. doi:10.1159/000059815. ISBN 3-8055-7211-5. PMID 11545045.
- Frank W Booth et al. (2002). "Exercise and gene expression: physiological regulation of the human genome through physical activity". J Physiol 543 (Pt 2): 399–411. doi:10.1113/jphysiol.2002.019265. PMC 2290514. PMID 12205177.
- Cordain L et al. (1998). "Physical activity, energy expenditure and fitness: an evolutionary perspective". International Journal of Sports Medicine.
- S. Boyd Eaton et al. (2009). "Evolution, body composition, insulin receptor competition, and insulin resistance". Preventive Medicine.
- (January 2004). "Eating, exercise, and "thrifty" genotypes: connecting the dots toward an evolutionary understanding of modern chronic diseases". J. Appl. Physiol. 96 (1): 3–10. doi:10.1152/japplphysiol.00757.2003. PMID 14660491.
- W. H. M. Saris (2003). "How much physical activity is enough to prevent unhealthy weight gain? Outcome of the IASO 1st Stock Conference and consensus statement". Obesity.
- Eaton, SB; Eaton, SB (2003). "An evolutionary perspective on human physical activity: Implications for health". Comparative biochemistry and physiology. Part A, Molecular & integrative physiology 136 (1): 153–9. doi:10.1016/S1095-6433(03)00208-3. PMID 14527637.
- Gray, Russell D. (2001). "Selfish genes or developmental systems?". In Singh, Rama S.; Krimbas, Costas B.; Paul, Diane B.; & Beatty, John. Thinking about Evolution: Historical, Philosophical and Political Perspectives. Cambridge: Cambridge University Press. pp. 184–207. ISBN 0-521-62070-8.
- Santos, J. L.; Saus, E.; Smalley, S. V.; Cataldo, L. R.; Alberti, G.; Parada, J.; Gratacòs, M.; Estivill, X. (2012). "Copy Number Polymorphism of the Salivary Amylase Gene: Implications in Human Nutrition Research". Journal of Nutrigenetics and Nutrigenomics 5 (3): 117–131. doi:10.1159/000339951. PMID 22965187.
- Wilson, David S. (1994). "Adaptive genetic variation and human evolutionary psychology". Ethology and Sociobiology 15 (4): 219–35. doi:10.1016/0162-3095(94)90015-9.
- Kopp, Wolfgang (January 2007). "Reply to the comment of Ströhle et al". Preventive Medicine 44 (1): 84–5. doi:10.1016/j.ypmed.2006.09.003.
- Hawks J, Wang ET, Cochran GM, Harpending HC, Moyzis RK (December 2007). "Recent acceleration of human adaptive evolution". Proc Natl Acad Sci U S A 104 (52): 20753–8. doi:10.1073/pnas.0707650104. PMC 2410101. PMID 18087044.
- Mahner, Martin; & Bunge, Mario (2001). "Function and functionalism: a synthetic perspective". Philosophy of Science 68 (1): 75–94. doi:10.1086/392867.
- Ströhle, Alexander; & Hahn, Andreas (2006). "Evolutionary nutrition science and dietary recommendations of the Stone Age—The ideal answer to present-day nutritional questions or reason for criticism? Part 1: Concept, arguments and paleoanthropological findings" (PDF). Ernährungs-Umschau (in German) 53 (1): 10–16. Abstract (in English)
- Milton, Katharine; & Demment, Montague W. (1 September 1988). "Digestion and passage kinetics of chimpanzees fed high and low fiber diets and comparison with human data" (PDF). Journal of Nutrition 118 (9): 1082–88. PMID 2843616.
- Milton, Katharine (June 1999). "Nutritional characteristics of wild primate foods: do the diets of our closest living relatives have lessons for us?" (PDF). Nutrition 15 (6): 488–98. doi:10.1016/S0899-9007(99)00078-7. PMID 10378206.
- Milton, Katharine (1999). "A hypothesis to explain the role of meat-eating in human evolution" (PDF). Evolutionary Anthropology 8 (1): 11–21. doi:10.1002/(SICI)1520-6505(1999)8:1<11::AID-EVAN6>3.0.CO;2-M.
- Milton, Katharine (2000). "Back to basics: why foods of wild primates have relevance for modern human health" (PDF). Nutrition 16 (7–8): 481–83. doi:10.1016/S0899-9007(00)00293-8. PMID 10906529.
- Piperno, D; Weiss, E., Hols, I., Nadel, D (2004). "Processing of wild cereal grains in the Upper Paleolithic revealed by starch grain analysis". Nature 430 (7000): 670–673. doi:10.1038/nature02734. PMID 15295598.
- Aranguren, B; Becattani, R., Lippi, M.M., Revedin, A (2007). "Grinding flour in Upper Palaeolithic Europe (25 000 years bp)". Antiquity 81: 845–855.
- Revedin, Anna; Aranguren, B; Becattini, R; Longo, L; Marconi, E; Lippi, MM; Skakun, N; Sinitsyn, A et al. (2010). "Thirty thousand-year-old evidence of plant food processing". Proc Natl Acad Sci U S A 107 (44): 18815–9. doi:10.1073/pnas.1006993107. PMC 2973873. PMID 20956317.
- Julio Mercader (2009) 'Mozambican Grass Seed Consumption During the Middle Stone Age', Science, 18 December 2009.
- Murphy, D (2007). People, Plants and Genes: The Story of Crops and Humanity. Oxford: Oxford University Press. ISBN 0-19-920713-5.
- Marlowe FW (2005). "Hunter-gatherers and human evolution" (PDF). Evolutionary Anthropology 14 (2): 15294. doi:10.1002/evan.20046.
- Kolbert, Elizabeth. "Flesh of Your Flesh", The New Yorker, November 9, 2009, accessed January, 27, 2011.
- African Bone Tools Dispute Key Idea About Human Evolution National Geographic News article.
- Collins, Desmond (1973). Background to archaeology: Britain in its European setting (Revised ed.). Cambridge University Press. ISBN 0-521-20155-1.
- Donna Hart, Robert W. Sussman. Man the Hunted. ISBN 0-8133-3936-7.
- "Chimp hunting and flesh-eating".
- "Chimpanzees 'hunt using spears'". BBC News. February 22, 2007.
- Leslie C. Aiello, Peter Wheeler (1995). "The expensive-tissue hypothesis". Current Anthropology.
- Kris-Etherton, PM; Harris, WS; Appel, LJ; Nutrition, Committee (2003). "Fish Consumption, Fish Oil, Omega-3 Fatty Acids, and Cardiovascular Disease". Arteriosclerosis, thrombosis, and vascular biology 23 (2): e20–30. doi:10.1161/01.ATV.0000038493.65177.94. PMID 12588785.
- Crawford, M. A. et al. (1999). "Evidence for the Unique Function of Docosahexaenoic Acid (DHA) During the Evolution of the Modern Hominid Brain". Lipids: S39–S47.
- Cordain, L.; Watkins, B. A.; Mann, N. J. (2001). "Fatty acid composition and energy density of foods available to African hominids: evolutionary implications for human brain development". World Review of Nutrition and Dietetics: 144–161.
- "Dietary Fats: Total Fat and Fatty Acids".
- Bryce A. Carlson and John D. Kingston (2007). Docosahexaenoic Acid Biosynthesis and Dietary Contingency: Encephalization Without Aquatic Constraint.
- Fairweather-Tait, Susan J. (October 29, 2003). "Human nutrition and food research: opportunities and challenges in the post-genomic era". Phil. Trans. R. Soc. B 358 (1438): 1709–27. doi:10.1098/rstb.2003.1377. PMC 1693270. PMID 14561328.
- Jönsson T, Olsson S, Ahrén B, Bøg-Hansen TC, Dole A, Lindeberg S (2005). "Agrarian diet and diseases of affluence – Do evolutionary novel dietary lectins cause leptin resistance?". BMC Endocrine Disorders 5: 10. doi:10.1186/1472-6823-5-10. PMC 1326203. PMID 16336696.
- Leach, Jeff D. (2007). "Prebiotics in Ancient Diet". Food Science and Technology Bulletin 4 (1): 1–8. doi:10.1616/1476-2137.14801.
- Collins, Christopher (January–March 2007). "Said Another Way: Stroke, Evolution, and the Rainforests: An Ancient Approach to Modern Health Care". Nursing Forum 42 (1): 39–44. doi:10.1111/j.1744-6198.2007.00064.x. PMID 17257394.
- Bellisari A. (March 2008). "Evolutionary origins of obesity". Obesity Reviews 9 (2): 165–180. doi:10.1111/j.1467-789X.2007.00392.x. PMID 18257754.
- Strandvik, B. Eriksson, S. Garemo, M. Palsdottir, V. Samples, S. Pickova, J (March 4, 2008). "Is the relatively low intake of omega-3 fatty acids in Western diet contributing to the obesity epidemics?". Lipid Technology 20 (3): 57–59. doi:10.1002/lite.200800009.
- Wood LE (October 2006). "Obesity, waist–hip ratio and hunter–gatherers". BJOG: an International Journal of Obstetrics & Gynaecology 113 (10): 1110–16. doi:10.1111/j.1471-0528.2006.01070.x. PMID 16972857.
- O'Keefe JH Jr, Cordain L, Harris WH, Moe RM, Vogel R (June 2004). "Optimal low-density lipoprotein is 50 to 70 mg/dl: lower is better and physiologically normal". Journal of the American College of Cardiology (American College of Cardiology) 43 (11): 2142–46. doi:10.1016/j.jacc.2004.03.046. PMID 15172426.
- O'Keefe JH Jr, Cordain L, Jones PG, Abuissa H. (July 2006). "Coronary artery disease prognosis and C-reactive protein levels improve in proportion to percent lowering of low-density lipoprotein". The American Journal of Cardiology 98 (1): 135–39. doi:10.1016/j.amjcard.2006.01.062. PMID 16784936.
- Kopp, Wolfgang (May 2006). "The atherogenic potential of dietary carbohydrate". Preventive Medicine 42 (5): 336–42. doi:10.1016/j.ypmed.2006.02.003. PMID 16540158.
- Tekol, Yalcin (April 2008). "Maternal and infantile dietary salt exposure may cause hypertension later in life". Birth Defects Research Part B: Developmental and Reproductive Toxicology 83 (2): 77–79. doi:10.1002/bdrb.20149. PMID 18330898.
- Dedoussis GV, Kaliora AC, Panagiotakos DB (Spring 2007). "Genes, Diet and Type 2 Diabetes Mellitus: A Review". Review of Diabetic Studies 4 (1): 13–24. doi:10.1900/RDS.2007.4.13. PMC 1892523. PMID 17565412.
- Haag, Marianne; & Dippenaar, Nola (2005). "Dietary fats, fatty acids and insulin resistance: short review of a multifaceted connection". Medical Science Monitor 11 (12): RA359–367. PMID 16319806.
- Sebastian A, Frassetto LA, Sellmeyer DE, Merriam RL, Morris RC Jr (1 December 2002). "Estimation of the net acid load of the diet of ancestral preagricultural Homo sapiens and their hominid ancestors". The American Journal of Clinical Nutrition 76 (6): 1308–16. PMID 12450898.
- Morris RC Jr, Schmidlin O, Frassetto LA, Sebastian A (June 2006). "Relationship and interaction between sodium and potassium". Journal of the American College of Nutrition 25 (3): 262S–70S. PMID 16772638.
- Cordain, Loren (1999). "Cereal grains: humanity's double-edged sword" (PDF). World review of nutrition and dietetics. World Review of Nutrition and Dietetics 84: 19–73. doi:10.1159/000059677. ISBN 3-8055-6827-4. PMID 10489816.
- Bostick, Roberd M. (2001). "Diet and nutrition in the etiology and primary prevention of colon cancer". In Bendich, Adrianne; Deckelbaum, Richard J. Preventive Nutrition: The Comprehensive Guide for Health Professionals. Humana Press. pp. 47–98. ISBN 0-89603-911-0.
- Lawlor, Debbie A; & Ness, Andy R (April 2003). "Commentary: The rough world of nutritional epidemiology: Does dietary fibre prevent large bowel cancer?". International Journal of Epidemiology 32 (2): 239–43. doi:10.1093/ije/dyg060. PMID 12714543.
- Leach, Jeff D. (January 2007). "Evolutionary perspective on dietary intake of fibre and colorectal cancer". European Journal of Clinical Nutrition 61 (1): 140–42. doi:10.1038/sj.ejcn.1602486. PMID 16855539.
- Cordain L, Eaton SB, Brand Miller J, Lindeberg S, Jensen C (April 2002). "An evolutionary analysis of the etiology and pathogenesis of juvenile-onset myopia". Acta Ophthalmologica Scandinavica 80 (2): 125–35. doi:10.1034/j.1600-0420.2002.800203.x. PMID 11952477.
- Cordain L, Lindeberg S, Hurtado M, Hill K, Eaton SB, Brand-Miller J (December 2002). "Acne vulgaris: a disease of Western civilization". Archives of Dermatology 138 (12): 1584–90. doi:10.1001/archderm.138.12.1584. PMID 12472346.
- Cordain, Loren (June 2005). "Implications for the role of diet in acne" (PDF). Seminars in Cutaneous Medicine and Surgery 24 (2): 84–91. doi:10.1016/j.sder.2005.04.002. PMID 16092796.
- Cordain, Loren (2006). "Dietary implications for the development of acne: a shifting paradigm (PDF)". In Bedlow, J. US Dermatology Review 2006—Issue II. London: Touch Briefings Publications.
- Keri, Jonette E; Nijhawan, Rajiv (August 2008). "Diet and acne". Expert Review of Dermatology 3 (4): 437–40. doi:10.1586/174698188.8.131.527.
- Volker, Dianne; & NG, Jade (November 2006). "Depression: Does nutrition have an adjunctive treatment role?". Nutrition & Dietetics 63 (4): 213–226. doi:10.1111/j.1747-0080.2006.00109.x.
- Cunnane, Stephen C. (1 August 2005). "Origins and evolution of the Western diet: implications of iodine and seafood intakes for the human brain". The American Journal of Clinical Nutrition 82 (2): 483; author reply 483–4. PMID 16087997.
- Solomons, Noel W (2008). "National food fortification: a dialogue with reference to Asia: balanced advocacy" (PDF). Asia Pacific Journal of Clinical Nutrition 17 (Suppl 1): 20–3. PMID 18296293.
- Friis, Henrik (February 2007). "International nutrition and health". Danish Medical Bulletin 54 (1): 55–7. PMID 17349228.
- Mann, Neil (September 2007). "Meat in the human diet: an anthropological perspective" (PDF). Nutrition & Dietetics 64 (4): S102–S107. doi:10.1111/j.1747-0080.2007.00194.x.
- Cordain L, Miller JB, Eaton SB, Mann N (1 December 2000). "Macronutrient estimations in hunter-gatherer diets". The American Journal of Clinical Nutrition 72 (6): 1589–92. PMID 11101497.
- Westman EC, Feinman RD, Mavropoulos JC, Vernon MC, Volek JS, Wortman JA, Yancy WS, Phinney SD (1 August 2007). "Low-carbohydrate nutrition and metabolism". The American Journal of Clinical Nutrition 86 (2): 276–84. PMID 17684196.
- Colagiuri, Stephen; & Brand-Miller, Jennie (March 2002). "The 'carnivore connection'—evolutionary aspects of insulin resistance" (PDF). European Journal of Clinical Nutrition 56 (1): S30–5. doi:10.1038/sj.ejcn.1601351. PMID 11965520.
- Plaskett, L. G. (September 2003). "On the Essentiality of Dietary Carbohydrate". Journal of Nutritional & Environmental Medicine 13 (3): 161–168. doi:10.1080/13590840310001619405.
- Pérez-Guisado, J (2008). "Ketogenic diets: Additional benefits to the weight loss and unfounded secondary effects". Archivos latinoamericanos de nutricion 58 (4): 323–9. PMID 19368291.
- Westman, EC; Yancy Jr, WS; Mavropoulos, JC; Marquart, M; McDuffie, JR (2008). "The effect of a low-carbohydrate, ketogenic diet versus a low-glycemic index diet on glycemic control in type 2 diabetes mellitus". Nutrition & metabolism 5: 36. doi:10.1186/1743-7075-5-36. PMC 2633336. PMID 19099589.
- Ungar, Peter S.; Grine, Frederick E.; & Teaford, Mark F. (October 2006). "Diet in Early Homo: A Review of the Evidence and a New Model of Adaptive Versatility". Annual Review of Anthropology 35 (1): 209–228. doi:10.1146/annurev.anthro.35.081705.123153.
- Milton, Katharine; Miller, JB; Eaton, SB; Mann, N (1 December 2000). "Reply to L Cordain et al" (PDF). The American Journal of Clinical Nutrition 72 (6): 1590–92. PMID 11101497.
- Walker, Alexander RP (1 February 2001). "Are health and ill-health lessons from hunter-gatherers currently relevant?". The American Journal of Clinical Nutrition 73 (2): 353–56. PMID 11157335.
- Cordain, Loren (2006). "Saturated fat consumption in ancestral human diets: implications for contemporary intakes". In Meskin, Mark S.; Bidlack, Wayne R.; & Randolph, R. Keith. Phytochemicals: Nutrient-Gene Interactions. CRC Press. pp. 115–26. ISBN 0-8493-4180-9.
- Simopoulos, Artemis P. (2006). "Evolutionary aspects of diet, the omega-6:omega-3 ratio, and gene expression". In Meskin, Mark S.; Bidlack, Wayne R.; & Randolph, R. Keith. Phytochemicals: Nutrient-Gene Interactions. CRC Press. pp. 137–160. ISBN 0-8493-4180-9.
- Mann, NJ; Ponnampalam, EN; Yep, Y; Sinclair, AJ (2003). "Feeding regimes affect fatty acid composition in Australian beef cattle". Asia Pacific journal of clinical nutrition. 12 Suppl: S38. PMID 15023647.
- Longe, Jacqueline L. (2007). The Gale Encyclopedia of Diets: A Guide to Health and Nutrition. Gale Cengage. ISBN 1-4144-2991-6.
- Ströhle, A.; Hahn, A.; Sebastian, A. (2010). "Estimation of the diet-dependent net acid load in 229 worldwide historically studied hunter-gatherer societies". The American journal of clinical nutrition 91 (2): 406–412. doi:10.3945/ajcn.2009.28637. PMID 20042527.
- Prentice, A. M.; Jebb, S. A. (2003). "Fast foods, energy density and obesity: A possible mechanistic link". Obesity Reviews 4 (4): 187–94. doi:10.1046/j.1467-789X.2003.00117.x. PMID 14649369.
- Rolls, Barbara. The Volumetrics Eating Plan: Techniques and Recipes for Feeling Full on Fewer Calories. ISBN 0-06-073730-1.[page needed]
- "McDonald's USA Nutrition Facts for Popular Menu Items". McDonald's. March 12, 2012.
- Bell, Elizabeth A; Castellanos, Victoria H; Pelkman, Christine L; Thorwart, Michelle L; Rolls, Barbara J (1998). "Energy density of foods affects energy intake in normal-weight women". The American Journal of Clinical Nutrition 67 (3): 412–20. PMID 9497184.
- Drewnowski, Adam; Darmon, Nicole (2005). "The economics of obesity: Dietary energy density and energy cost". The American Journal of Clinical Nutrition 82 (1 Suppl): 265S–273S. PMID 16002835.
- Ello-Martin, Julia A; Roe, Liane S; Ledikwe, Jenny H; Beach, Amanda M; Rolls, Barbara J (2007). "Dietary energy density in the treatment of obesity: A year-long trial comparing 2 weight-loss diets". The American Journal of Clinical Nutrition 85 (6): 1465–77. PMC 2018610. PMID 17556681.
- "Office of Dietary Supplements fact sheet: Calcium".
- Heaney, Robert P. (2001). "Calcium intake and the prevention of chronic disease". In Wilson, Ted; Temple, Norman J. Nutritional Health: Strategies for Disease Prevention. Humana Press. pp. 31–50. ISBN 0-89603-864-5.
- Heaney, Robert P. (August 2006). "Calcium intake and disease prevention". Arquivos Brasileiros de Endocrinologia & Metabologia 50 (4): 685–693. doi:10.1590/S0004-27302006000400014.
- Heaney, Robert P. (2006). "Calcium metabolism". In Schulz, Richard. Encyclopedia of Aging: A Comprehensive Resource in Gerontology and Geriatrics. Springer. pp. 146–147. ISBN 0-8261-4843-3.
- "Dietary Supplement Fact Sheet: Vitamin D". Office of Dietary Supplements (ODS). National Institutes of Health (NIH). Retrieved 2010-04-11.
- Paul Insel, Don Ross, Kimberley McMahon, Melissa Bernstein (2010). Nutrition. p. 410. ISBN 0-7637-7663-7.
- S. Boyd Eaton, Stanley B. Eaton III, Andrew J. Sinclair, Loren Cordain, Neil J. Mann (1998). "Dietary intake of long-chain polyunsaturated fatty acids during the Paleolithic". World Rev Nutr Diet.
- Foster-Powell K, Holt SH, Brand-Miller J (1 July 2002). "International table of glycemic index and glycemic load values: 2002". The American Journal of Clinical Nutrition 76 (1): 5–56. PMID 12081815.
- Liljeberg Elmståhl H.; & Björck, Inger ME (2001). "Milk as a supplement to mixed meals may elevate postprandial insulinaemia" (PDF). European journal of clinical nutrition 55 (11): 994–99. doi:10.1038/sj.ejcn.1601259. PMID 11641749.
- Hoyt G, Hickey MS, Cordain L (2005). "Dissociation of the glycaemic and insulinaemic responses to whole and skimmed milk". British Journal of Nutrition 93 (2): 175–77. doi:10.1079/BJN20041304. PMID 15788109.
- Östman E Liljeberg H Björck. "Inconsistency between glycemic and insulinemic responses to". Retrieved 6 September 2011.
- Glucose-dense processed (junk) food is absorbed quickly; because the resulting high blood glucose is not oxidised as rapidly, free radicals are released that inflame the linings of blood vessels, a stated mechanism for atherosclerosis and cardiovascular disease, hence the importance of fibre-rich food. Cordain L, Eades MR, Eades MD (2003). "Hyperinsulinemic diseases of civilization: more than just Syndrome X" (PDF). Comparative Biochemistry and Physiology Part A: Molecular & Integrative Physiology 136 (1): 95–112. doi:10.1016/S1095-6433(03)00011-4. PMID 14527633.
- Frassetto LA, Morris RC Jr, Sellmeyer DE, Sebastian A (February 2008). "Adverse effects of sodium chloride on bone in the aging human population resulting from habitual consumption of typical American diets". Journal of Nutrition 138 (2): 419S–22S. PMID 18203914.
- Remer, T.; Manz, F. (1995). "Potential Renal Acid Load of Foods and its Influence on Urine pH". Journal of the American Dietetic Association 95 (7): 791–797. doi:10.1016/S0002-8223(95)00219-7. PMID 7797810.
- Uriel S. Barzel and Linda K. Massey (1998). "Excess Dietary Protein Can Adversely Affect Bone". The Journal of Nutrition 128 (6): 1051–1053.
- Frassetto, L. A.; Morris Jr, R. C.; Sebastian, A. (2006). "A practical approach to the balance between acid production and renal acid excretion in humans". Journal of nephrology. 19 Suppl 9: S33–S40. PMID 16736439.
- Larsen, Clark Spencer (1 November 2003). "Animal source foods and human health during evolution". Journal of Nutrition 133 (11, Suppl 2): 3893S–3897S. PMID 14672287.
- Hermanussen, Michael; Poustka, Fritz (July–September 2003). "Stature of early Europeans". Hormones (Athens) 2 (3): 175–8. doi:10.1159/000079404. PMID 17003019.
- Eaton, S. Boyd; Cordain, Loren; & Sebastian, Anthony (2007). "The Ancestral Biomedical Environment (PDF)". In Aird, William C. Endothelial Biomedicine. Cambridge University Press. pp. 129–34. ISBN 0-521-85376-1.
- Eaton SB, Konner M, Shostak M (April 1988). "Stone agers in the fast lane: chronic degenerative diseases in evolutionary perspective" (PDF). The American Journal of Medicine 84 (4): 739–49. doi:10.1016/0002-9343(88)90113-1. PMID 3135745.
- Eaton, S. Boyd & Eaton, Stanley. B 3rd (1999). "The evolutionary context of chronic degenerative diseases". In Stearns, Stephen C. Evolution in health and disease. Oxford: Oxford University Press. pp. 251–59. ISBN 0-19-850445-4.
- Trowell, Hugh C. & Burkett, Denis P. (1981). Western diseases: their emergence and prevention. Cambridge, MA: Harvard University Press. xiii–xvi. ISBN 0-674-95020-8.
- Lindeberg S, Eliasson M, Lindahl B, Ahrén B (October 1999). "Low serum insulin in traditional Pacific Islanders—The Kitava study". Metabolism 48 (10): 1216–19. doi:10.1016/S0026-0495(99)90258-5. PMID 10535381.
- Solomons, Noel W. (1 March 2000). "Book Review—Evolutionary Aspects of Nutrition and Health: Diet, Exercise, Genetics and Chronic Disease". The American Journal of Clinical Nutrition 71 (3): 854–55.
- Cannon, Geoffrey (August 2007). "Drugs and bugs, and other stories [Out of the Box]". Public Health Nutrition 10 (8): 758–61. doi:10.1017/S1368980007770568.
- Olshansky, S. Jay; Carnes, Bruce A. (2002). The Quest for Immortality: Science at the Frontiers of Aging. W. W. Norton & Company. pp. 188–191. ISBN 0-393-32327-7.
- Leach, Jeff D. (2007). "Paleo Longevity Redux (Letters to the Editor)". Public Health Nutrition 10 (11). doi:10.1017/S1368980007814492.
- Gurven, Michael; Kaplan, Hillard. "Longevity Among Hunter-Gatherers: A Cross-Cultural Examination" (PDF). http://www.anth.ucsb.edu/faculty/gurven/papers/GurvenKaplan2007pdr.pdf
- Leonard, William R. (December 2002). "Food for thought: Dietary change was a driving force in human evolution" (PDF). Scientific American 287 (6): 106–15. PMID 12469653.
- Uauy, Ricardo; & Díaz, Erik (October 2005). "Consequences of food energy excess and positive energy balance". Public Health Nutrition 8 (7A): 1077–99. doi:10.1079/PHN2005797. PMID 16277821.
- Jönsson T, Ahrén B, Pacini G, Sundler F, Wierup N, Steen S, Sjöberg T, Ugander M, Frostegård J, Göransson L, Lindeberg S (2006). "A Paleolithic diet confers higher insulin sensitivity, lower C-reactive protein and lower blood pressure than a cereal-based diet in domestic pigs". Nutrition & Metabolism 3 (39): 39. doi:10.1186/1743-7075-3-39. PMC 1635051. PMID 17081292.
- Magnusson, Per A (December 18, 2007). "Paleolitisk kost ger bättre glukostolerans än medelhavskost". Läkartidningen (in Swedish) 104 (51–52): 3852.
- Jönsson T, Granfeldt Y, Erlanson-Albertsson C, Ahrén B, Lindeberg S (November 2010). "A Paleolithic diet is more satiating per calorie than a mediterranean-like diet in individuals with ischemic heart disease" (PDF). Nutr Metab (Lond) 7 (1): 85. doi:10.1186/1743-7075-7-85. PMC 3009971. PMID 21118562.
- Jonsson T, Granfeldt Y, Ahren B, Branell UC, Palsson G, Hansson A, Lindeberg S (2009). "Beneficial effects of a Paleolithic diet on cardiovascular risk factors in type 2 diabetes: a randomized cross-over pilot study". Cardiovascular Diabetology 8 (1): 35–49. doi:10.1186/1475-2840-8-35. PMC 2724493. PMID 19604407.
- ClinicalTrials.gov NCT00548782 Paleolithic Diet and Exercise Study
- ClinicalTrials.gov NCT00692536 Diet Composition - Metabolic Regulation and Long-term Compliance (KNOTA)
- ClinicalTrials.gov NCT00360516 Paleolithic Diet and Exercise Study
- Osterdahl M, Kocturk T, Koochek A, Wändell PE (May 2008). "Effects of a short-term intervention with a Paleolithic diet in healthy volunteers". European Journal of Clinical Nutrition 62 (5): 682–85. doi:10.1038/sj.ejcn.1602790. PMID 17522610.
- NHS Knowledge Service (May 9, 2008). "Caveman fad diet". NHS Choices. Retrieved August 1, 2008.
PUBLIC HEALTH ASSESSMENT
WASHINGTON NAVY YARD
WASHINGTON, DISTRICT OF COLUMBIA
Washington Navy Yard (WNY), an active military facility, encompasses 63.3 acres of land in southeastern Washington, D.C. It lies on the Anacostia River in a heavily urbanized area with industrial, commercial, residential, and vacant properties in the immediate vicinity. Since its inception in 1799, WNY has supported diverse functions, including shipbuilding (in the 1800s), ordnance research and production (mid-1800s to 1962), and administrative duties (1962 to present). WNY employed nearly 25,000 on-site workers at its peak operation during World War II. Currently, approximately 5,400 military and civilian personnel work at WNY.
WNY was proposed for listing on the U.S. Environmental Protection Agency (EPA) National Priorities List in 1998, primarily because of contamination detected in the adjacent Anacostia River, on-site sediment, and on-site soil (USAF 1998). Past activities at WNY have also impacted groundwater underlying the property and contributed to contamination found in fish from the Anacostia River. The primary contaminants of concern are metals (metals in groundwater and lead in surface soil), polychlorinated biphenyls (PCBs) (in sediment and fish), and dioxins (in sediment). These contaminants, as well as some petroleum hydrocarbons, pesticides, and other semi-volatile organic compounds, have been detected at levels above the Agency for Toxic Substances and Disease Registry's (ATSDR) health-based comparison values.
ATSDR conducted site visits in February and September of 1999. ATSDR learned that local community members had expressed concern about the environmental quality of the Anacostia River in the WNY vicinity, but did not identify any specific community health concerns attributed to WNY.
ATSDR reviewed and evaluated groundwater data. Metals were detected slightly above ATSDR comparison values for drinking water. There is, however, no known public exposure to groundwater contaminants. Washington, D.C., receives its drinking water from an area of the Potomac River, which is not impacted by the WNY sites, and is far upstream of the city. WNY is connected to the Washington, D.C., drinking water system. Because there is no known public exposure to groundwater underlying WNY, ATSDR concludes that it poses no apparent public health hazard.
To address community concerns about the environmental quality of the Anacostia River, ATSDR reviewed surface water and sediment quality data from both on- and off-site locations. Although some contaminants appear to originate from upstream, non-point sources, the WNY storm sewer system and outfalls have also contributed contaminants, primarily metals, to the river pollutant load. How much of the pollution originated from WNY operations is not known. Contaminants were detected primarily in sediments, both in the WNY storm sewer system and the Anacostia River. PCBs are the primary contaminant of concern, although polycyclic aromatic hydrocarbons, metals, pesticides, and dioxins were also detected above ATSDR comparison values. However, minimal, if any, public exposure occurs to the surface water and sediment of the Anacostia River because local residents do not swim or drink from WNY runoff, outfalls, or the Anacostia River. Any incidental exposures to the detected levels of contaminants in local surface water and sediment are not expected to pose a public health hazard. Therefore, ATSDR concludes that past, current, and future exposures to on- and off-site surface water and sediment pose no apparent public health hazards.
ATSDR also reviewed on-site soil data and evaluated potential public health exposure at 17 locations at WNY where past industrial operations resulted in contamination. Sixteen of the 17 sites are not associated with any known public health hazards because: 1) no site-related contaminants are present where exposure to the public could occur; 2) contaminant concentrations detected are too low to pose a health hazard; and/or 3) past and current exposures to the general public have been prevented. The other site, Admiral's Row (Site 10), contained lead concentrations in surface soil above the ATSDR comparison value for soil. Due to insufficient historical data on the extent of and exposure to this lead contamination, the exact health implications from past exposures cannot be assessed. Current and future exposures have been prevented by interim measures implemented by the Navy, including fencing, sign posting, land use restrictions, and public education efforts. ATSDR concludes that current and potential future exposures to on-site soil pose no apparent public health hazards. Past exposure to Admiral's Row surface soil is a completed exposure pathway with the potential for adverse health effects to children, but, due to the lack of historical data, the health implications from past exposure cannot be assessed.
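For context on how contact with contaminated soil, such as the lead at Admiral's Row, is typically screened, the sketch below applies the generic incidental soil-ingestion dose equation used in public health assessments. The soil concentration, ingestion rate, exposure factor, and body weight shown are illustrative assumptions, not values reported for this site.

```python
def soil_ingestion_dose(conc_mg_per_kg, intake_kg_per_day, exposure_factor, body_weight_kg):
    """Estimated exposure dose (mg/kg/day) from incidental soil ingestion:
    dose = concentration x intake rate x exposure factor / body weight."""
    return conc_mg_per_kg * intake_kg_per_day * exposure_factor / body_weight_kg

# Illustrative assumptions (not site data): a young child ingesting
# 200 mg of soil per day, exposed year-round, with a body weight of 13 kg
dose = soil_ingestion_dose(conc_mg_per_kg=400.0,     # hypothetical lead level in soil
                           intake_kg_per_day=0.0002, # 200 mg/day
                           exposure_factor=1.0,      # fraction of the year exposed
                           body_weight_kg=13.0)
print(f"Estimated lead dose: {dose:.4f} mg/kg/day")
```

In practice, childhood lead exposures are usually evaluated with blood-lead uptake models rather than a simple dose comparison, so this equation is shown only to illustrate the general screening approach.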
The consumption of locally-caught fish is another completed exposure pathway in the WNY vicinity. Fish in the lower Anacostia River near WNY have been impacted primarily by PCBs, although metals, pesticides, and dioxins were also detected in tissue samples. Detected concentrations of PCBs and the pesticide chlordane in local fish may pose a public health hazard if consumed in sufficient quantities. The Washington, D.C., Department of Public Health issued an initial fish consumption advisory in 1989 and an updated advisory in 1994 that urges the general public not to eat catfish, carp, or eel and to limit consumption of largemouth bass, sunfish, and other fish. The advisory encourages the practice of catch-and-release. Despite this advice, some citizens continue to eat fish and eel caught from the lower Anacostia River near WNY. ATSDR concludes that a past, current, and future public health hazard could exist for anglers who routinely have consumed or continue to consume sufficient amounts of locally-caught fish; the fish consumption advisory for the Anacostia River should continue to be observed.
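The potential hazard from routine fish consumption can be illustrated with a standard chronic-intake calculation for fish meals. The PCB fillet concentration, meal size, and meal frequency below are hypothetical and are not measurements from the Anacostia River; the reference dose used for comparison is the commonly cited EPA oral RfD for Aroclor 1254.

```python
def chronic_fish_intake(conc_mg_per_kg, meal_size_kg, meals_per_month, body_weight_kg):
    """Average daily contaminant intake (mg/kg/day) from fish consumption."""
    daily_fish_kg = meal_size_kg * meals_per_month / 30.0
    return conc_mg_per_kg * daily_fish_kg / body_weight_kg

# Illustrative assumptions (not site-specific measurements)
PCB_RFD = 2e-5  # mg/kg/day, commonly cited oral reference dose for Aroclor 1254

intake = chronic_fish_intake(conc_mg_per_kg=0.5,   # hypothetical PCB level in fillet
                             meal_size_kg=0.227,   # roughly an 8-ounce meal
                             meals_per_month=8,
                             body_weight_kg=70.0)
print(f"Estimated PCB intake: {intake:.2e} mg/kg/day")
print(f"Hazard quotient (intake / RfD): {intake / PCB_RFD:.1f}")
```

A hazard quotient well above 1 under assumptions like these is the kind of result that supports consumption advisories such as the one described above.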
ATSDR concludes that groundwater, surface water, and sediment at WNY do not pose public health hazards. However, past exposure to on-site surface soil at Admiral's Row is a completed exposure pathway with the potential for adverse health effects to children. ATSDR also concludes that consumption of locally caught fish could pose a public health hazard in the WNY vicinity.
The Washington Navy Yard (WNY) is an active military facility located on 63.3 acres of urban land bordering the Anacostia River in southeastern Washington, D.C. (Figure 1). WNY began operations in 1799 as a shipbuilding facility (NFEC 1996). Today, it is the Navy's oldest shore station and the longest continuously operated federal facility in the United States (NGF no date).
WNY operations and its primary role have evolved over the past two centuries. Shipbuilding dominated yard activities in the early 1800s, giving way to canon and large gun manufacturing for ships in the mid-1800s (NFEC 2001). Until 1962, ordnance production was the principal WNY function (NGF no date). From 1962 to present, the primary activity of the WNY has been administrative (Navy 2001).
As WNY's function changed over the years, so did the yard's physical size. Historically, additional property was added by filling a shallow embayment of the Anacostia River and a tributary entering the Anacostia River from the north. During World War II, at its largest, the yard occupied approximately 127 acres. As WNY's role shifted from primarily manufacturing to administration, 63.5 acres of the facility were sold to the General Services Administration (GSA) for administrative purposes (NFEC 1996).
WNY has identified a number of potential waste sites that have resulted from historical and modern-day industrial operations. Most WNY contamination occurs in limited-access areas of the site. One of the key contaminants of concern found on site, however, is lead in surface soil from lead-based paint that has peeled and flaked off Admiral's Row buildings. Some other contaminants exist in sediment and surface water of the adjacent Anacostia River, though they are not necessarily associated with WNY operations. These off-site contaminants include metals (lead, arsenic, mercury, iron, beryllium), polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), volatile organic compounds (VOCs), semivolatile organic compounds (SVOCs), and polychlorinated dibenzo-p-dioxins (PCDDs or "dioxins") (Navy 2001).
Investigations at WNY began in 1985, after WNY submitted a "Notification of Hazardous Waste Activity" to the United States Environmental Protection Agency (EPA) and identified itself as a generator of hazardous wastes (specifically PCBs). In 1988, the Naval Energy and Environmental Support Activity prepared a preliminary assessment (PA) report that indicated the presence of petroleum releases in soil and groundwater at WNY. As a result of these findings, numerous environmental investigations have been conducted, are ongoing, or have been planned to further determine the nature and extent of environmental contamination associated with WNY activities. In further investigating the site, the Navy and EPA signed a final Resource Conservation and Recovery Act (RCRA) Consent Order, effective on July 16, 1997, to conduct a two-phase RCRA facility investigation (RFI) and perform a corrective measures study (CMS). The RFI was intended to further investigate WNY sites, characterize contaminant sources, confirm contaminant releases, and assess environmental and human health impacts; the CMS also identifies and evaluates site-specific remediation options, if necessary.
In April 1998, the Navy, Department of Justice, and the Earthjustice Legal Defense Fund (formerly the Sierra Club Legal Defense Fund) negotiated an agreement on cleanup of the site, known as the Earthjustice Consent Decree, after Earthjustice expressed concern about potential Anacostia River hazards and about the timeliness of Navy cleanup activities. Later that same year, EPA placed WNY on its National Priorities List (NPL) (on August 27, 1998), part of EPA's Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), commonly known as "Superfund." As a Superfund site, the RCRA Consent Order activities have been integrated into the Navy's CERCLA remediation obligations. On June 30, 1999, EPA and Navy officials and the D.C. Mayor's office signed a Federal Facilities Agreement (FFA). The FFA was signed to ensure that environmental impacts from past operations were thoroughly investigated and appropriate remedial actions are undertaken to protect people and the environment. Future cleanup activities at WNY will continue under the requirements of CERCLA and the Department of Defense Installation Restoration Program.
Through various environmental investigations, the Navy evaluated conditions at 17 sites at WNY including basewide groundwater. Among the sites investigated are facilities that were once used for automotive maintenance operations, foundry operations, gun assembly, or laundry services. Following preliminary investigations, eight of the sites have been recommended for no further action (sites 1, 2, 3, 5, 8, 9, 14 and 17). Further investigations are underway at seven sites requiring additional study (4, 6, 7, 10, 11, 13, 15), including RCRA actions at Sites 4 and 6. Remedial investigation (RI) activities have been completed at Site 16. The Navy is also continuing its investigation of the groundwater beneath the site to further characterize the extent and magnitude of contamination.
To reduce and control the spread of contamination, the Navy conducted removal of contaminated soil and sediment at several sites. Removal of sediment from the storm sewer system has been completed as well as the rehabilitation of the storm sewer lines. At Site 16, one cubic foot (5 gallons) of subsurface soil containing free-phase mercury was removed from beneath a parking lot. At site 13, PCB contaminated soil near building 290 was removed. Although the removal of lead-contaminated soil from Admiral's Row, Site 10 (NFEC 2001, Navy 2001) is proposed, soil is currently being managed in-place until a risk assessment can be completed. The Navy has also conducted site removal evaluations at Sites 7 and 11. The results of the evaluations indicate that no removal activities were warranted at these locations.
To characterize the population and identify the presence of sensitive subpopulations, such as young children, in the vicinity of WNY, the Agency for Toxic Substances and Disease Registry (ATSDR) examines the demographics of the nearby communities. This information also provides details on residential history in a particular area that helps ATSDR assess time frames of potential human exposure to contaminants. The demographic and housing data for WNY and the surrounding areas, historically and at present, are outlined in this section. Current demographics are based on U.S. Census data from 1990 (Figure 2). Based on the 1990 data, there are approximately 50,528 people residing within one mile of WNY.
WNY lies in an urban setting and adjacent land use varies considerably. Within less than a one mile radius of the facility there are industrial, commercial, residential, and vacant properties. WNY is bordered on the north by commercial and vacant commercial properties along M Street, on the east by an abandoned industrial area along 11th Street, on the west by the Southeast Federal Center owned by GSA, and on the south by the Anacostia River (Figures 1 and 2).
The WNY facility itself consists of administrative, supply, and storage buildings; residences; and training facilities (Figure 3). Approximately 9 acres of WNY has been listed on the National Register of Historic Districts. The district includes a Naval museum that was opened to the public in 1993 (WNY 2001). Almost 95% of WNY is covered by buildings, asphalt, and other impervious surfaces (CH2MHILL 1999). Two combined sewer pipes and one separate storm sewer pipe owned by Washington, D.C., underlie WNY and discharge into the Anacostia River. Another separate storm sewer pipe owned by the city, running beneath 11th Street, had accepted storm water from WNY in the past. Since storm sewer renovations, WNY connections to city-owned lines have been removed and rerouted to other separate storm sewers operated by the Navy. Many of the former industrial and storage buildings have been converted to office buildings, and renovations are currently underway at several more buildings to create additional office space for future employees.
Throughout its history, WNY has provided thousands of jobs for D.C. area residents. At the turn-of-the-century, WNY employed approximately 2,000 individuals. When the facility was operating at its peak, during World War II, nearly 25,000 people worked at WNY. Today, WNY employs an average of 5,400 military and civilian personnel. WNY is headquarters for the Naval Sea Systems Command, with a population of 10,500 (Brief History Fact Sheet no date; ATSDR 1999a, Navy 2001).
Admiral's Row (Site 10) along Warrington Avenue consists of an on-site residential area and Leutze Park. In the past, the area supported 20 residential quarters and 3 buildings (some residential) (CH2MHILL 1998). The quarters and buildings were multi-storied and painted with lead paint. Leutze Park is an unfenced recreational area that is used for public access and as a parade ground for official change of command and retirement ceremonies. Luetze Park is the only substantial on-site vegetated area. (NFEC 1996). Historically, the quarters and buildings housed several hundred Naval officers and their family members. (It is unknown how many children or what age children lived in these residences in the past.) Today, only the quarters are used to house the 15 to 20 families that currently reside at WNY (Navy 2001). No new housing construction is planned for WNY (Navy 2001).
Access to the entire site is restricted by a perimeter wall. Security guards are stationed at gate entrances. Individuals with direct access to the site include on-site employees, on-site residents, and on-site workers under contract with the Navy to conduct various building and lawn maintenance activities. Adult and child visitors and recreational users may also pass through security clearance. Bus tours are conducted on site and the Navy Museum (Building 76) receives visitors daily (except holidays). (As of January 2000, the Navy Museum building was closed for renovations. During renovations, visitors could view the exhibits at the Navy Museum Annex [Building 70] and at the Navy Art Gallery [Building 67]). Willard Park, across from the Navy Museum on the banks of the Anacostia River, is visited by the public. The park displays naval ordnance used in battle from the Civil War to the Vietnam era (WNY 2001).
WNY lies on the banks of the Anacostia River in a historically natural wetlands area that has been filled to accommodate urban development expansions. Prior to 1800, approximately one-third of WNY property was covered by a shallow embayment, but wetlands no longer exist in the WNY vicinity. Upstream from WNY and outside of Washington, D.C., agricultural and forested areas remain and drain into the large Anacostia River watershed. Less than two miles downstream from WNY, the Anacostia River merges with the Potomac River and flows into the Chesapeake Bay.
Washington, D.C., receives its drinking water from an area of the Potomac River, which is not impacted by the WNY sites, and is far upstream of the city. WNY is connected to the Washington, D.C., drinking water system. There are no private wells in the vicinity (Miller 1999).
Many groups have been formed to protect the natural resources of the Anacostia River and other water bodies in the area. In 1999, the Anacostia Watershed Toxics Alliance was formed to review existing research data and evaluate the entire watershed.
In July 1998, ATSDR published a Health Consultation for the Anacostia River Initiative, Washington, D.C. (ATSDR 1998). This Health Consultation did not specifically address environmental contamination issues related to WNY, but evaluated the safety of consuming fish caught in the Anacostia and Potomac Rivers. ATSDR concluded that the reported concentrations of chemical residues in fish from the Anacostia and Potomac Rivers could pose a public health hazard for sport anglers.
In February 1999, ATSDR conducted a site visit at WNY. ATSDR viewed on-base sites and remediation efforts, as well as off-base residential areas, the GSA property, and both sides of the Anacostia River. ATSDR met with representatives from the Naval District Washington, the Naval Facilities Engineering Command, the Naval Research Laboratory, EPA Region III, and the Washington, D.C., Department of Public Health. ATSDR did not identify any specific community health concerns attributed to WNY, but learned that local community members had expressed concern about the environmental quality of the Anacostia River in the WNY vicinity (ATSDR 1999a).
Due to changes in technical staff, ATSDR conducted a follow-up site visit in September 1999. No additional public health issues or community concerns were identified by ATSDR during the site visit. In June 2000, ATSDR met with the Washington Navy Yard Restoration Advisory Board (RAB) and presented an overview of ATSDR and the public health assessment process. Prior to the RAB, we met with representatives of the DC Department of Health and others agencies to let them know of our activities. Navy representatives gave ATSDR an off-site tour of areas surrounding WNY.
ATSDR held public availability sessions on September 26 and 27, 2000, to obtain community concerns. The sessions were held at the Van Ness Elementary School and the Anacostia Park Pavilion. Although the general public did not attend these sessions, ATSDR spoke with representatives of the National Park Service and The Student Conservation Association, Inc. and fishermen fishing in Anacostia Park. No public comments were received during the public comment period for this assessment, September 28 through October 31, 2001. However, ATSDR received comments from the Navy (Navy 2001) and DC Department of Health (DC Department of Health 2001b) and made revisions, which are included in this final Public Health Assessment.
In preparing this public health assessment, ATSDR reviewed and evaluated information provided in the referenced documents. Documents prepared for the CERCLA program must meet specific standards for adequate quality assurance and control measures for chain-of-custody procedures, laboratory procedures, and data reporting. The environmental data presented in this public health assessment are from Navy remedial site investigations, Anacostia River monitoring data; municipal drinking water reports; and other information provided primarily by Navy, EPA, and Washington, D.C. Department of Public Health reports. Based on our evaluation, ATSDR determined that the quality of environmental data available in most site-related documents for WNY is adequate to make public health decisions.
In this section, exposure pathways are evaluated to determine whether people accessing or living near WNY could have been (past scenario), are (current scenario), or will be (future scenario) exposed to site-related contaminants. In evaluating exposure pathways, ATSDR determines whether exposure to contaminated media has occurred, is occurring, or will occur through ingestion, dermal (skin) contact, or inhalation of contaminants. If the contamination is located in an area where exposure is not likely, no public health hazard will be expected to occur (for instance, where contaminated soil is located in an area that is fenced and access is restricted). Exposure to contaminants does not necessarily result in adverse health effects. For a health hazard to be possible, the contaminants must be present in large enough amounts to cause harm, and the exposure must be for a long enough time for the effect to be possible. To determine whether completed pathways pose a potential health hazard, ATSDR compares contaminant concentrations to health-based comparison values.
Comparison values are calculated from available scientific literature on exposure and health effects. These values, which are defined for each of the different media, reflect the estimated maximum contaminant concentration for a given chemical that is not expected to cause adverse health effects, given a standard daily ingestion rate and standard body weight. If contaminant concentrations are above comparison values or background concentrations, ATSDR further analyzes exposure variables (for example, duration and frequency) and the toxicology of the contaminant. This exposure evaluation process is summarized in Figure 4.
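To make the screening step concrete, the following minimal Python sketch (not ATSDR's screening software) flags chemicals whose maximum detected concentration exceeds a comparison value. The two example values are the groundwater VOC figures cited later in this assessment; exceeding a comparison value only triggers further evaluation and does not by itself indicate a health hazard.

# Illustrative screening of maximum detected concentrations against
# health-based comparison values (CVs); concentrations in ppb.
COMPARISON_VALUES_PPB = {
    "methylene chloride": 5.0,   # drinking-water CV cited in the text
    "chloroform": 6.0,
}
MAX_DETECTED_PPB = {
    "methylene chloride": 17.0,
    "chloroform": 12.0,
}

def needs_further_evaluation(chemical):
    # A chemical is carried forward when its maximum detected
    # concentration exceeds its comparison value.
    return MAX_DETECTED_PPB[chemical] > COMPARISON_VALUES_PPB[chemical]

for chemical in MAX_DETECTED_PPB:
    flag = "evaluate further" if needs_further_evaluation(chemical) else "screen out"
    print(f"{chemical}: {MAX_DETECTED_PPB[chemical]} ppb vs CV "
          f"{COMPARISON_VALUES_PPB[chemical]} ppb -> {flag}")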
ATSDR evaluated available information on underlying groundwater, local surface water, the WNY investigation sites (Figure 3), and locally-caught fish to determine if they pose any past, current, or potential future public health hazards. After fully evaluating potential human exposure pathways at WNY, ATSDR concluded that public exposures (past, current, and future) to groundwater, surface water, sediment, and most soil are not expected to result in adverse human health effects. No adverse health effects are anticipated because contamination in these media is below levels of health concern or there is no public exposure. ATSDR identified past exposure to on-site Admiral's Row surface soils and the consumption of locally-caught fish as completed exposure pathways. Information on the various contaminated media, exposure pathways, and exposure doses is summarized in Table 1, Appendix A, Appendix B, Appendix C, and the following text. Appendix D provides a glossary to explain related key words and terms.
The following discussion evaluates community concerns about potential human exposure via contaminated groundwater, surface water, sediment, soil, and locally-caught fish. ATSDR's conclusions regarding the past, current, and potential future exposures to the various environmental media on and in the vicinity of WNY are based on an evaluation of information gathered from site investigations, groundwater monitoring data, surface water investigations, and observations compiled during site visits.
Could contaminants in groundwater result in adverse human health effects for local employees, residents, or visitors?
- Past activities at WNY affected groundwater underlying military property. Metals are the main contaminants of concern.
- On-site residents, employees, and visitors are not exposed to contaminated groundwater; drinking water is supplied and monitored by a municipal source that meets Washington, D.C., and federal and state drinking water standards. There is no known public exposure to groundwater contaminants.
Although groundwater underlying Washington, D.C., generally occurs in unconfined conditions, the fill and underlying clay at WNY result in semi-confined conditions (CH2MHILL 1999). On a small scale, the heterogeneous nature of the fill material likely influences the direction of groundwater flow beneath WNY. Due to WNY topography, the underlying watertable level varies, from approximately 55 feet above sea level in the northeast corner to slightly above sea level along the waterfront (NFEC 1996). The Anacostia River is believed to be the ultimate discharge point for shallow groundwater (CH2MHILL 1999).
The surrounding community obtains drinking water from the Washington, D.C., municipal drinking water system, a routinely-monitored municipal source that meets Washington, D.C., and federal and state drinking water standards (e.g., EPA's maximum contaminant levels [MCLs]). There are no private wells in the vicinity (Miller 1999). Current and future exposures to contaminated groundwater are unlikely because there are no known users of groundwater as a drinking water supply in the vicinity of WNY. There are no future plans to place potable wells in the area (ATSDR 1999a). WNY and the surrounding community will continue to receive water from the Washington, D.C., municipal drinking water system (ATSDR 1999a).
Past activities at WNY affected groundwater underlying military property. VOCs, SVOCs, and metals were detected in groundwater underlying WNY, most at concentrations below ATSDR comparison values for drinking water. Methylene chloride (detected up to 17 parts per billion [ppb]) and chloroform (up to 12 ppb), were the only VOCs exceeding their respective ATSDR comparison values for drinking water (5 ppb and 6 ppb, respectively) (NFEC 1996, NFEC 1999). Methylene chloride also exceeded EPA's MCL of 5 ppb. Both methylene chloride and chloroform are believed to be laboratory contaminants and not site-specific contaminants (Navy 2001). Acetone, 1,2-dichloroethene (total), acenaphthene, fluorene, butylbenzylphthalate, bis(2-ethylhexyl)phthalate, and di-n-octylphthalate were frequently detected at concentrations at or below ATSDR comparison values. Seventeen metals were detected in the groundwater. Metals detected at levels above ATSDR comparison values included aluminum, arsenic, barium, beryllium, iron, lead, manganese, and vanadium. Lead in groundwater underlying WNY exceeded the federal action level of 15 ppb at the following sites: 2, 4, 5, 6, 7, 17, and the Navy Yard monitoring wells (NFEC 1996). Petroleum-impacted groundwater at Site 16 is being remediated (Navy 2000).
Appendix A summarizes sampling results for each site. Groundwater samples were collected from Sites 2, 3, 4, 5, 6, 7, 9, 14, 16, 17, and the Navy Yard monitoring wells. No groundwater samples were collected from Sites 1, 8, 10, 11, 12, or 13.
Evaluation of Potential Public Health Hazards
Even though groundwater contamination has been detected, there is no known public exposure to groundwater underlying or in the vicinity of WNY. Private wells are not located in the WNY vicinity (Miller 1999) and local residents receive water from the Washington, D.C., municipal drinking water system. Because there is no known exposure, ATSDR concludes that groundwater poses no apparent public health hazard.
Could exposure to surface water and sediment contaminants in the WNY vicinity result in adverse health effects for local employees, residents, or visitors?
- PCBs, PAHs, metals, and pesticides contributed from different sources have been detected in surface water and/or sediment of the adjacent Anacostia River, some at levels above ATSDR's health-based comparison values.
- People do not swim or drink from WNY runoff, outfalls, or the Anacostia River. Therefore, any exposure to contaminants in surface water and sediment is minimal, limited to infrequent dermal contact that may occur while boating, fishing, or other recreational activities. This type of exposure is not expected to lead to adverse health effects.
Surface Water Hydrology and Sediment Characteristics
The Anacostia River is the major surface water body that flows near WNY. Upstream from WNY and outside of Washington, D.C., agricultural and forested areas remain and drain into the large Anacostia River watershed. Around WNY, surface water runoff from urbanized areas and groundwater that flows beneath WNY empty into the river (CH2MHILL 1999). Less than two miles downstream from WNY, the Anacostia River converges with the Potomac River and flows into the Chesapeake Bay. Because of increasing local urbanization and the resulting loss of wetland, the lower Anacostia River in the vicinity of WNY is experiencing growing problems with erosion, sedimentation, and the delivery of excess nutrients. These problems have diminished the river's natural capacity to filter contaminants and increased chemical contaminant transport via sediment movement.
Much of the WNY property bordering the Anacostia River is filled land. Because of the fill's flat topography, water tends to flow slowly, especially at the mouth of the Anacostia River. This sluggish movement, coupled with the fact that the river in the WNY area is tidally influenced (from the Chesapeake Bay), creates an environment in which pollutants linger instead of being washed out by quickly moving water. On average, it takes materials in the lower Anacostia River 20 to 40 days (and over 100 days in times of drought) to reach the Chesapeake Bay (Interstate Commission 1996, USACE 1990). This flow pattern appears to deposit and concentrate significant amounts of upstream contaminants and sediments in the WNY vicinity (EPA and Chesapeake Bay Program Office 1999).
About 450 storm sewer outfalls owned by the Washington, D.C., Water and Sewer Authority discharge stormwater runoff directly into the area's local surface waters. Of these, 136 stormwater outfalls discharge into the Anacostia watershed. During large rain storms, 15 combined sewer overflow (CSO) outfalls discharge raw sewage and stormwater directly to the D.C. portion of the tidal Anacostia River. One hundred and ten storm sewer outfalls discharge into the tidal Anacostia River. Four outfalls--two CSO and two storm sewer outfalls--underlie WNY property and have discharged to the River. Washington, D.C., has not yet tested or monitored its municipal outfalls, but the Navy conducted some preliminary sampling.
Surface Water Use
No one drinks surface water from the Anacostia River in the WNY vicinity. Local anglers and boaters use the Anacostia River for recreational purposes. There is a small boat yard and two yacht clubs immediately upstream from WNY. Across the river is a public park accessed by anglers. Park visitors and local residents do not normally swim in the Anacostia River in the WNY vicinity, but swimming in this area is not prohibited.
Surface Water and Sediment Quality
The Anacostia River's health has been impacted by numerous interrelated forces. Because of a number of possible pollutant sources along the river, the extent to which WNY has contributed to the Anacostia River contamination remains unknown (Washington, D.C. RA 1996). However, shallow groundwater from WNY does discharge to the Anacostia River. Few industries or other point sources today continue to discharge effluent to the lower Anacostia River. Most existing river contamination stems from continued releases from past discharges, off-site non-point sources, and/or releases from storm and combined sewers.
Recent surveys of the sediments in the Anacostia River (Velinsky et al. 1994, Wade et al. 1994) reveal significant concentrations of PCBs, PAHs, metals, and the pesticide chlordane (Velinsky et al. 1992). The highest concentrations of many contaminants have been located in the sediments at the lower reaches of the Anacostia River near the confluence with the Potomac River. Most of the major contaminants in the river adhere to particles and traveled to WNY area via sediment transport.
During the Navy's Site Investigation, six sediment samples were collected from the Anacostia River adjacent to WNY and one sediment sample was collected upstream to serve as a background sediment sample. Methylene chloride (up to 3 parts per million [ppm]), acetone (up to 100 ppm), and toluene (trace amounts) were the only VOCs detected in the seven sediment samples, all at concentrations below ATSDR comparison values for soil (no comparison values exist for sediments). A total of 24 SVOCs were detected in the sediment samples collected from the Anacostia River, all at concentrations below ATSDR comparison values for soil (NFEC 1996). In general, the SVOC concentrations appear to be increasing downstream. Metals detected in the sediment samples included copper (up to 260 ppm), lead (up to 234 ppm), nickel (up to 40.7 ppm), and zinc (up to 415 ppm). Pesticides were not detected in the seven sediment samples. However, PCBs (Aroclor 1260) were detected in all the sediment samples with the exception of the background sediment sample. The concentrations of PCBs ranged from 0.085 ppm to 12 ppm and appear to be increasing downstream (NFEC 1996). ATSDR does not have a comparison value for Aroclor 1260, so the maximum detected concentration was screened against the most conservative Aroclor compound comparison value for soil, which is Aroclor 1254. Detected PCB levels slightly exceeded the Aroclor 1254 comparison value of 10 ppm for an adult and 1 ppm for a child. The highest PCB concentrations in sediment were measured adjacent to and downstream from WNY.
During another investigation, concentrations of PAHs in Anacostia River sediment in the WNY vicinity ranged from 5.6 to 28.3 ppm (Wade et al. 1994). These concentrations exceed upstream Anacostia River and typical urban background concentrations, both reported in the range of ppb rather than ppm (ATSDR 1995). The elevated PAH concentrations in the lower Anacostia River likely result from sediment transport and deposition in the WNY vicinity rather than from WNY activities (Wade et al. 1994; Coffin et al. 1998; ATSDR 1999a).
During a 1996 sediment removal action from two stormwater sewer lines (outfall 5 and outfall 10 [Figure 3]), the Navy found that the stormwater line sediments contained high levels of metals (arsenic [up to 52.6 ppm], lead [up to 567 ppm], mercury [up to 1.2 ppm]) and PCBs (Aroclor 1260 [up to 38 ppm]). Elevated levels of PAHs were also detected in the sediment (see Appendix A for details). Sediment samples collected at the WNY waterfront in 1996 contained lower contaminant concentrations. The contamination in the outfalls may have originated from Sites 4 and 6 or from the off-site Southeast Federal Center (SEFC), which formerly was part of WNY during its industrial period. Surface water samples were then collected at Sites 6 and 14, both of which contained metals (arsenic [up to 65.4 ppb], cadmium [up to 7.2 ppb], iron [up to 42,000 ppb], and lead [up to 305 ppb]) and PCBs (Aroclor 1260 [up to 2.2 ppb]) (NFEC 1996). Even though the maximum detected contaminant concentrations occurred in proximity to WNY, the portion attributable to WNY activities is unknown. Contaminant deposition is concentrated in the WNY area due to the natural flow, tidal, current, and mixing patterns of the river (Coffin et al. 1998).
In July 1998, the Navy issued an addendum to the Final Interim Measures Work Plan to address dioxin concerns. One sediment sample was collected from each of the storm sewers leading to outfalls 1 and 5, which discharge into the Anacostia River. Sediment from both storm sewer lines contained dioxin, and the dioxin toxicity equivalents were above EPA's residential screening level. Again, the sediment from the storm sewer lines has been removed and the storm sewer system has been repaired (Navy 2001).
Evaluation of Potential Public Health Hazards
No one is exposed to contaminants in on-site surface water or sediment at the WNY. The storm water outfalls underlying WNY are buried underground and discharge into the Anacostia River at locations where there is no public access. Nor is anyone who uses the Anacostia River expected to come in contact with harmful levels of contaminants. Most importantly, people do not swim or drink from the Anacostia River. Exposure, if any, to contaminated surface water and sediment is minimal and limited to infrequent dermal contact that might occur during fishing, boating, or other recreational activities. Furthermore, skin contact with the contaminants detected above CVs is not expected to cause health problems since the contaminants do not easily absorb into or pass through the skin. ATSDR concludes that such infrequent, short-duration exposure to chemical contaminants in surface water and sediment near WNY does not pose any apparent public health hazards.
Could exposure to surface soil at WNY result in adverse health effects for local employees, residents, or visitors?
- Lead is the primary contaminant of concern in WNY surface soils. Admiral's Row (Site 10) surface soils contained lead concentrations above ATSDR comparison values for residential areas. In the past, people may have been exposed to elevated lead levels in surface soil that had the potential to pose a heath hazard to children.
- Current and potential future exposure to contaminated soil at WNY is largely prevented because the majority of the land's surface is either paved, covered by buildings, or lies in restricted land use areas.
WNY lies on terrace deposits and filled areas of the Anacostia River and slopes generally from the northern part of the facility southward to the river. The ground surface elevation ranges from a high of approximately 55 feet above mean sea level in the northeast part of the facility to just above mean sea level along the bulkhead adjacent to the Anacostia River. The soil appears to consist primarily of poorly sorted silt, sand, and gravel, mixed with variable amounts of construction materials, such as brick, concrete, and wood (CH2MHILL 1999).
Admiral's Row (Site 10) is the area of primary concern for public contact with surface soil contamination (CH2MHILL 1998, NFEC 1996). Admiral's Row lies along Warrington Avenue and consists of a group of residential buildings and Luetze Park. Some Admiral's Row residences have fenced yards and gardens. Luetze Park is the only non-paved area accessible to public visitors and WNY residents.
Nature and Extent of Soil Contamination
The main source of surface soil contamination at WNY is lead paint from site buildings, specifically buildings in Admiral's Row (Navy 1999). Surface soil samples collected from the yards of the residences located on Admiral's Row had detected lead concentration up to 18,700 ppm and above EPA's residential soil level of 400 ppm (Navy 1999). Ten surface soil samples were also taken from Luetze Park located on Admiral's Row. The maximum detected lead concentration in Luetze Park was 441 ppm, which just slightly exceeded EPA's residential soil level of 400 ppm. The other nine samples contained detected lead levels below 400 ppm. In all Luetze Park samples, the testing laboratory indicated that analytes were present, that the reported values may be biased high, and that the actual lead values are expected to be lower. Therefore, actual lead levels are probably below 400 ppm.
Soil contaminants were also found at other WNY sites, but the detected contaminant concentrations are low, infrequent, and/or located in publicly inaccessible areas. For example, surface soil underlying Building 292 at Site 14 was tested for PCBs. Aroclor 1260 was detected at a maximum concentration of 20 ppm, which exceeds ATSDR's comparison values of 10 ppm (adult) and 1 ppm (child) for Aroclor 1254 (NFEC 1996). (ATSDR does not have a comparison value for Aroclor 1260, so the maximum detected concentration of 20 ppm was screened against the most conservative Aroclor compound comparison value, which is Aroclor 1254.) The Navy also found visible liquid mercury at Site 16 in the subsurface soil of a confined area, approximately 5 or 6 feet below the surface and close to the water table. In June 1999, the Navy removed about 1 cubic foot (5 gallons) of the mercury-contaminated soil from above the water table (Navy 2001). Twelve cubic feet of soil were removed from this area during remedial activities (Navy 2000). Although Site 16 contains soil contamination in fill, the soil is not accessible to the public.
Evaluations of Potential Public Health Hazards
Contamination is present in soil at certain locations of the WNY. The likelihood, however, that workers in their routine responsibilities (e.g., landscaping, gardening, or construction), or residents and visitors during their infrequent access to Admiral's Row soils, will contact the most contaminated soil for an extended period is remote. If workers or trespassers do contact contaminated soil, exposure most likely is intermittent and brief. Moreover, workers entering these areas must wear protective clothing, which further reduces exposure and any associated health effects. Such minimal, infrequent exposure to on-site contaminants, if it occurs at all, would not be expected to result in adverse health impacts. Appendix A provides a detailed evaluation of potential public health hazards associated with soil contamination at each WNY site. All sites, except for Admiral's Row, are not associated with any known public health hazards because: 1) no site-related contaminants are present where exposure to the public could occur; 2) contaminant concentrations detected are too low to pose a health hazard; and/or 3) past and current exposures to the general public have been prevented. In addition, most WNY sites (including Admiral's Row) are surrounded by perimeter fencing and covered surfaces (e.g., vegetative growth, paved areas)--both of which prevent and/or reduce potential exposure to contaminated soil. In other locations, contamination occurs in inaccessible subsurface soils where exposure is not possible.
Admiral's Row surface soil is a completed past exposure pathway for on-site workers, residents, and visitors. Historically, public contact with Admiral's Row surface soil was not deterred or restricted. Therefore, dermal contact with and incidental ingestion of lead concentrations above EPA's residential soil level of 400 ppm likely occurred. Because all age groups have accessed WNY, either as residents or museum visitors, ATSDR evaluated potential health hazards at WNY for both adult and child past exposures (Appendix C). Based on ATSDR's estimated exposure doses, past blood lead levels for children living at WNY may have been elevated above the recommended action level of 10 µg/dL in blood. Due to conservative assumptions in the calculation made by ATSDR, this estimated dose probably overestimates actual past exposure levels. Due to insufficient historical data on the extent of and exposure to this lead contamination, the exact health implications from past exposures cannot be assessed.
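For illustration only, the short sketch below shows the general form of a soil-ingestion dose estimate (dose = concentration x intake rate x exposure factor / body weight). The child intake rate and body weight are common default assumptions, not the specific exposure parameters ATSDR applied in Appendix C.

# Rough soil-ingestion dose sketch for lead (illustrative; not ATSDR's
# Appendix C calculation). Soil concentration in mg/kg equals ppm.
def soil_ingestion_dose(conc_mg_per_kg, intake_kg_per_day, exposure_factor, body_weight_kg):
    # Returns dose in mg of contaminant per kg body weight per day.
    return conc_mg_per_kg * intake_kg_per_day * exposure_factor / body_weight_kg

# Assumed child defaults: 200 mg/day incidental soil ingestion, 10 kg body
# weight, daily exposure; 18,700 ppm is the maximum Admiral's Row yard value.
dose = soil_ingestion_dose(18_700, 200e-6, 1.0, 10.0)
print(f"Estimated dose: {dose:.3f} mg/kg/day")  # about 0.374 mg/kg/day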
To deter people from contacting contaminated surface soils at Admiral's Row, the Navy currently enforces several interim measures. The Navy has constructed yard and garden fences surrounding contaminated areas of soil, posted signs warning the public about the contaminated surface soils, and implemented and enforced stringent land use restrictions to stop residents and the general public from contacting yard and garden soils. Only trained contractors following Occupational Safety and Health Administration safety requirements and wearing protective gear are currently permitted to dig, garden, and/or landscape in the Admiral's Row vicinity. The Navy also initiated a public education program. This lead-awareness initiative alerts WNY construction workers, employees, and residents about the hazards of working and living in a 200-year old military base. These interim measures have effectively deterred WNY residents, employees, and visitors from contacting Admiral's Row contaminated surface soils (ATSDR 1999a). ATSDR concludes that current and potential future exposures to on-site soil pose no apparent public health hazards. Past soil exposure is a completed exposure pathway with the potential for adverse health effects to children, but, due to the lack of historical data, the health implications from past exposure can not be assessed.
Will eating fish caught from the lower Anacostia River near WNY cause adverse health effects?
- Fish in the lower Anacostia River have been impacted by chemical contaminants from a variety of sources. How much of the pollution originates from WNY operations is unknown. Historically, PCB and chlordane in certain fish of the river have exceeded FDA guidance levels (for commercial fish). People who eat contaminated fish in sufficient quantities could develop adverse health effects.
- A fish consumption advisory posted in Anacostia Park recommends that people refrain from eating certain fish caught from this river. People who fish from the lower Anacostia River can best protect themselves against potential harmful effects from contaminants in fish by following the recommendations in the advisory.
The Anacostia River is a popular recreational fishing spot for Washington, D.C. area anglers, particularly in the vicinity of Hains Point (at the confluence of the Anacostia and Potomac Rivers). Most anglers fishing along the river practice catch and release fishing (82%), although some anglers still cook and eat their catches. Of fish inhabiting the river, catfish and bass are the most commonly caught species (D.C. Department of Health 1999). A public park (Anacostia Park) lies across the lower Anacostia River from WNY with posted signs to warn anglers against cleaning their catch on the picnic benches. During the site visit, however, ATSDR noted that public signs warning anglers against eating their catch are not readily apparent. ATSDR recommends that the National Park Service improve the fish consumption advisory signs so that they are more easily seen in Anacostia Park. The Washington, D.C., Department of Public Health (D.C. Health Department) has posted signs in the area since it issued a fish consumption advisory for the Anacostia and Potomac Rivers in July 1989. ATSDR recommends additional fish consumption advisory warning signs in visible locations along the lower reaches of the Anacostia River.
As noted in the Surface Water and Sediment discussion, the Anacostia River water quality and the sediment have been impacted by a number of pollutant sources, including urban development, untreated sewage from combined sewer overflows, non-point source surface runoff from agricultural activities and storm drains, and chemical releases from industrial and federal facilities (DC DCRA 1996). Fish take in and accumulate the contaminants over time as a result of a very slow rate of elimination. Larger, older fish tend to accumulate the highest levels of contaminants (EPA 1994). Some Anacostia River fish have accumulated contaminants to levels high enough to pose a health risk to certain people who eat fish.
In 1989, the Washington, D.C., Department of Health issued a public health advisory, urging anglers to limit their consumption of channel catfish, carp, and eel caught in the D.C. waters of the Anacostia and Potomac Rivers. This advisory was primarily based on elevated levels of PCBs and chlordane in certain bottom-dwelling fish species (see Fish Tissue Data section below). (Bottom-dwelling fish were targeted because, as bottom feeders, they are in frequent contact with sediment and they generally have a high body fat content where organic contaminants, such as PCBs, are stored.) The Washington, D.C., Department of Health advised citizens to consume no more than one locally caught meal (one-half pound) per week and to eat only skinless, boneless fillets. The advisory discouraged women of childbearing age, nursing mothers, and pre-schoolers from eating any locally caught fish (DC DCRA 1994a).
Based on annual fish tissue data gathered by the Washington, D.C., Environmental Regulation Administration (DC ERA), the Washington, D.C., Department of Health reviewed and updated their fish consumption advisory with stronger language in 1994 (Interstate Commission 1996). The upgraded advisory called for a total ban on the consumption of locally caught catfish, carp, and eel (bottom-feeding species). Additionally, it advised citizens to eat only one-half pound per week of sunfish, one-half pound per month of largemouth bass, and 1 to 4 meals per month of other fish from the Anacostia and Potomac Rivers. The Washington D.C., Department of Health advised people to choose younger and smaller fish of legal size and encouraged catch-and-release fishing over consumption (DC DCRA 1994b). According to Washington, D.C., Department of Health officials, the vast majority of the public adheres to these posted warnings, but some community members continue to eat fish and eel caught from the lower Anacostia River.
A number of fish tissue studies have been conducted by various agencies and groups to determine whether and to what extent contaminants were accumulating in fish caught from the Anacostia and Potomac Rivers. Many studies compared contaminant levels to action or tolerance levels designed by the U.S. Food and Drug Administration (FDA) to protect consumers of commercial fish. FDA's action or tolerance levels, used by ATSDR as screening values and referenced in the following discussion, are (all wet weight): PCBs, 2.0 ppm; chlordane, 0.3 ppm; dieldrin, 0.3 ppm; 1,1,1-trichloro-2,2-bis(p-chlorophenyl)ethane (DDT), 5.0 ppm; and mercury, 1.0 ppm(1). Table 2 summarizes the maximum contaminant concentrations detected in fish fillet samples reported in these studies.
The collection of studies provides information on fish samples caught from many different points along the Anacostia and Potomac Rivers and for a range of fish sample types (i.e., fillet, carcass, and whole fish). For the purposes of this public health assessment, however, ATSDR was most interested in assessing data for fish most likely affected by WNY-related contaminants and for fish types most relevant to the local population's eating habits. Therefore, ATSDR evaluated:
- Fish samples collected from the lower Anacostia River (defined as the portion of the river that lies between the railroad bridge just downstream of Kingman Lake and Hains Point at the Potomac River). This area includes the WNY and a sufficient upstream and downstream reach to account for fish migration from the water adjacent to WNY.
- Fish fillets, which represent the "edible" portion of fish. ATSDR is not aware of any groups of people visiting the lower Anacostia River whose diet consistently relies on other portions of fish (i.e., bones, fat, viscera).
Early fish tissue studies (1987 and 1992) of the Anacostia River were conducted by the U.S. Fish and Wildlife Service (1987), DC ERA (1989), the D.C. Environmental Control Division (1991), and Velinsky and Cummins (1989 to 1992). Fish, predominantly channel catfish, common carp, largemouth bass, brown bullhead, sunfish, and American eel, were collected and analyzed for PCBs and pesticides; selected samples were also analyzed for metals, VOCs, SVOCs, polychlorinated dibenzodioxins, and polychlorinated dibenzofurans. The data collected from these studies indicated that detectable levels of many chemicals were present in edible portions of fish from the Anacostia River. The chemicals ranged from trace levels of metals, such as mercury and lead, to higher levels of organic compounds. Of the organic contaminants tested, PCBs (up to 2.4 ppm) and the pesticide chlordane (up to 0.633 ppm) were found in the highest concentrations and at levels above their respective FDA action or tolerance levels. Fish most affected by the contaminants were channel catfish and other bottom-dwelling species. (Even higher PCB levels were reported for carcass samples [up to 2.9 ppm].) The collective results of these early studies suggested that PCBs and chlordane were present at concentrations of public health concern (Velinsky and Cummins 1994).
Since the issuance of the fish consumption advisory, several restoration and source control measures were instituted to help reduce tissue residue levels in the local fish population (DC Department of Health 2001). To assess the effects of these measures, Velinsky and Cummins examined the trends in contaminant levels over time. They analyzed 20 fish composite samples obtained from the D.C.'s Environmental Regulation Administration [DC ERA] archived inventories of samples collected in 1993, 1994, and 1995. (It should be noted that not all species were collected from the same location each year.) The samples were analyzed for more than 129 chemical contaminants. PCBs, chlordane, dieldrin, DDT, and mercury were detected in one or more species of fish, but generally at concentrations below FDA action or tolerance levels and lower than values reported for earlier studies. Velinsky and Cummins concluded, however, that unacceptable levels of contaminants were still present in Anacostia River fish (Velinsky and Cummins 1996).
The most recent fish tissue study was conducted in November 2000, by DC Department of Health (DC Department of Health 2001). Results confirm earlier findings, mainly that PCBs (up to 2.49 ppm) and chlordane (up to 0.338 ppm) are the primary contaminants of concern and are present at concentrations above FDA action or tolerance levels, especially in bottom-dwelling fish species that include channel catfish, carp, and American eel (Velinsky 2000; DC Department of Health 2001).
Evaluation of Potential Public Health Hazards
Of all contaminants detected in fish from the lower Anacostia River, only PCBs and chlordane exceeded their respective FDA action or tolerance levels. These FDA action or tolerance levels serve as preliminary screening values, but ATSDR did not consider them to be very conservative for several reasons: (1) the FDA action levels apply only to fish sold in interstate commerce, (2) the FDA action levels factor in economic considerations and are not as conservative as health-based action levels, and (3) fish consumption rates for sport anglers could be higher than those assumed for the consumption of commercially bought fish. Therefore, ATSDR conservatively estimated exposure doses for individuals who eat fish contaminated with PCBs and chlordane from the lower Anacostia River. Appendix C describes the method and conservative assumptions ATSDR used to estimate exposure doses and potential health effects. The estimated exposure dose for PCBs exceeded levels considered acceptable for the general population. Based on current, available information, ATSDR concludes the consumption of local fish could pose a public health hazard and the fish consumption advisory for the Anacostia River should continue to be observed. Additional signs urging people to adhere to the advisory may be needed at key public access points along the Anacostia River.
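As an illustration of how such an exposure dose can be estimated (this is not the Appendix C calculation itself), the sketch below combines the maximum PCB fillet concentration reported above with an assumed sport-angler meal size and frequency.

# Illustrative fish-ingestion dose for PCBs; assumptions are noted inline.
def fish_dose_mg_per_kg_day(conc_mg_per_kg, meal_size_kg, meals_per_week, body_weight_kg):
    daily_intake_kg = meal_size_kg * meals_per_week / 7.0
    return conc_mg_per_kg * daily_intake_kg / body_weight_kg

# Assumptions: 2.49 ppm PCBs (maximum fillet value in the 2000 study),
# one half-pound (0.227 kg) meal per week, 70 kg adult body weight.
dose = fish_dose_mg_per_kg_day(2.49, 0.227, 1.0, 70.0)
print(f"Estimated dose: {dose:.5f} mg/kg/day")  # about 0.00115 mg/kg/day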
The community surrounding WNY has concerns regarding the Anacostia River's surface water quality. No public health hazards were specifically attributed to chemical contaminants in local surface water, nor were WNY operations implicated as the primary source of Anacostia River pollutants. The public, however, expressed a desire for WNY to immediately address the issue by implementing remediation efforts to improve local surface water quality. The status of the various sites at WNY and remediation projects are outlined in the Public Health Action Plan section and Appendix A of this report.
On September 26 and 27, 2000, ATSDR held sessions for the public to express their health and environmental concerns with respect to WNY. Our public sessions were held at Van Ness Elementary School and the Anacostia Park Pavilion. During these sessions, ATSDR spoke with representatives from the National Park Service, the Student Conservation Association, Inc., and the Navy. ATSDR also spoke with eight fishermen in Anacostia Park on September 27. Three fishermen said they were catching-and-releasing while five others indicated they eat the fish sometimes and were not aware of consumption advisories.
A draft of this public health assessment was released for public comment from September 28 through October 31, 2001. Although no public comments were received, ATSDR revised this assessment based on comments from the Navy (Navy 2001) and the DC Department of Health (DC Department of Health 2001b).
ATSDR recognizes that infants and children may be more sensitive to exposures than adults in communities with contamination in their water, soil, air, or food. This sensitivity is a result of a number of factors. Children are more likely to be exposed to soil or surface water contamination because they play outdoors and often bring food into contaminated areas. For example, children may come into contact with and ingest soil particles at higher rates than adults do; also, some children with a behavior trait known as "pica" are more likely than others to ingest soil and other nonfood items. Children are shorter than adults, which means they can breathe dust, soil, and any vapors close to the ground. Also, they are smaller, resulting in higher doses of chemical exposure per body weight. The developing body systems of children can sustain permanent damage if toxic exposures occur during critical growth stages. Because children depend completely on adults for risk identification and management decisions, ATSDR is committed to evaluating their special interests at sites such as WNY, as part of the ATSDR Child Health Initiative.
ATSDR has attempted to identify populations of children in the vicinity of WNY and any completed exposure pathways to these children. During the 1999 ATSDR site visit, one adolescent child under the age of 18 lived on WNY property. The community surrounding WNY contains residential neighborhoods with children and schools, but children cannot easily trespass on to WNY property due to perimeter fencing and military security measures. Children, however, may infrequently visit WNY during group tours and visits to the on-site museum. The tours and museum do not expose WNY visitors to contaminated areas or public health hazards.
Residential and visiting children may access Luetze Park, but not Admiral's Row gardens and house lawns which contain lead levels above 400 ppm. Access to Admiral's Row is prevented by site fencing and land use restrictions. Luetze Park is grass covered and contains surface soil lead concentrations too low to pose a public health hazard to children.
In the past, prior to Naval land use restrictions, there may have been limited child exposure to the lead-contaminated surface soil. Currently, all child exposures to on-site contaminants are prevented because children do not drink the underlying groundwater, contact Admiral's Row surface soils, access subsurface soils, or come into contact with any other known contaminated areas.
The Anacostia River has been impacted by chemical pollutants released into the river from a variety of sources. Children should not come into prolonged direct contact with the pollutants since children do not swim in the Anacostia River. Children may eat fish from the river, however, if parents do not follow the Washington, D.C., fish consumption advisory for the Anacostia and Potomac Rivers. If children do eat locally-caught fish, the chemical residues in the fish could pose a public health hazard for children. ATSDR recommends that children and parents observe the Washington, D.C., fish consumption advisory. We also recommend raising awareness about the fishing advisory among residents and health care providers.
ATSDR concludes that past, current, and future exposures to groundwater, surface water, and sediment do not pose a public health hazard for children because exposure is minimal, if it occurs at all. Past exposure to lead in surface soil at Admiral's Row is a completed exposure pathway with the potential to have adversely affected child health. The consumption of local fish poses a potential child health hazard and children should not eat fish caught in the lower reaches of the Anacostia River.
1. The FDA action or tolerance levels were established for seafood sold through interstate commerce. They were developed to protect humans from harmful substances in commercial foods. Although the FDA levels were not developed as regulatory standards for freshwater fish, they are often used by states as guidance when setting freshwater fish consumption advisories.
How do I make my graphs come out square?
Making graphs come out square
Nicholas J. Cox, Durham University, UK
Stata’s graphs are produced wider than high, “landscape”
in one terminology. Naturally, this shape may not always be best.
You might want a specific shape of graph for a
report or a presentation; for example, with
only a few variables or groups the box plots produced by graph, box
look too fat to many eyes. These choices depend on your purposes and your tastes.
In particular, some users may prefer a square
shape, so this FAQ explains how to get square graph images. Once you know
how to do that, you can produce other rectangular shapes.
The manual entry [G] gph and the on-line help for
gph explain the
bbox() option (think “bounding box”). This option takes a
comma-separated list of seven numbers, which specify top, left, bottom,
right, text height, text width, and rotation of the graph. Here we are only
concerned with the first four. These specify the positions of the corners of
the whole graph image, using a row and column system in which row numbers
increase from top to bottom and column numbers increase from left to right.
That is, a table or matrix convention is used, rather than a Cartesian
The whole graph image includes everything, including whatever titles you put
on the top, left, bottom, and right of the data region, which is the
rectangle defined by the graph axes. The precise size and shape of the data
region will be sensitive to titles and other marginal material, including
the space tuned by the gap() option, see help on
graxes. Note it is not easy to control the exact position and
shape of the data region without low-level programming using gph.
The default value of bbox() is (0,0,23063,32000,923,444,0),
which sets the extent of the graph image to the maximum possible. From this,
you can see that a square shape can be achieved by, for example,
(0,0,23063, 23063, 923,444,0). (As said earlier, we are not concerned
here with the last three arguments, controlling text height, width, and
So, very simply, try this with the auto data:
. sort foreign
. graph mpg, by(foreign) box
. graph mpg, by(foreign) box bbox(0,0,23063,23063,923,444,0)
Clearly, you could produce other sizes and shapes by changing the four
corner positions. If you wanted to do this repeatedly, you might want to
keep the option arguments you desire in a global macro with a more concise
and more memorable name, and then invoke it as needed:
. global sq "bbox(0,0,23063,23063,923,444,0)"
. graph mpg, by(foreign) box $sq
. graph price, by(foreign) box $sq
You could have such a global macro automatically available if you defined it
in a profile.do, as explained in the FAQ How can I automatically
execute certain commands every time I start Stata?
The user-written program sqr, available from SSC-IDEAS, allows you to
produce graphs that are square, or with any other aspect ratio, within
Stata for Windows, Stata for Unix, and Stata for Mac.
14 Mar. 2001
TCP Timestamping can be used to retrieve information about your system that you may not wish to be public. It turns out that TCP Timestamping is equal to the uptime (after a fashion) of many systems, and as such can give you extra information about the running system.
The information has been provided by Bret McDanel.
The NMap port scanner includes the ability to scan hosts TCP timestamping and determining uptime information remotely.
What is Timestamping? How can it be used to gain information about a running system? Timestamping is a TCP option, which may be set, and if set takes 12 bytes in the header (for each packet) in addition to the 20 bytes a TCP header normally takes. This is exclusive of any other options. What good is this overhead? According to RFC1323:
"The timestamps are used for two distinct mechanisms: RTTM (Round Trip Time Measurement) and PAWS (Protect Against Wrapped Sequences).".
Anyone interested in TCP Timestamps should read RFC1323 (these are not the IP timestamping options). The fact that timestamping exists isn't anything special in itself, but how the value is populated and how the value is set is somewhat interesting.
4.4BSD increments the timestamp clock once every 500ms and this timestamp clock is reset to 0 on a reboot - TCP/IP ILLUS v1, p349.
The timestamp value to be sent in TSval is to be obtained from a (virtual) clock that we call the "timestamp clock". Its values must be at least approximately proportional to real time, in order to measure actual RTT. - RFC1323 May 1992
Note that the RFC does not dictate that the timestamp clock be tied to system uptime, so any system that doesn't conform to this is perfectly valid (i.e. Windows 2000). Additionally, the rate at which each system increments the clock need not be disclosed either, as the timestamp value is only echoed back to the sender for the sender to process.
This means that in 4.4BSD we can use this number to directly tell the time that a system has been up. All we have to do is make a connection and record the received timestamp. Not everyone implements timestamping this way, however, and this yields various results on different operating systems: Linux, for instance, increments every 10 ms (100 ticks/sec), and Cisco IOS increments every 1 ms (1000 ticks/sec). Windows 95/98/NT4 do not support Timestamping (although rumor has it that there is a patch to enable RFC1323 functionality on 95/98/NT4). Win2k does, but this value does not appear to be directly related to uptime.
This means that in order to tell the uptime we need to know what OS we are looking at, or at the very least make multiple connections and try to guess what the increment is based on elapsed time vs. increment.
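A minimal sketch of that multiple-connection approach follows. It assumes you have already recorded pairs of local wall-clock times and remote TSval values (for example from a packet capture), and that the remote timestamp clock started at zero on boot, which is not true of every system.

# Estimate the remote tick rate from two observations, then convert the
# latest TSval to an uptime. Observation capture is not shown here.
import datetime

def estimate_uptime(t1, ts1, t2, ts2):
    # ts1, ts2: remote TSval values seen at local wall-clock seconds t1, t2
    ticks_per_second = (ts2 - ts1) / (t2 - t1)
    # Snap to a common rate (2, 10, 100, 1000 per second) to absorb jitter.
    rate = min((2, 10, 100, 1000), key=lambda r: abs(r - ticks_per_second))
    return datetime.timedelta(seconds=ts2 / rate)

# TSval grew by about 6020 over 60 seconds, so the clock runs at roughly
# 100 ticks/sec (Linux-like) and 2,146,020 ticks is about 6 hours of uptime.
print(estimate_uptime(0.0, 2_140_000, 60.0, 2_146_020))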
There are some limitations to using this method for recording uptime. Certain systems have a maximum limit on how long their 'uptime' can be.
The timestamp is a 32-bit number (signed), and as such, it will overflow into the sign bit after 2147483647 ticks. Based on the number of ticks per second, you can easily determine when this will roll over.
(leap year included)
OS          Ticks/sec   Rollover time
4.4BSD      2           34 years, 8 days, 17:27:27
Solaris 2   10          6 years, 293 days, 22:53:00
Linux 2.2+  100         248 days, 13:13:56
Cisco IOS   1000        24 days, 20:31:23
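The rollover figures above follow directly from the 31-bit positive range of the counter; the short sketch below reproduces them from the tick rates.

# rollover = 2**31 ticks divided by the tick rate
RATES = {"4.4BSD": 2, "Solaris 2": 10, "Linux 2.2+": 100, "Cisco IOS": 1000}
for system, rate in RATES.items():
    seconds = 2 ** 31 / rate
    days, remainder = divmod(seconds, 86_400)
    print(f"{system:10s} {int(days):6d} days, {remainder / 3600:4.1f} hours")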
One can also map out the number of systems in a load-balanced environment by connecting repeatedly to the group of machines, and inspecting the Timestamps. For each different time you have a different machine.
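A rough sketch of that idea: cluster repeated observations by the boot time they imply and count the clusters. The observation list, tick rate, and tolerance below are assumptions chosen for illustration.

def count_backends(observations, ticks_per_second=100, tolerance_s=5.0):
    # observations: (local wall-clock seconds, remote TSval) pairs.
    # Samples whose implied boot times differ by more than tolerance_s
    # are treated as coming from different machines.
    if not observations:
        return 0
    boot_times = sorted(t - ts / ticks_per_second for t, ts in observations)
    machines = 1
    for previous, current in zip(boot_times, boot_times[1:]):
        if current - previous > tolerance_s:
            machines += 1
    return machines

samples = [(0.0, 500_000), (1.0, 800_000), (2.0, 500_200), (3.0, 800_300)]
print(count_backends(samples))  # two distinct boot times -> 2 machines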
RFC1323 talks about the frequency the 'timestamp clock' should be updated. The receiver algorithm does place some requirements on the frequency of the timestamp clock.
(a) The timestamp clock must not be "too slow".
It must tick at least once for each 2**31 bytes sent. In
fact, in order to be useful to the sender for round trip
timing, the clock should tick at least once per window's
worth of data, and even with the RFC-1072 window
extension, 2**31 bytes must be at least two windows.
To make this more quantitative, any clock faster than 1
tick/sec will reject old duplicate segments for link
speeds of ~8 Gbps. A 1ms timestamp clock will work at
link speeds up to 8 Tbps (8*10**12) bps!
(b) The timestamp clock must not be "too fast".
Its recycling time must be greater than MSL seconds.
Since the clock (timestamp) is 32 bits and the worst-case
MSL is 255 seconds, the maximum acceptable clock frequency
is one tick every 59 ns.
However, it is desirable to establish a much longer
recycle period, in order to handle outdated timestamps on
idle connections (see Section 4.2.3), and to relax the MSL
requirement for preventing sequence number wrap-around.
With a 1 ms timestamp clock, the 32-bit timestamp will
wrap its sign bit in 24.8 days. Thus, it will reject old
duplicates on the same connection if MSL is 24.8 days or
less. This appears to be a very safe figure; an MSL of
24.8 days or longer can probably be assumed by the gateway
system without requiring precise MSL enforcement by the
TTL value in the IP layer.
Based upon these considerations, we choose a timestamp clock frequency in the range 1 ms to 1 sec per tick. This range also matches the requirements of the RTTM mechanism, which does not need much more resolution than the granularity of the retransmit timer, e.g., tens or hundreds of milliseconds.
As you can see all of these systems are within the RFC in their timings, however varied.
If you want to quickly get the Timestamp value, you can fire up tcpdump, and watch for it. Here is an example of what you may see and how to interpret the data:
> myhost.12345 > theirhost.22: . 1:1(0) ack 1 win 5840 <nop,nop,timestamp 2146020 1824107> (DF)
The timestamps are located near the end of the line, where the TCP Options are printed. The first timestamp is sent by 'myhost', the second is what 'theirhost' last sent us (we are expected to return that to them). The numbers are the number of ticks that have accumulated in the 'timestamp clock' and if the OS supports it, can reveal an uptime.
The information below was obtained by the author and several people running various OSs that were scanned and the results compared against the actual uptime.
If you are considering disabling timestamping on your system, please read RFC1323 for more information (especially if you are on a fast network).
Win2k sends the timestamp only after the syn/ack handshake is complete (it sends 0 as the TS value during the 3-way handshake) and increments every 100 ms, starting from an initial random number.
95/98 does not support TS
NT 3.5/4 does not support TS
Linux
Sends TS on the first packet replied to; by default you always get a TS.
To disable:
echo 0 >/proc/sys/net/ipv4/tcp_timestamps
To enable:
echo 1 >/proc/sys/net/ipv4/tcp_timestamps
Increments 100 ticks/sec
2.0.x does not support TCP Timestamps
2.1.90+ supports Timestamps
2.2.x supports Timestamps
2.4.x supports Timestamps
IRIX
5.3+ supports Timestamps
/var/sysgen/master.d/bsd contains the kernel variables; after editing you must run /etc/autoconfig and reboot.
Under 6.5, edit /var/sysgen/mtune/bsd or use systune (like the BSDs' sysctl). Tick rate: 2/sec
HP-UX
9.x No (9.05 and 9.07 have patches to support Timestamps)
To enable you must poke the kernel variable tcp_dont_tsecho to 0
10.00, 10.01, 10.10, 10.20, and 10.30 support Timestamps
11 Enabled by default
AIX
3.2 & 4.1 support Timestamps
Tunable via the 'no' command
SunOS/Solaris
4.1.4 No (may be purchased as a Sun Consulting Special)
2.5 No (may be purchased as a Sun Consulting Special)
2.6 may be uptime but rolls over quickly; increments 1000 ticks/second
2.7 tick rate 100/sec (it is not exactly uptime; there was a 5-minute skew on a 112-day uptime)
8 it is uptime, 100 ticks/second
To always negotiate timestamps:
ndd -set /dev/tcp tcp_tstamp_always 1
If the parameter is set (non-zero), then the TCP timestamp option will always be negotiated during connection initiation. The scale option will always be used if the remote system sent a timestamp option during connection initiation. To use the timestamp, both hosts have to support RFC 1323.
Cisco IOS
By default disabled.
To change:
[no] ip tcp timestamp
Only a Cisco 2524 running 12.0(9) was tested:
cisco 2524 (68030) processor (revision J) with 14336K/2048K bytes of memory.
Updates 1000 ticks/sec; resets to 0 at boot
comos (livingston/lucent portmasters)
Do not support TS
Does not support TS
11.0 Supports Timestamps
8.0 Supports Timestamps
(Compaq) Digital Unix
3.2 & 4.0 do not support Timestamps
Working in a basement (the source article actually says that), researchers are building on the age-old principle of using finely ground charcoal (a form of carbon) to increase the surface area holding the charge of a capacitor. Using custom-built/grown carbon nanotubes instead of coarse charcoal, a system begins to emerge that can accept a charge very quickly and then release it at a predictable rate. And while capacitors themselves are not as good at providing long-term power release like a battery, putting an incredibly high number of them together in a battery-like package will allow that package to charge up almost instantly and then provide power for the long haul.
The physical construction of the devices allows the carbon nanotubes to be applied to a piece of silicon in acetylene, the gas commonly used in oxyacetylene torches for welding. The acetylene acts as a vehicle that allows the carbon nanotubes to be deposited onto a silicon substrate in a particular order. Once deposited they are then capable of holding a small electric charge and releasing it. Put enough of these tubes together and you have the makings of a battery.
This technology is still in its R&D phase, and the lead researcher hopes to have the research completed by Fall, with viable products coming out sometime after that. Of course, I'm sure politics will play a role as well, as it seems very unlikely that by now researchers haven't found better battery technology than the traditional lithium-ion and its cousins.
Read more at The Boston Globe.
USER COMMENTS 14 comment(s)
Um… (10:07am EST Tue Jun 27 2006)
…I submitted this story to you guys about a week ago. Not enough links with the submission?
Either way, this seems like a really good idea. Capacitors may not have the lifespan in terms of storing a charge as a battery. However the majority of battery uses where rechargeable batteries are used are short to medium term anyway. However if it only takes a few seconds to recharge the 'battery' then the fact that it can lose it's charge over a period of months is not important.
The 'battery' can always be recharged in seconds anyway, so flash charge it at every opportunity. Unlike a standard rechargeable cell there is likely to be zero memory effect and charging a partially discharged capacitance battery will have no degredation on it's ability to take or hold a charge. It will simply charge to it's maximum each time it is charged.
Sometimes blue sky research like this reminds folks why it's important for universities and big corporates to do fundamental research and not simply product development.
wow (11:15am EST Tue Jun 27 2006)
toyota would benefit greatly with this. electrical motors has such high torque… with this much capacity, we could be seeing Lexus in 0-60 in less than 3 seconds. - by yep
HighlandCynic (12:03pm EST Tue Jun 27 2006)
You're not in our submission area. I searched for “Carbon” “nanotube” “battery” and (just now) “Highland” and “Cynic”.
It got lost in the shuffle. Our apologies.
- by RickGeek
Lol (12:32pm EST Tue Jun 27 2006)
Geek.com stole another article and claimed it as their own. Lol, Geek.com is becoming like a certain other content-stealing site, especially now with those obnoxious Blu-ray ads that cripple firefox.
Grats, Geek.com. I used to love this site so damn much, but it's gone downhill way too much. - by Shaun T.
conspiracy theory (12:46pm EST Tue Jun 27 2006)
Of course the batteries will be expensive.
Companies have to watch out for the profit margin
Have to have something to sell
Rick (4:21pm EST Tue Jun 27 2006)
NP, I had sent a few submissions in around the same time and saw none appear, I figured that they were too off the wall for y'all. I've had some problems getting comments to post properly too, they appear to post and don't show up. On several occasions I have had to repost three or four times before the comment finally appears.
Ah well. The one thing I wish these guys at MIT could do is give us a really concrete ETA for these 'batteries' to make it to market. They would be fantastic in the electric or Hybrid car market, not to mention the almost limitless portable electronics market.
Damn those scientist (4:34pm EST Tue Jun 27 2006)
They are wasting time and money! Please send EE to solve this problem. - by calling EE
re: calling EE (9:05pm EST Tue Jun 27 2006)
we are the ones that make the world. - by EE
They could go anywhere (2:05pm EST Wed Jun 28 2006)
Could you build these into the LCD display and other devices? Are there other technologies that would bundle with this nanotube technology. Maybe a photocell technology would work with it. - by DS
Dear guid, (5:37pm EST Wed Jun 28 2006)
Batterys will not be expensive if you use mind power. This new un expensive piece of technology can be used if you study the mutagens of the neuro terminal in the brain of a mammal, modified to the specifications of
e230.258509299999 which is equal to 9.999999006×10^99. Trust this formula i am an experienced scientist. - by Matty
carbon nantube batteries (5:38pm EST Wed Jun 28 2006)
i am not happy with them i want to put them in my car for it to go faster but they are not that good they wont muck my car fly it is say i drove my car off a brige and i did not fly i hurt my nose. - by jon smith
carbon nantube batteries (5:43pm EST Wed Jun 28 2006)
that is not true Batterys will not be expensive if you use mind power. This new un expensive piece of technology can be used if you study the mutagens of the neuro terminal in the brain of a mammal, modified to the specifications of
e230.258509299999 which is equal to 9.999999006×10^99. Trust this formula i am an experienced - by jon smith
Im A Smart Gurl…?? (6:04pm EST Wed Jun 28 2006)
so like wat r u talking bout?? hmmmm i dont get this?? but what i do know….
Systematicly Speaking From A Dybolical Point Of View Your Fundamental Facilties Are Not Suffisently Sophsitcated to Colaberate with My FolIshpies… (dont know how to spell…) - by *SmArTiE PaNtZ GuRl*
Yes, very smart . . . (6:07pm EST Wed Jun 28 2006)
u actually cannot spell at all - by * ChiLd PrOdiGy*
HISTORY OF FLIGHT
On January 2, 2000, at approximately 0950 mountain standard time, a Cessna 421B, N421CF, was destroyed following impact with terrain near Telluride, Colorado. The non-instrument rated private pilot, the sole occupant in the airplane, was fatally injured. The airplane was being operated by the pilot under Title 14 CFR Part 91. Instrument meteorological conditions prevailed for the cross-country personal flight that originated from Montrose, Colorado, approximately 35 minutes before the accident. The pilot had not filed a flight plan; family members said the pilot was en route to Las Cruces, New Mexico.
The family reported the pilot missing on January 4, and a search was commenced. Search and rescue team members located the airplane at approximately 1430 on January 6. There was approximately 18 to 24 inches of snow on the ground at the accident site.
Federal Aviation Administration (FAA) radar documented the airplane's departure from Montrose at approximately 0915 on January 2. At 0946:19, the airplane began climbing at 1,792 feet per minute (fpm), from 14,300 feet msl to 16,600 feet msl. The radar shows that 19 seconds later, the airplane had lost 4,000 feet of altitude, a descent rate of 12,631 fpm. The airplane then climbed back to 13,300 feet msl at a rate of 1,448 fpm. One more primary radar return was recorded at 0948:34 (no altitude was documented), and then the airplane disappeared from radar.
The pilot's old flight logbook (his current logbook was never found) indicated that he received his private pilot license on March 19, 1970. The pilot purchased N421CF in November of 1998, and he attended a Cessna 421B ground and flight training school, Double Eagle Aviation, Tucson, Arizona, in January of 1999. On his application for the school, he reported that he had 1,500 hours of single-engine flight time, and 1,500 hours of multiengine flight time. Instructors at the school reported that the pilot had good natural flying skills and was a quick learner. They did report that he was "somewhat weak with instrument reference."
The pilot reported on an insurance application, on December 11, 1999, that he had 3,700 hours of flight experience, and 200 hours of flight experience in N421CF. The pilot's last FAA medical certificate was dated March 31, 1999. The pilot did not have an instrument rating.
The airplane was a twin engine, propeller-driven, pressurized aircraft, which was manufactured in 1974 by Cessna Aircraft Company. It could seat eight people. The airplane was powered by two Teledyne Continental GTSIO-520-H turbocharged, six cylinder, reciprocating, horizontally opposed, fuel injected engines which had a maximum takeoff rating of 375 horsepower at sea level. The last annual inspection was performed in Montrose, Colorado, on May 11, 1999. At the time of the accident, the aircraft maintenance records and hour meter suggest that the airframe had accumulated approximately 3,154 hours.
Fuel purchase records from Montrose Regional Airport indicate that N421CF received 108 gallons of 100LL aviation fuel on December 23, 1999.
At 0953, the weather conditions at the Cortez Municipal Airport, Cortez, Colorado (elevation 5,914 feet), 170 degrees 22 nautical miles (nm) from the accident site, were as follows: wind 240 degrees at 5 knots; visibility 5 statute miles (sm) with snow showers; cloud condition broken 2,400 feet, overcast 3,200 feet; temperature 28 degrees Fahrenheit; dew point 28 degrees Fahrenheit; altimeter setting 29.84 inches of mercury.
At 0953, the weather conditions at the Animas Air Park, Durango, Colorado (elevation 6,684 feet), 110 degrees 45 nm from the accident site, were as follows: wind 110 degrees at 4 knots; visibility 1 sm with snow showers; cloud condition broken 800 feet, overcast 1,800 feet; temperature 25 degrees Fahrenheit; dew point 25 degrees Fahrenheit; altimeter setting 29.84 inches of mercury.
Snowmobilers, who were in the vicinity of the impact site, said snow showers made visibility less than 1/2 sm at approximately 0950. Telluride Regional Airport (elevation 9,078 feet), 045 degrees at 33 nm, reported having 6 to 8 inches of snow throughout the day. A pilot departing Telluride Regional Airport, on a heading of 300 degrees, at approximately 1015, said that it was clear right over Telluride. He said that as he climbed out he got into weather at 12,000 feet mean sea level (msl), and didn't break out until 22,000 feet msl. He also said that he experienced no icing or turbulence during his climb out.
WRECKAGE AND IMPACT INFORMATION
The airplane crashed in rolling mountainous terrain (elevation 8,250 feet) partially covered with 5- to 20-foot-tall trees (N37 degrees, 43.50 minutes; W108 degrees, 25.20 minutes). Missing branches from the trees on a ridgeline (elevation 8,500 feet) overlooking the first impact point suggest that the airplane was approximately 30 degrees nose low and in a 25-degree right bank. The missing branches on the northwest side of the ridgeline were oriented along a heading of 320 degrees.
Descending down the ridgeline towards a small valley below was a scattered debris path comprised of components of the right outboard wing: the right wing auxiliary (inboard) fuel tank, a 4 foot wing spar section, the right wing aileron, and the right wing tip main fuel tank. As the debris path crossed the 300-foot wide meadow, its ground track changed to 334 degrees. At this point, the terrain began to rise, and two 4x10 foot craters were located (860 feet from the debris field start point). Each crater contained propeller blades (five of the six blades were found, the sixth was found after the snow melted in the spring). Several small red plastic lens fragments were found approximately 10 to 14 feet to the right of the right hand crater.
The left engine was found on the right side of the debris path, at the 990-foot point, and the right engine was found on the left side of the debris path, at the 1,150 foot point. Physical evidence at the accident site suggested that the airplane impacted the terrain, at the 860-foot point, inverted.
The fuselage and empennage were found 1,550 feet from the debris path start point. The last piece of wreckage, a wheel, was found 1,600 feet from the debris path start point.
All the major components of the airplane were accounted for at the accident site. The flight control surfaces were all identified, but control cable continuity could not be established due to impact damage. Both engines were severely impact damaged; neither crankshaft could be rotated. There was no evidence of pre or postimpact fire. No preimpact engine or airframe anomalies, which might have affected the airplane's performance, were identified.
MEDICAL AND PATHOLOGICAL INFORMATION
An autopsy was performed on the pilot by the Southwest Memorial Hospital, Cortez, Colorado, on January 7, 2000.
The FAA's Civil Aeromedical Institute (CAMI) in Oklahoma City, Oklahoma, performed toxicology tests on the pilot. According to CAMI's report (#200000009001), carbon monoxide and cyanide tests were not performed. No volatiles or drugs were detected in the muscle samples.
The airplane, including all components and logbooks, was released to a representative of the owner's insurance company on August 28, 2000.
The Dream of Gerontius
The Dream of Gerontius, Op. 38, is a work for voices and orchestra in two parts composed by Edward Elgar in 1900, to text from the poem by John Henry Newman. It relates the journey of a pious man's soul from his deathbed to his judgment before God and settling into Purgatory. Elgar disapproved of the use of the term "oratorio" for the work, though his wishes are not always followed. The piece is widely regarded as Elgar's finest choral work, and some consider it his masterpiece.
The work was composed for the Birmingham Music Festival of 1900; the first performance took place on 3 October 1900, in Birmingham Town Hall. It was badly performed at the premiere, but later performances in Germany revealed its stature. In the first decade after its premiere, the Roman Catholic dogma in Newman's poem caused difficulties in getting the work performed in Anglican cathedrals, and a revised text was used for performances at the Three Choirs Festival until 1910.
Elgar was not the first composer to consider setting John Henry Newman's poem "The Dream of Gerontius". Dvořák had considered it fifteen years earlier, and had discussions with Newman, before abandoning the idea. Elgar knew the poem well. He had owned a copy since at least 1885, and in 1889 he was given another copy as a wedding present. This contained handwritten copies of extensive notes that had been made by General Gordon, and Elgar is known to have considered the text in musical terms for several years. Throughout the 1890s, Elgar had composed several large-scale works for the regular festivals that were a key part of Britain's musical life. In 1898, based on his growing reputation, he was asked to write a major work for the 1900 Birmingham Triennial Music Festival. He was unable to start work on the poem that he knew so well until the autumn of 1899, and did so only after first considering a different subject.
Composition proceeded quickly. Elgar and August Jaeger, his editor at the publisher Novello, exchanged frequent, sometimes daily, letters, which show how Jaeger helped in shaping the work, and in particular the climactic depiction of the moment of judgment. By the time Elgar had completed the work and Novello had printed it, there were only three months to the premiere. The Birmingham chorus, all amateurs, struggled to master Elgar's complex, demanding and somewhat revolutionary work. Matters were made worse by the sudden death of the chorus master and his replacement by an elderly musician who found the music beyond him. The conductor of the premiere, Hans Richter, received a copy of the full score only on the eve of the first orchestral rehearsal. The soloists at the Birmingham Festival on 3 October 1900 were Marie Brema, Edward Lloyd and Harry Plunket Greene. The first performance was, famously, a near disaster. The choir could not sing the music adequately, and two of the three soloists were in poor voice. Elgar was deeply upset at the debacle, telling Jaeger, "I have allowed my heart to open once – it is now shut against every religious feeling & every soft, gentle impulse for ever." However, many of the critics could see past the imperfect realisation and the work became established in Britain once it had had its first London performance in 1903, at the Roman Catholic Westminster Cathedral.
Shortly after the premiere, the German conductor and chorus master Julius Buths made a German translation of the text and arranged a successful performance in Düsseldorf on 19 December 1901. Elgar was present, and he wrote "It completely bore out my idea of the work: the chorus was very fine". Buths presented it in Düsseldorf again on 19 May 1902 in conjunction with the Lower Rhenish Music Festival. The soloists included Muriel Foster, and Elgar was again in the audience, being called to the stage twenty times to receive the audience's applause. Buths's festival co-director Richard Strauss was impressed enough by what he heard that at a post-concert banquet he said: "I drink to the success and welfare of the first English progressive musician, Meister Elgar". This greatly pleased Elgar, who considered Strauss to be "the greatest genius of the age".
The strong Roman Catholicism of the work gave rise to objections in some influential British quarters; some Anglican clerics insisted that for performances in English cathedrals Elgar should modify the text to tone down the Roman Catholic references. There was no Anglican objection to Newman's words in general: Arthur Sullivan's setting of his "Lead, Kindly Light", for example, was sung at Westminster Abbey in 1904. Disapproval was reserved for the doctrinal aspects of "The Dream of Gerontius" repugnant to Anglicans, such as Purgatory. Elgar was unable to resist the suggested bowdlerisation, and in the ten years after the premiere the work was given at the Three Choirs Festival with an expurgated text. The Dean of Gloucester refused admission to the work until 1910. This attitude lingered until the 1930s, when the Dean of Peterborough banned the work from the cathedral. Elgar was also faced with many people's assumption that he would use the standard hymn tunes for the sections of the poem that had already been absorbed into Anglican hymn books: "Firmly I believe and truly", and "Praise to the Holiest in the Height".
The Dream of Gerontius received its U.S. premiere on 23 March 1903 at The Auditorium, Chicago, conducted by Harrison M. Wild. It was given in New York, conducted by Walter Damrosch three days later. It was performed in Sydney, Australia, in 1903. The first performance in Vienna was in 1905; the Paris premiere was in 1906; and by 1911 the work received its Canadian premiere in Toronto under the baton of the composer.
In the first decades after its composition leading performers of the tenor part included Gervase Elwes and John Coates, and Louise Kirkby Lunn, Elena Gerhardt and Julia Culp were admired as the Angel. Later singers associated with the work include Muriel Foster, Clara Butt, Kathleen Ferrier, and Janet Baker as the Angel, and Heddle Nash, Steuart Wilson and Richard Lewis as Gerontius.
The work has come to be generally regarded as Elgar's finest choral composition. The Grove Dictionary of Music and Musicians rates it as "one of his three or four finest works", and the authors of The Record Guide, writing in 1956 when Elgar's music was comparatively neglected, said, "Anyone who doubts the fact of Elgar's genius should take the first opportunity of hearing The Dream of Gerontius, which remains his masterpiece, as it is his largest and perhaps most deeply felt work." In the Oxford Dictionary of National Biography, Michael Kennedy writes, "[T]he work has become as popular with British choral societies as Messiah and Elijah, although its popularity overseas did not survive 1914. Many regard it as Elgar's masterpiece. ... It is unquestionably the greatest British work in the oratorio form, although Elgar was right in believing that it could not accurately be classified as oratorio or cantata."
Newman's poem tells the story of a soul's journey through death, and provides a meditation on the unseen world of Roman Catholic theology. Gerontius (a name derived from the Greek word geron, "old man") is a devout Everyman. Elgar's setting uses most of the text of the first part of the poem, which takes place on Earth, but omits many of the more meditative sections of the much longer, otherworldly second part, tightening the narrative flow.
In the first part, we hear Gerontius as a dying man of faith, by turns fearful and hopeful, but always confident. A group of friends (also called "assistants" in the text) joins him in prayer and meditation. He passes in peace, and a priest, with the assistants, sends him on his way with a valediction. In the second part, Gerontius, now referred to as "The Soul", awakes in a place apparently without space or time, and becomes aware of the presence of his guardian angel, who expresses joy at the culmination of her task (Newman conceived the Angel as male, but Elgar gives the part to a female singer). After a long dialogue, they journey towards the judgment throne.
They safely pass a group of demons, and encounter choirs of angels, eternally praising God for His grace and forgiveness. The Angel of the Agony pleads with Jesus to spare the souls of the faithful. Finally Gerontius glimpses God and is judged in a single moment. The Guardian Angel lowers Gerontius into the soothing lake of Purgatory, with a final benediction and promise of a re-awakening to glory.
The work calls for a large orchestra of typical late Romantic proportions, double chorus with semichorus, and usually three soloists. Gerontius is sung by a tenor, and the Angel is a mezzo-soprano. The Priest's part is written for a baritone, while the Angel of the Agony is more suited to a bass; as both parts are short they are usually sung by the same performer, although some performances assign different singers for the two parts.
The choir plays several roles: attendants and friends, demons, Angelicals (women only) and Angels, and souls in Purgatory. They are employed at different times as a single chorus in four parts, or as a double chorus in eight parts or antiphonally. The semichorus is used for music of a lighter texture; usually in performance they are composed of a few members of the main chorus; however, Elgar himself preferred to have the semi-chorus placed near the front of the stage.
The required instrumentation comprises two flutes (II doubling piccolo), two oboes and cor anglais, two clarinets in A and bass clarinet, two bassoons and contrabassoon, four horns, three trumpets, three trombones, tuba, timpani plus three percussion parts, harp, organ, and strings. Elgar called for an additional harp if possible, plus three additional trumpets (and any available percussionists) to reinforce the climax in Part II, just before Gerontius's vision of God.
Each of the two parts is divided into distinct sections, but differs from the traditional oratorio in that the music continues without significant breaks. Elgar did not call the work an oratorio, and disapproved when other people used the term for it. Part I is approximately 35 minutes long and Part II is approximately 60 minutes.
- Jesu, Maria – I am near to death
- Rouse thee, my fainting soul
- Sanctus fortis, sanctus Deus
- Proficiscere, anima Christiana
- I went to sleep
- It is a member of that family
- But hark! upon my sense comes a fierce hubbub
- I see not those false spirits
- But hark! a grand mysterious harmony
- Thy judgment now is near
- I go before my judge
- Softly and gently, dearly-ransomed soul
Part I
The work begins with an orchestral prelude, which presents the most important motifs. In a detailed analysis, Elgar's friend and editor August Jaeger identified and named these themes, in line with their functions in the work.
Gerontius sings a prayer, knowing that life is leaving him and giving voice to his fear, and asks for his friends to pray with him. For much of the soloist's music, Elgar writes in a style that switches between exactly notated, fully accompanied recitative, and arioso phrases, lightly accompanied. The chorus adds devotional texts in four-part fugal writing. Gerontius's next utterance is a full-blown aria Sanctus fortis, a long credo that eventually returns to expressions of pain and fear. Again, in a mixture of conventional chorus and recitative, the friends intercede for him. Gerontius, at peace, submits, and the priest recites the blessing "Go forth upon thy journey, Christian soul!" (a translation of the litany Ordo Commendationis Animae). This leads to a long chorus for the combined forces, ending Part I.
Part II
In a complete change of mood, Part II begins with a simple four-note phrase for the violas which introduces a gentle, rocking theme for the strings. This section is in triple time, as is much of the second part. The Soul's music expresses wonder at its new surroundings, and when the Angel is heard, she expresses quiet exultation at the climax of her task. They converse in an extended duet, again combining recitative with pure sung sections. Increasingly busy music heralds the appearance of the demons: fallen angels who express intense disdain of men, mere mortals by whom they were supplanted. Initially the men of the chorus sing short phrases in close harmony, but as their rage grows more intense the music shifts to a busy fugue, punctuated by shouts of derisive laughter.
Gerontius cannot see the demons, and asks if he will soon see his God. In a barely accompanied recitative that recalls the very opening of the work, the Angel warns him that the experience will be almost unbearable, and in veiled terms describes the stigmata of St. Francis. Angels can be heard, offering praises over and over again. The intensity gradually grows, and eventually the full chorus gives voice to a setting of the section that begins with Praise to the Holiest in the Height. After a brief orchestral passage, the Soul hears echoes from the friends he left behind on earth, still praying for him. He encounters the Angel of the Agony, whose intercession is set as an impassioned aria for bass. The Soul's Angel, knowing the long-awaited moment has come, sings an Alleluia.
The Soul now goes before God and, in a huge orchestral outburst, is judged in an instant. At this point in the score, Elgar instructs "for one moment, must every instrument exert its fullest force." This was not originally in Elgar's design, but was inserted at the insistence of Jaeger, and remains as a testament to the positive musical influence of his critical friendship with Elgar. In an anguished aria, the Soul then pleads to be taken away. A chorus of souls sings the first lines of Psalm 90 ("Lord, thou hast been our refuge") and, at last, Gerontius joins them in Purgatory. The final section combines the Angel, chorus, and semichorus in a prolonged song of farewell, and the work ends with overlapping Amens.
Dedication and superscription
Elgar dedicated his work "A.M.D.G." (Ad maiorem Dei gloriam, "To the greater glory of God", the motto of the Society of Jesus or Jesuits), following the practice of Johann Sebastian Bach, who would dedicate his works "S.D.G." (Soli Deo gloria, "Glory to God alone"). Underneath this he wrote a line from Virgil: "Quae lucis miseris tam dira cupido?" together with Florio's English translation of Montaigne's adaptation of Virgil's line: "Whence so dyre desire of Light on wretches grow?"
At the end of the manuscript score, Elgar wrote this quotation from John Ruskin's Sesame and Lilies:
- This is the best of me; for the rest, I ate, and drank, and slept, loved and hated, like another: my life was as the vapour and is not; but this I saw and knew; this, if anything of mine, is worth your memory.
Richter signed the autograph copy of the score with the inscription: "Let drop the Chorus, let drop everybody—but let not drop the wings of your original genius."
Sir Henry Wood made acoustic recordings of four extracts from The Dream of Gerontius as early as 1916, with Clara Butt as the angel. Edison Bell issued the work in 1924 with Elgar's tacit approval (despite his contract with HMV); acoustically recorded and abridged, it was swiftly rendered obsolete by the introduction of the electrical process, and soon after withdrawn. HMV issued live recorded excerpts from two public performances conducted by Elgar in 1927, with the soloists Margaret Balfour, Steuart Wilson, Tudor Davies, Herbert Heyner, and Horace Stevens. Private recordings from radio broadcasts ("off-air" recordings) also exist in fragmentary form from the 1930s.
The first complete recording was made by EMI in 1945, conducted by Malcolm Sargent with his regular chorus and orchestra, the Huddersfield Choral Society and the Liverpool Philharmonic. The soloists were Heddle Nash, Gladys Ripley, Dennis Noble and Norman Walker. This is the only recording to date that employs different singers for the Priest and the Angel of the Agony. The first stereophonic recording was made by EMI in 1964, conducted by Sir John Barbirolli. It has remained in the catalogues continuously since its first release, and is notable for Janet Baker's singing as the Angel. Benjamin Britten's 1971 recording for Decca was noted for its fidelity to Elgar's score, showing, as the Gramophone reviewer said, that "following the composer's instructions strengthens the music's dramatic impact". Of the other dozen or so recordings on disc, most are directed by British conductors, with the exception of a 1960 recording in German under Hans Swarowsky and a Russian recording (sung in English) under Yevgeny Svetlanov made in 1983.
The BBC Radio 3 feature "Building a Library" has presented comparative reviews of all available versions of The Dream of Gerontius on three occasions. Comparative reviews also appear in The Penguin Guide to Recorded Classical Music, 2008, and Gramophone, February 2003. The recordings recommended by all three are Sargent's 1945 EMI version and Barbirolli's 1964 EMI recording.
See also
- Moore, p. 291
- Moore, p. 290
- Moore, p. 256
- Moore, p. 296
- Moore, pp. 302–316
- Moore, p. 322
- Moore, p. 325
- The Musical Times, 1 November 1900, p. 734
- Moore, p. 331
- Reed, p. 60
- Farach Colton, Andrew, "Vision of the Hereafter", Gramophone, February 2003, p. 36
- McVeagh, Diana. "Elgar, Sir Edward." Grove Music Online. Oxford Music Online. Accessed 21 October 2010. (subscription required)
- Moore, p. 357
- Kennedy, Michael, "Elgar, Sir Edward William, baronet (1857–1934)". Oxford Dictionary of National Biography, Oxford University Press, 2004, accessed 22 April 2010 (subscription required).
- Moore, p. 362
- Reed, p. 61
- Moore, p. 368
- Liner notes to Salome, Decca Records, 2006, OCLC 70277106
- The Times, 13 February 1904, p. 13
- "A Dean's Objections to The 'Dream of Gerontius'," The Manchester Guardian, 17 November 1903. p. 12
- The Times, 11 September 1903 and 13 September 1905
- McGuire, Charles Edward, "Measure of a Man", in Edward Elgar and his World, ed. Byron Adams, Princeton University Press, 2007 p. 6
- Lewis, Geraint, "A Cathedral in Sound", Gramophone, September 2008, p. 50. The Gloucester performance in 1910 was described in The Manchester Guardian (5 September 1910, p. 7) as "given unmutilated for the first time in an Anglican cathedral".
- "Cathedral Ban on 'Gerontius'," The Manchester Guardian, 19 October 1932, p. 9
- Hodgkins, p. 187
- McColl, Sandra, "Gerontius in the City of Dreams: Newman, Elgar, and the Viennese Critics", International Review of the Aesthetics and Sociology of Music, Vol. 32, No. 1 (Jun., 2001), pp. 47–64
- The Musical Times, April 1934, p. 318
- Sackville-West, p. 254
- The name "Gerontius" is not sung in the work, and there is no consensus on how it is pronounced. The Greek "geron" has a hard 'g'; but English words derived from it often have a soft 'g', as in "geriatric"
- "Dream of Gerontius, The", Oxford Companion to Music, Oxford Music Online, accessed 22 October 2010 (subscription required)
- Jaeger's analysis is summarised at Burton, James "The Dream of Gerontius, British Choirs on the Net, accessed 22 October 2010
- Grove; Jenkins, Lyndon (1987), notes to EMI CD CMS 7 63185 2; and Moore, Jerrold Northrop (1975), notes to Testament CD SBT 2025
- Moore, p. 317
- "The Elgar Birthday Records", The Gramophone, June 1927, p. 17
- Essex, Walter, "The Recorded Legacy", The Elgar Society, accessed 22 October 2010
- Building a Library BBC Radio 3; Building a Library, BBC Radio 3
- March, pp. 438–40
- Hodgkins, Geoffrey (1999). The Best of Me – A Gerontius Centenary Companion. Rickmansworth: Elgar Editions. ISBN 0953708209.
- Kennedy, Michael. Portrait of Elgar. Oxford University Press. ISBN 0-19-315414-5.
- March, Ivan (ed.). The Penguin Guide to Recorded Classical Music. London: Penguin Books. ISBN 978-0-14-103336-5.
- Moore, Jerrold Northrop. Edward Elgar: a creative life. Oxford University Press. ISBN 0-19-315447-1.
- Reed, W H (1946). Elgar. London: Dent. OCLC 8858707.
- Sackville-West, Edward; Desmond Shawe-Taylor (1955). The Record Guide. Collins. OCLC 474839729.
Further reading
- Byron Adams "Elgar's later oratorios : Roman Catholicism, Decadence and the Wagnerian Dialectic of Shame and Grace" in The Cambridge Companion to Elgar (Daniel Grimley and Julian Rushton, eds.) Cambridge University Press, Cambridge, 2004 ISBN 0-521-53363-5
- Charles Edward McGuire Elgar's Oratorios: The Creation of an Epic Narrative Ashgate, Aldershot, 2002 ISBN 0-7546-0271-0
- The Dream of Gerontius: Free scores at the International Music Score Library Project
- The full text of the poem (Note: Elgar used about half the poem in his libretto.)
- Elgar - His Music : The Dream of Gerontius - the Libretto
- Elgar - His Music : The Dream of Gerontius - A Musical Analysis – first of a set of six pages on the work
- A comparative review of the available recordings, at least up to 1997
- The Dream of Gerontius (1899–1900)
- The Dream of Gerontius: Synopsis
NACUBO's Accounting Principles Council presents a toolkit that links key performance indicators to organizational goals.
By Teresa Gordon, Roger Patterson, and Jennifer Taylor
Performance measurement reporting in higher education has been a topic of ongoing interest and debate. Many institutions lack a practical, easily accessible example of how an institution might most effectively and efficiently approach the measurement and reporting of performance. At the same time, growing attention from outside higher education suggests that decisions on when and how to report performance ultimately may be made for us. This has spurred renewed interest in defining an easy and meaningful way to meet performance reporting goals.
Performance Measurement’s Progression
American higher education has a long history of measuring performance. The first attempt dates back to the early 1900s when an effort was made to rank colleges by reputation, a harbinger of what was to come later in the century. Often performance measures were used as a way to market an institution rather than as a tool that could help assess progress toward meeting institutional goals and improve the college or university.
Over time, the higher education community began to reflect on its core values. In that environment, performance measurement assumed a more important role as institutions grappled with such questions as how to define and measure quality and how best to measure and improve efficiency and effectiveness.
As campuses wrestled with such questions, external stakeholders began to press for performance reporting programs at public institutions. Tennessee implemented a comprehensive performance measurement program in the 1970s. The program tied funding to performance and developed institutional standards, core performance indicators, and peer comparisons.
Since then, higher education administrators and state lawmakers around the country have followed Tennessee’s lead. While the efforts in some states were successful, the results were less satisfying in others. The linking of performance to funding in some states made many institutions unhappy and led them to view performance measures negatively. They felt that states were dictating measures with little or no input from campuses. That led to resistance to the performance management concept. Some institutions even discounted the benefits of using performance measurement to achieve their own goals.
At the same time, national efforts to identify important measures and collect institutional data as useful tools for decision making began to emerge. These efforts included the National Center for Higher Education Management System and the federal government’s Integrated Postsecondary Education Data System. U.S. News and World Report created rankings of colleges and universities using both input and output measures. The rankings initially elicited negative reactions from colleges and universities, but they quickly realized the power of performance metrics and the public’s growing desire for tools to assess performance and quality.
A major expansion in the use of performance measurement occurred toward the end of the 20th century as part of a larger movement to reinvent business and government. A new approach to strategic management, devised in the early 1990s by Robert Kaplan and David Norton and called the balanced scorecard, focuses performance measures on achieving an organization's mission and vision. The University of California was one of the first to implement the balanced scorecard approach in higher education, and the institution used the methodology to assess its administrative areas under what it called the Partnership for Performance Initiative.
Business, government, and higher education institutions increasingly used performance measures to assess results. Special associations and consortia emerged, including the National Consortium for Continuous Improvement in Higher Education. A notable development was higher education’s inclusion in the Malcolm Baldrige National Quality Award program, which recognizes institutions for excellence in quality and efficiency. In 2001, the University of Wisconsin–Stout became the first university to win the Baldrige award. Even more significantly, several of the regional accrediting agencies now include as part of their process an assessment of an institution’s performance measurement program.
Clearly, the public sees the need for and value of assessment in higher education. Citizens are asking: What is the return on my investment? Since it is difficult for an institution to systematically report and improve what hasn’t been measured, more institutions are using performance measurement to assess their progress in achieving strategic goals.
A Project Is Born
Both the Governmental Accounting Standards Board and the Financial Accounting Standards Board (FASB) have launched long-term projects to address performance measurement. GASB issued concepts statements in 1987 and again in 1994 emphasizing public accountability and linking it to external financial reporting. At one point it seemed likely that GASB would require audited performance measures as a component of public institutions’ financial statements. Recently GASB published suggested criteria for effective performance reporting that included a direct request for input and experimentation by various branches of government as well as public higher education.
As a result, NACUBO’s Accounting Principles Council, in its ongoing efforts to inform GASB on behalf of higher education, created a project team to study performance measurement. The APC project seeks to identify what is and is not being done in performance measurement and reporting in higher education. It attempts to provide a straightforward, practical approach to starting or revising a performance reporting format. The goals of the project include:
- developing a simple, visual template for reporting meaningful performance information both internally and externally;
- developing a model useful to both public and independent institutions;
- addressing GASB-suggested criteria for performance reporting and related issues (see sidebar, “GASB Reporting Criteria”); and
- informing GASB of, and proactively representing, higher education’s performance measurement practices.
To help accomplish the project goals, the team conducted a performance measurement survey, analyzed the survey results, reviewed samples of existing performance measurement reports and prior research results, and developed a key performance indicator matrix. At various stages of the project, the team conducted focus groups at NACUBO professional development events to gain feedback essential for the development of a set of reporting tools.
Assessing the Survey Results
The Web-based survey, conducted in December 2003, went to all NACUBO member institutions. There were 262 respondents (a response rate of 12 percent) representing 129 independent and 133 public institutions. Eighty-two percent were from four-year institutions; the rest were two-year institutions. The majority of respondents were at institutions that had between 1,000 and 10,000 students. Seventeen percent were at institutions with fewer than 1,000 students; 23 percent at institutions with more than 10,000 students. The respondents were primarily involved with financial reporting or institutional planning and budgeting. The institutions they represented matched the general characteristics of the postsecondary environment in the United States.
Respondents were asked whether their institution reported performance indicators externally and/or internally. More than three-quarters answered affirmatively (201 of 262), with public institutions reporting greater use than independent institutions (see Table 1). About half of the 201 institutions reported performance measures externally and used them internally, while the other half were split almost evenly between those that did external reporting without using the measures internally and those that used measures internally without reporting them externally.
Table 1 Independent and Public Institutions With Performance Indicators
The respondents from about half of the institutions that externally report performance indicators said that their reports included or were based on data from audited financial statements. Twenty-eight percent of the institutions involved in external reporting said that a version or portion of their reports was mandated by an external entity. But only 11 percent said they included performance indicators in their annual reports or audited financial statements.
The list of performance indicator categories that respondents evaluated included four types of input measures, two types of process measures, two types of output measures, and four types of outcome measures. Enrollment statistics (an input category) was the indicator most commonly reported externally (92 percent) and used internally (100 percent) (see Tables 2 and 3). An overwhelming majority of respondents agreed that enrollment should be reported both externally (88 percent) and internally (97 percent).
Table 2 Performance Indicators Most Commonly Reported Externally
Table 3 Performance Indicators Most Commonly Used Internally
Persistence and graduation outcomes—an output indicator—was the second most commonly reported indicator externally (76 percent) and was widely used internally (88 percent). Graduation statistics, another output measure, were often reported externally (64 percent) and used internally (87 percent). Efficiency and financial ratios were reported externally by 69 percent of institutions and used internally by 81 percent.
Public institutions reported and used persistence and graduation outcomes more often than independent institutions. In contrast, independent institutions more commonly reported and used selectivity measures. The reporting was both external and internal.
Quality and outcome measures were less frequently reported, although most respondents said they should be used internally. For example, student satisfaction measures were used internally by more than 70 percent of institutions but were only included in one-third of the external reports. Studies of faculty and staff morale and of comparative salaries were reported externally by just a quarter of institutions even though more than half said they were using such measures internally.
When asked what should be reported, the indicators that were least likely to be recommended for external reporting were quality of faculty measures (17 percent) followed by alumni or employer survey results (38 percent) and student satisfaction or graduating senior survey results (46 percent). However, at least 80 percent of respondents said all categories of performance indicators should be reported internally.
A Tailored Reporting Tool
The survey and subsequent focus group research supported the development of a reporting toolkit. All types of institutions expressed interest in performance measurement reporting and, although many institutions report performance measures because of some external mandate, the format and content of the reporting is often not mandated and therefore could be flexible. There is a gap between what many institutions feel is effective reporting and their current mode of reporting. A surprising commonality of reporting goals exists between independent and public institutions, providing an opportunity to make performance reporting an integrating tool; financial reporting, conversely, has taken diverse directions on select key operational issues. Many institutions lacked meaningful metrics directly linked to measurable strategic goals and objectives. But there was a desire for an easily accessible data reporting format with content that would satisfy GASB and would be consistent with other recommended reporting criteria.
The toolkit, which is available at http://www.nacubo.org/x2840.xml, includes:
- Goal and Metric Outline—a sample of common higher education goal statements and corresponding metrics.
- Web Template—an interactive tool that takes the content from the goal and metric outline and allows for providing greater detail for metrics as described in the GASB performance measurement reporting criteria.
- Goal Appendix—a listing of goal statements extracted from institution Web sites.
- Metric Appendix—a listing of metrics in use in higher education extracted from Web sites.
The toolkit is designed to provide a format and suggested content for an internal managerial report linking key performance indicators to strategic budgeting and planning goals. Based on input received during the project, the APC team has included in the goal and metric outline sample goals in common areas of institutional performance monitoring that lend themselves to measurement.
The goals statements and performance measures included in the goal and metric outline were selected based on their conformance with a number of theoretical and practical objectives. Performance measurement metrics should 1) be focused more on outcomes and efficiencies than on straight measurement of institutional inputs and outputs; 2) be easily obtainable from common national sources to support ease of compilation and benchmarking of data; and 3) address the core missions of the institution. To help support this latter objective, the sample goals and related metrics were further categorized under a series of mission areas commonly identified across many institutions currently reporting mission and goal data.
The Web template development is rooted in the idea that a picture is worth a thousand words. The APC created a simple, visual template for reporting key performance information, drawn from theory, checklists, and actual higher education performance reports in use today that both public and independent institutions can reference for ideas. The template seeks to incorporate GASB’s criteria for performance measurement reporting, which affects both content and presentation. The format also provides information in a manner that would easily support aggregation of data for presentation to public bodies.
The project is not designed to develop a standardized performance report card for all of higher education. Although benchmarking is a valid and desirable use of performance measurement, the most important aspect of the project is linking relevant measures to an institution’s strategic goals. The Web template is not prescriptive; it is a starting place for institutions that lack formal strategic performance measurement reporting and a point of comparison for those with reporting systems already in place. Institutions are encouraged to customize the goals and metrics. To help make this possible, the template is augmented by appendices that include a “shopping list” of commonly used metrics and goals statements, categorized for ease of reference.
At the same time, the team felt that the project could eventually help lead higher education toward some degree of standardization in an approach to performance monitoring. Ultimately, that may be required under accountability initiatives launched by GASB, FASB, or other oversight bodies that impact both public and independent institutions. This project allows for being proactive in offering a methodology that resembles what many institutions are already doing. The alternative—waiting for regulatory or funding bodies to impose reporting standards and a specific format—is not desirable for higher education.
Continuing the Conversation
While performance measurement efforts in higher education are increasingly more prevalent and robust, challenges remain. For example, a standard definition for specific metrics can be lacking within a given institution. And some institutions tend to select too many measures, which can lead to focusing on data collection rather than on improvement itself. Moreover, there is often a missing link between strategic goal statements and metrics, as well as a tendency on the part of some campus leaders to set forth unrealistic or unmeasurable goal statements. When it comes to communicating performance measurement results, there can be a real reluctance to share data with stakeholders. Many institutions worry that too much emphasis will be placed on efficiency, to the detriment of effectiveness, particularly when dealing with external stakeholders such as trustees, banks, and rating agencies.
Sidebar: GASB Reporting Criteria
The APC team hopes that the availability of a common toolkit will remedy some of these issues. The toolkit draws from strategies that are working effectively for a variety of institutions operating under similar constraints and it encourages flexibility and customization. Historically, there has been disagreement over whether to include performance reporting in audited financial statements. The template design assumes that it will be separate from the financial statements; however, some or all of the performance metrics could be included as supplementary information within the financial statements. The toolkit can help to address comparability of public and independent institutions, a process complicated by different standards of financial reporting. Enhanced performance reporting, with a shared approach among institutions, will improve transparency of reporting operational data.
The toolkit is a living document. The APC hopes that it will be regularly updated with contributions drawn from the best practices of NACUBO member institutions. To share your input and comments, e-mail Kimberly Dight at [email protected].
Author Bios: Teresa Gordon is an accounting professor at the University of Idaho, Moscow; Roger Patterson is associate vice chancellor of finance at the University of North Carolina–Chapel Hill; and Jennifer Taylor is assistant vice president for business and finance at New Mexico State University, Las Cruces.
E-mail [email protected]; [email protected]; [email protected]
In computing, a shebang (also called a sha-bang, hashbang, pound-bang, hash-exclam, or hash-pling), when it occurs as the initial two characters on the initial line of a script, is the character sequence consisting of the characters number sign and exclamation mark (that is, "#!").
Under Unix-like operating systems, when a script with a shebang is run as a program, the program loader parses the rest of the script's initial line as an interpreter directive; the specified interpreter program is run instead, passing to it as an argument the path that was initially used when attempting to run the script. For example, if a script is named with the path "path/to/script", and it starts with the following line:

#!/bin/sh
then the program loader is instructed to run the program "/bin/sh" instead (usually this is the Bourne shell or a compatible shell), passing "path/to/script" as the first argument.
The shebang line is usually ignored by the interpreter because the "#" character is a comment marker in many scripting languages; some language interpreters that do not use the hash mark to begin comments (such as Scheme) still may ignore the shebang line in recognition of its purpose.
- #! interpreter [optional-arg]
The interpreter must usually be an absolute path to a program that is not itself a script. The optional‑arg should either not be included or it should be a string that is meant to be a single argument (for reasons of portability, it should not contain any whitespace).
Some typical shebang lines:
#!/bin/sh— Execute the file using sh, the Bourne shell, or a compatible shell
#!/bin/csh -f— Execute the file using csh, the C shell, or a compatible shell, and suppress the execution of the user's .cshrc file on startup
#!/usr/bin/perl -T— Execute using Perl with the option for taint checks
#!/usr/bin/php— Execute the file using the PHP command line interpreter
#!/usr/bin/python -O— Execute using Python with optimizations to code
#!/usr/bin/ruby— Execute using Ruby
Shebang lines may include specific options that are passed to the interpreter (see the Perl example above). However, implementations vary in the parsing behavior of options; for portability, only one option should be specified (if any) without any embedded whitespace.
Interpreter directives allow scripts and data files to be used as system commands, hiding the details of their implementation from users and other programs, by removing the need to prefix scripts with their interpreter on the command line.
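To make this concrete, here is a minimal sketch of such a script in Python; the file name, contents, and the chmod/invocation mentioned in the comments are hypothetical examples, not taken from the original text.

    #!/usr/bin/env python3
    # greet.py (hypothetical): after "chmod +x greet.py" it can be invoked
    # directly as "./greet.py Alice"; the loader runs python3 on this file.
    import sys

    def main():
        # sys.argv[0] holds the path used to invoke the script, i.e. the same
        # argument the program loader handed to the interpreter.
        name = sys.argv[1] if len(sys.argv) > 1 else "world"
        print("Hello, %s! (invoked as %s)" % (name, sys.argv[0]))

    if __name__ == "__main__":
        main()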
Consider a Bourne shell script that is identified by the path "some/path/to/foo" and that has the following as its initial line:
#!/bin/sh -x
If the user attempts to run this script with the following command line (specifying "bar" and "baz" as arguments):
some/path/to/foo bar baz
then the result would be similar to having actually executed the following command line instead:
/bin/sh -x some/path/to/foo bar baz
If "/bin/sh" specifies the Bourne shell, then the end result is that all of the shell commands in the file "some/path/to/foo" are executed with the positional variables
$2 set to "bar" and "baz", respectively. Also, because the initial number sign is the character used to introduce comments in the Bourne shell language (and in the languages understood by many other interpreters), the entire shebang line is ignored by the interpreter.
However, it is up to the interpreter to ignore the shebang line; thus, a script consisting of the following two lines simply echoes both lines to standard output when run:
#!/bin/cat
Hello world!
When compared to the use of global association lists between file extensions and the interpreting applications, the interpreter directive method allows users to use interpreters not known at a global system level, and without administrator rights. It also allows specific selection of interpreter, without overloading the filename extension namespace, and allows the implementation language of a script to be changed without changing its invocation syntax by other programs.
Shebangs must specify absolute paths to system executables; this can cause problems on systems that have a non-standard file system layout. Even when systems have fairly standard paths, it is quite possible for variants of the same operating system to have different locations for the desired interpreter. Python, for example, might be in /usr/bin/python, /usr/local/bin/python, or even something like /home/username/bin/python if installed by an ordinary user.
Because of this it is common to need to edit the shebang line after copying a script from one computer to another because the path that was coded into the script may not apply on a new machine, depending on the consistency in past convention of placement of the interpreter. For this reason and because POSIX does not standardize path names, POSIX does not standardize the feature.
Often, the program /usr/bin/env can be used to circumvent this limitation by introducing a level of indirection. #! is followed by /usr/bin/env, followed by the desired command without full path, as in this example:
#!/usr/bin/env sh
This mostly works because the path /usr/bin/env is commonly used for the env utility, and it invokes the first sh found in the user's $PATH, typically /bin/sh, if the user's path is correctly configured.
On a system with setuid script support this will reintroduce the race eliminated by the /dev/fd workaround described below. There are still some portability issues with OpenServer 5.0.6 and Unicos 9.0.2 which have only /bin/env and no /usr/bin/env.
Another portability problem is the interpretation of the command arguments. Some systems, including Linux, do not split up the arguments; for example, when running a script whose first line is
#!/usr/bin/env python -c
everything after the first space is treated as a single argument; that is, python -c will be passed as one argument to /usr/bin/env, rather than two arguments. Cygwin also behaves this way.
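The difference is easy to see in a short experiment; the sketch below is illustrative only, and assumes the Linux-style behavior in which everything after the first space stays together.

    # shebang_split.py (illustrative): how "#!/usr/bin/env python -c" is split.
    import shlex

    directive = "/usr/bin/env python -c"

    # Kernel-style on Linux: at most one optional argument after the interpreter.
    kernel_args = directive.split(" ", 1)      # ['/usr/bin/env', 'python -c']

    # What the script author probably expected: shell-like word splitting.
    shell_args = shlex.split(directive)        # ['/usr/bin/env', 'python', '-c']

    print("passed to env by the kernel:", kernel_args[1:])
    print("what the author expected:   ", shell_args[1:])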
Another common problem is scripts containing a carriage return character immediately after the shebang, perhaps as a result of being edited on a system that uses DOS line breaks, such as Microsoft Windows. Some systems interpret the carriage return character as part of the interpreter command, resulting in an error message.
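A few lines of Python are enough to spot and repair such a file; this is only a sketch, with the file names taken from the command line.

    # fix_cr.py (sketch): strip DOS line endings that break the shebang line.
    import sys

    for name in sys.argv[1:]:
        with open(name, "rb") as fh:
            raw = fh.read()
        first_line = raw.split(b"\n", 1)[0]
        if first_line.startswith(b"#!") and first_line.endswith(b"\r"):
            print(f"{name}: carriage return after shebang; rewriting")
            with open(name, "wb") as fh:
                fh.write(raw.replace(b"\r\n", b"\n"))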
POSIX requires that sh is a shell capable of a syntax similar to the Bourne shell, although it does not require it to be located at /bin/sh; for example, some systems such as Solaris have the POSIX-compatible shell at /usr/xpg4/bin/sh. In many Linux systems and recent releases of Mac OS X, /bin/sh is a hard or symbolic link to /bin/bash, the Bourne Again shell.
Magic number
The shebang is actually a human-readable instance of a magic number in the executable file, the magic byte string being 0x23 0x21, the two-character encoding in ASCII. This magic number is detected by the "exec" family of functions, which determine whether an image file is a script or an executable binary. The presence of the shebang will result in the execution of the specified executable, usually an interpreter for the script's language. It has been claimed that some old versions of Unix expect the normal shebang to be followed by a space and a slash ("#! /"), but this appears to be untrue.
The shebang characters are represented by the same two bytes in extended ASCII encodings, including UTF-8, which is commonly used for scripts and other text files on current Unix-like systems. However, UTF-8 files may begin with the optional byte order mark (BOM); if the "exec" function specifically detects the bytes 0x23 0x21, then the presence of the BOM (0xEF 0xBB 0xBF) before the shebang will prevent the script interpreter from being executed. Some authorities recommend against using the byte order mark in POSIX (Unix-like) scripts, for this reason and for wider interoperability and philosophical concerns. Additionally, a byte order mark is not necessary in UTF-8, as that encoding does not have endianness issues; it serves only to identify the encoding as UTF-8.
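Checking a script for this problem takes only a few more lines; another small sketch, again with file names supplied on the command line.

    # bomcheck.py (sketch): warn when a UTF-8 BOM hides the "#!" magic bytes.
    import sys

    BOM = b"\xef\xbb\xbf"

    for name in sys.argv[1:]:
        with open(name, "rb") as fh:
            head = fh.read(5)
        if head.startswith(BOM) and head[3:5] == b"#!":
            print(f"{name}: BOM precedes the shebang; the kernel will not run it as a script")
        elif head.startswith(b"#!"):
            print(f"{name}: shebang is the first thing in the file")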
Security issues
On some systems, scripts can be marked with the setuid attribute, set-user-ID, a Unix feature which means that a program is executed with the access rights of the program file's owner instead of the rights of the user running it. Although this mechanism may be safe for compiled code, the extra step introduced by the interpreter directive provides an extra window of opportunity of attack along the following lines:
- An attacker makes a symbolic link in, say, /tmp/sneaky to a system shell script with setuid enabled, say /usr/bin/admintool (a hypothetical example).
- The attacker then runs /tmp/sneaky, but pauses its execution immediately
- If the new process had already gotten as far as opening sneaky, stop and start over, otherwise:
- The new process has already set its user ID to the owner of /usr/bin/admintool, so it's probably now running as root with full system rights (if not, start over)
- The attacker now removes the symbolic link pointing to /usr/bin/admintool
- The attacker creates a new script at /tmp/sneaky but with their own illicit commands therein
- The attacker now resumes the paused process, and the shell then opens sneaky and executes the illicit command file with root access rights.
This problem has been corrected on some modern systems, namely those supporting the /dev/fd filesystem, by opening the script first, producing a file descriptor which is safe from attack, then invoking the interpreter with that safe file descriptor as input. However, the discovery of the problem led many system administrators and developers to the conclusion that scripts couldn't be made secure, a case made more compelling by issues with the shell's internal field separator (also since corrected on modern systems); as a result, setuid functionality is often made unavailable to scripts.
As a result of these issues, setuid scripts are unsafe on older Unix-like systems, which once comprised the majority of such installations. Appropriate research into the security implications of setuid scripts is therefore necessary before permitting their use.
An executable file starting with an interpreter directive is simply called a script, often prefaced with the name or general classification of the intended interpreter. The name shebang for the distinctive two characters comes from an inexact contraction of SHArp bang or haSH bang, referring to the two typical Unix names for them. Another theory on the sh in shebang is that it is from the default shell sh, usually invoked with shebang. This usage was current by December 1987, and probably earlier.
When asked about what he would call his feature (i.e. "What do you personally call that first line"), Dennis Ritchie answered:
From: "Ritchie, Dennis M (Dennis)** CTR **" <dmr@[redacted]> To: <[redacted]@talisman.org> Date: Thu, 19 Nov 2009 18:37:37 -0600 Subject: RE: What do -you- call your #!<something> line? I can't recall that we ever gave it a proper name. It was pretty late that it went in--I think that I got the idea from someone at one of the UCB conferences on Berkeley Unix; I may have been one of the first to actually install it, but it was an idea that I got from elsewhere. As for the name: probably something descriptive like "hash-bang" though this has a specifically British flavor, but in any event I don't recall particularly using a pet name for the construction. Regards, Dennis
The shebang was introduced by Dennis Ritchie between Edition 7 and 8 at Bell Laboratories. It was also added to the BSD releases from Berkeley's Computer Science Research (present at 2.8BSD and activated by default by 4.2BSD). As AT&T Bell Laboratories Edition 8 Unix, and later editions, were not released to the public, the first widely known appearance of this feature was on BSD.
The lack of an interpreter directive, but support for shell scripts, is apparent in the documentation from Version 7 Unix in 1979, which describes instead a facility of the Bourne shell where files with execute permission would be handled specially by the shell, which would (sometimes depending on initial characters in the script, such as ":" or "#") spawn a subshell which would interpret and run the commands contained in the file. In this model, scripts would only behave as other commands if called from within a Bourne shell. An attempt to directly execute such a file via the operating system's own exec() system trap would fail, preventing scripts from behaving uniformly as normal system commands.
In later versions of Unix-like systems, this inconsistency was removed. Dennis Ritchie introduced kernel support for interpreter directives in January 1980, for Version 8 Unix, with the following description:
From uucp Thu Jan 10 01:37:58 1980
>From dmr Thu Jan 10 04:25:49 1980 remote from research

The system has been changed so that if a file being executed begins with the magic characters #! , the rest of the line is understood to be the name of an interpreter for the executed file. Previously (and in fact still) the shell did much of this job; it automatically executed itself on a text file with executable mode when the text file's name was typed as a command. Putting the facility into the system gives the following benefits.

1) It makes shell scripts more like real executable files, because they can be the subject of 'exec.'
2) If you do a 'ps' while such a command is running, its real name appears instead of 'sh'. Likewise, accounting is done on the basis of the real name.
3) Shell scripts can be set-user-ID.
4) It is simpler to have alternate shells available; e.g. if you like the Berkeley csh there is no question about which shell is to interpret a file.
5) It will allow other interpreters to fit in more smoothly.

To take advantage of this wonderful opportunity, put
#! /bin/sh
at the left margin of the first line of your shell scripts. Blanks after ! are OK. Use a complete pathname (no search is done). At the moment the whole line is restricted to 16 characters but this limit will be raised.
Kernel support for interpreter directives spread to other versions of Unix, and one modern implementation can be seen in the Linux kernel source in fs/binfmt_script.c.
This mechanism allows scripts to be used in virtually any context normal compiled programs can be, including as full system programs, and even as interpreters of other scripts. As a caveat, though, some early versions of kernel support limited the length of the interpreter directive to roughly 32 characters (just 16 in its first implementation), would fail to split the interpreter name from any parameters in the directive, or had other quirks. Additionally, some modern systems allow the entire mechanism to be constrained or disabled for security purposes (for example, set-user-id support has been disabled for scripts on many systems).
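The parsing those implementations perform can be modeled in a few lines of user-space code; the sketch below follows the usual modern behavior (a length cap and a single optional argument) without claiming to match any particular kernel byte for byte, and the 127-byte cap is an assumption typical of current Linux.

    # parse_shebang.py: rough model of a loader's interpreter-directive parsing.
    MAX_DIRECTIVE = 127   # assumed cap; early kernels allowed as few as 16 bytes

    def parse_shebang(first_bytes):
        if not first_bytes.startswith(b"#!"):
            return None                                  # not a script at all
        line = first_bytes[2:2 + MAX_DIRECTIVE].split(b"\n", 1)[0].strip()
        if not line:
            return None
        parts = line.split(None, 1)                      # interpreter + ONE optional arg
        interpreter = parts[0].decode()
        optional_arg = parts[1].decode() if len(parts) > 1 else None
        return interpreter, optional_arg

    print(parse_shebang(b"#! /bin/sh\necho hi\n"))          # ('/bin/sh', None)
    print(parse_shebang(b"#!/usr/bin/perl -T\nprint;\n"))   # ('/usr/bin/perl', '-T')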
Note that, even in systems with full kernel support for the #! magic number, some scripts lacking interpreter directives (although usually still requiring execute permission) are still runnable by virtue of the legacy script handling of the Bourne shell, still present in many of its modern descendants.
See also
- Crunchbang Linux distribution, commonly referred to as "#!".
- interpreter directive
- File association
- fragment identifier in URLs
- Special Characters
- "Advanced Bash Scripting Guide". Retrieved 2012-01-19.
- "The #! magic, details about the shebang/hash-bang mechanism". Retrieved 2012-01-19.
- Cooper, Mendel (November 5, 2010). Advanced Bash Scripting Guide 5.3 Volume 1. lulu.com. p. 5. ISBN 978-1-4357-5218-4.
- MacDonald, Matthew (2011). HTML5: The Missing Manual. Sebastopol, California: O'Reilly Media. p. 373. ISBN 978-1-4493-0239-9.
- Lutz, Mark (September 2009). Learning Python (4th ed.). O'Reilly Media. p. 48. ISBN 978-0-596-15806-4.
- Lie Hetland, Magnus (October 4, 2005). Beginning Python: From Novice to Professional. Apress. p. 21. ISBN 978-1-59059-519-0.
- Schitka, John (December 24, 2002). Linux+ Guide to Linux Certification. Course Technology. p. 353. ISBN 978-0-619-13004-6.
- "execve(2) - Linux man page". Retrieved 2010-10-21.
- SRFI 22
- "Details about '#!'". In-ulm.de.
- "/usr/bin/env behaviour". Mail-index.netbsd.org. 2008-11-09. Retrieved 2010-11-18.
- "The Open Group Base Specifications Issue 7". 2008. Retrieved 2010-04-05.
- pixelbeat.org: Common shell script mistakes "It's much better to test scripts directly in a POSIX compliant shell if possible. The `bash --posix` option doesn't suffice as it still accepts some 'bashisms'"
- "32 bit shebang myth". In-ulm.de. Retrieved 2010-06-16.
- "FAQ - UTF-8, UTF-16, UTF-32 & BOM: Can a UTF-8 data stream contain the BOM character (in UTF-8 form)? If yes, then can I still assume the remaining UTF-8 bytes are in big-endian order?". Retrieved 2009-01-04.
- Markus Kuhn (2007). "UTF-8 and Unicode FAQ for Unix/Linux: What different encodings are there?". Retrieved 20 January 2009. "Adding a UTF-8 signature at the start of a file would interfere with many established conventions such as the kernel looking for “#!” at the beginning of a plaintext executable to locate the appropriate interpreter."
- "Jargon File entry for shebang". Catb.org. Retrieved 2010-06-16.
- http://www.catb.org/~esr/jargon/html/S/shebang.html The Jargon File: shebang
- "Perl didn't grok setuid scripts that had a space on the first line between the shebang and the interpreter name", USENET posting by Larry Wall
- http://www.mckusick.com/csrg CSRG Archive CD-ROMs
- "extracts from 4.0BSD /usr/src/sys/newsys/sys1.c". In-ulm.de. Retrieved 2010-06-16.
- http://cm.bell-labs.com/7thEdMan/v7vol2a.pdf UNIX TIME-SHARING SYSTEM: UNIX PROGRAMMER’S MANUAL Seventh Edition, Volume 2A, January, 1979
- http://www.in-ulm.de/~mascheck/various/shebang/sys1.c.html The '#!' magic - details about the shebang mechanism on various Unix flavours
- http://www.linuxjournal.com/article/2568 Playing with Binary Formats, January 1998 | 1 | 3 |
A thin display screen for computer and TV usage. The first flat panels appeared on laptop computers in the mid-1980s, and the LCD technology became the standard. Stand-alone LCD screens became available for desktop computers in the mid-1990s and exceeded sales of CRTs for the first time in 2003. For TV viewing, LCD and plasma are the two competing technologies, and many flat panel TVs can also display computer output (see flat panel TV).
Reflection - No Reflection - Reflection
You can see yourself in the glass of a traditional CRT-based computer monitor or TV. The same is true of a plasma TV. However, LCDs used to be non-reflective, a significant advantage in a brightly lit room. In 2003, laptop screens began to include a clear, rigid overlay that makes colors richer, but causes the screen to be reflective once again. LCD TVs, on the other hand, are generally not reflective (see flat panel TV).
Digital Computer to Digital Display
Unlike analog CRTs, flat panel screens are digital. However, although almost all new flat panel monitors accept digital inputs, many PCs continue to offer only analog outputs. Going directly to the digital input of the display creates a sharper image (see flat panel connections for details).
Know Your "Native" Resolution
Flat panel screens have a precise matrix of rows and columns based on the highest resolution supported, and this "native" resolution looks the best. If you want to view a 1280x1024 resolution on a flat panel with a native resolution of 1600x1200, the 1280x1024 image will be scaled up to fill the screen. The quality of scaling algorithms between brands can differ substantially; therefore, you are better off viewing a flat panel at its native, maximum resolution. Otherwise, before you buy, be sure to view the panel at the non-native resolution you desire and see if you like it. See DVI, LCD, plasma display, EL display and FED. See also flat screen.
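The arithmetic behind that scaling is simple; the snippet below uses the numbers from the example above and shows why the stretched image also ends up slightly distorted.

    # scaling.py: stretching a 1280x1024 image onto a 1600x1200 native panel.
    native_w, native_h = 1600, 1200
    source_w, source_h = 1280, 1024

    scale_x = native_w / source_w     # 1.25
    scale_y = native_h / source_h     # about 1.17

    print("horizontal scale: %.3f" % scale_x)
    print("vertical scale:   %.3f" % scale_y)
    if abs(scale_x - scale_y) > 1e-9:
        print("unequal scales: the aspect ratio changes unless the panel letterboxes")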
The L66 was Eizo's first 18" desktop LCD display. Sitting next to its CRT counterpart, the flat panel not only took up less space, but used less energy and emitted less radiation. It was also glare proof. Formerly selling in the U.S. under the Nanao brand, Eizo is known for its high-quality monitors. (Image courtesy of EIZO Nanao Technologies Inc.)
In 1999, SGI introduced the first high-resolution, wide screen, flat panel monitor. At 1600x1024 resolution, the 1600SW was revolutionary and ideal for displaying two documents or Web pages side by side. With this dual monitor configuration, four documents could be seen at once.
At the end of 2003, this 42" panel from LG was the top size for an LCD screen, but two years later, LCDs reached 60". At the 2007 Consumer Electronics Show (CES) in Las Vegas, Sharp proudly displayed a 108" LCD TV, eclipsing Panasonic's 103" plasma the year before.
LCD screens were always noted for their lack of glare from overhead lights and sunlight. However, around 2003, screens emerged with a rigid overlay for color enhancement. The overlay screens appear more vivid, and the non-glare LCD looks dull by comparison. But, the overlay causes reflection; witness the editor of this encyclopedia taking a photo of a laptop screen. By 2007, manufacturers found ways to tone down the glare, reverting to a more traditional matte finish, especially in desktop monitors.
Differences between the sexes regarding the prevalence, psychopathology and natural history of psychiatric disorders have become the focus of an increasingly large number of epidemiological, biological and psychological studies. A fundamental understanding of sex differences may lead to a better understanding of the underlying mechanisms of diseases, as well as their expression and risks.
Community studies have consistently demonstrated a higher prevalence of posttraumatic stress disorder (PTSD) in females than in males. Recent epidemiologic studies conducted by Davis and Breslau and summarized in this article have begun to elucidate the causes of this higher prevalence of PTSD in women.
Davis and Breslau's studies addressing this issue include Health and Adjustment in Young Adults (HAYA) (Breslau et al., 1991; 1997b; in press) and the Detroit Area Survey of Trauma (DAST) (Breslau et al., 1996).
In the HAYA study, in-home interviews were conducted in 1989 with a cohort of 1,007 randomly selected young adult members, between the ages of 21 and 30, of a 400,000-member HMO in Detroit and surrounding suburban areas. Subjects were reevaluated at three and five years post-baseline interview. The DAST is a random digit dialing telephone survey of 2,181 subjects between the ages of 18 and 45, conducted in the Detroit urban and suburban areas in 1986. Several national epidemiologic studies that report sex differences in PTSD include the NIMH-Epidemiologic Catchment Area survey (Davidson et al., 1991; Helzer et al., 1987) and the National Comorbidity Study (Bromet et al.; Kessler et al., 1995).
Epidemiologic studies, particularly those focusing on the evaluation of risk factors for illness, have a long and distinguished history in medicine. However, it is important to understand that the proposition that there are factors predisposing individuals to the risk for PTSD was controversial in the early phase of characterizing this diagnosis. Many clinicians believed that a highly traumatic stressor was sufficient for the development of PTSD and that the stressor alone "caused" the disorder. But even early studies demonstrated that not all, and often a small number of, individuals exposed to even highly traumatic events develop PTSD.
Why do some individuals develop PTSD while others do not? Clearly, factors other than exposure to adverse events must play a role in the development of the disorder. In the late 1980s, a number of investigators began to examine risk factors that might lead not only to the development of PTSD, recognizing that the identification of risk factors should lead to a better understanding of the pathogenesis of the disorder, but also to a better understanding of the commonly comorbid anxiety and depression in PTSD and, most importantly, to the development of improved treatment and prevention strategies.
Since the diagnosis of PTSD is dependent upon the presence of an adverse (traumatic) event, it is necessary to study both the risk for the occurrence of adverse events and the risk for developing the characteristic symptom profile of PTSD among exposed individuals. One fundamental question addressed by the analysis of both types of risk is whether differential rates of PTSD could be due to differential exposure to events and not necessarily to differences in the development of PTSD.
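One way to see why the two kinds of risk have to be separated is to write the overall rate as the product of the exposure probability and the conditional risk among the exposed. The figures below are invented purely for illustration; they are not estimates from these studies.

    # risk_decomposition.py: overall rate = P(exposed) * P(PTSD | exposed).
    def overall_rate(p_exposure, p_ptsd_given_exposure):
        return p_exposure * p_ptsd_given_exposure

    # Hypothetical numbers: higher exposure in men, higher conditional risk in women.
    men = overall_rate(0.60, 0.06)      # 0.036
    women = overall_rate(0.50, 0.13)    # 0.065

    print("men:   %.3f" % men)
    print("women: %.3f" % women)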
Early epidemiologic studies identified risk factors for exposure to traumatic events and subsequent risk for the development of PTSD in such exposed populations (Breslau et al., 1991). For example, alcohol and drug dependence was found to be a risk factor for exposure to adverse events (such as automobile accidents), but was not a risk factor for the development of PTSD in exposed populations. However, a prior history of depression was not a risk factor for exposure to adverse events but was a risk factor for PTSD in an exposed population.
Bernard Maybeck is one of those architects whose place in architectural history will be unjustly denied, like that of many great architects, being overshadowed by the success of more renowned architects and by the new course that architecture took during his lifetime. To many, Maybeck's creativity in architecture is thought to have reached the greatness of Frank Lloyd Wright's. This point is, I believe, subject to argument both ways, but Maybeck's name will undoubtedly be brought up any time that native architectural genius is the issue. Native Californians will be as proud to call Maybeck the grandfather of California's Bay Region Style as the people of the Midwest will be to term Frank Lloyd Wright the author of the Prairie Style.
It has been said that Wright's praise of his own work brought him international fame while Maybeck's reticent character kept his no less remarkable architecture unknown outside San Francisco; however, a highly individualistic type of architecture which almost reaches the point of eccentricity is more likely to find a very limited and special clientele than wide acceptance in different regions. Thus Maybeck's architecture grew in an almost local context, remaining virtually unknown outside its region.
Whatever the reasons were that kept Maybeck's architecture known only to a few might never be all too clear. It is only lamentable that it so happened, for I see in the present approach to architecture no room for the development of a creative genius of the kind of Maybeck.
1862 - 1957
Bernard Maybeck was born on February 7, 1862, in New York City. His father, a wood carver by trade, emigrated from Germany in 1848; his mother had decided beforehand that her first son would become an artist. She died before Maybeck reached the age of three, but his father was determined to carry out her wishes.
From childhood Bernard was guided into the first steps that would eventually develop his creative genius. He would always remember how other boys would play ball while he was forced to learn drawing.
As a child Maybeck attended public school while enrolled
in two private ones where he studied French, German and phi-
losophy, but his heart was never in any of these subjects
as he found more joy in designing intricate structures for
In short, Maybeck made it clear early in his life that
he was not cut out to be a scholar and at the age of seventeen
he was apprenticed to a wood carver that would pay him
$3 a month to learn his father's craft. Maybeck did not last
long as an apprentice, for he could not very well be depended
upon to carry out the orders of his employer, he seemed to know
more than his master.
Following his short lived apprenticeship, Bernard went to
work for his father, who was in charge of a fine furniture
shop of fifty men on lower Broadway. At this job, he kept his
father uneasy by introducing his own design into the furniture
he was supposed to be copying. This and perhaps the idea of
carrying out his wife's wishes made Bernard's father send him
off to Paris to study furniture design.
At the age of eighteen, Bernard was sent to Paris as an
apprentice in a furniture shop across the street from the
Beaux Arts. His interest in architecture soon began to flourish
as he passed every day by the old romanesque of St. Germain des
Pres and sat down to listen to the people singing, eventually
becoming aware of the emotional qualities created by the
architecture of the church.
It wasn't long before Bernard wrote to his father asking
him permission to study at the Beaux Arts, which he was given
thus opening the door to his creative abilities.
Upon entering the Ecole, Maybeck's spontaneous and creative
view of the world around him was set free and he began to
become aware of the endless source of inspiration that the
architecture of the past offered. He drew from this inspiration
of the past an approach to building that was unique even at a
time when the use of elements from the past was the unquestioned
approach to architecture.
Maybeck loved the Ecole from the beginning and never wavered
from his initial feelings, however, to the students of the
Ecole Maybeck was no more than a clown, unable to follow the
simplest of their approaches to design. Maybeck's creative
spirit was too independent and joyous for the strict system
of the Beaux Arts.
In 1886 Maybeck passed his examination from the Beaux Arts
and returned to the States, where he started to work in the office of a former roommate at school, Thomas Hastings, who had just opened an office in partnership with Carrere, and they were at the time designing the Ponce de Leon Hotel in St. Augustine.
Thus, at the age of 25 both Hastings and Maybeck were
at their first job. The result was a magnificent building that
although following some principles of "Mexican Romanesque"
architecture managed to convey the personal "stamp" of
Maybeck and Hastings. When construction started Maybeck was
assigned the job of superintendent in Florida. Maybeck's
father came down with him to do some of the wood carvings in
the hotel. The hand of Maybeck and his father's craftsmanship
turned the Ponce de Leon Hotel into the most successful
project ever to come out of the office of Carrere and Hastings.
Upon finishing the Ponce de Leon Hotel in 1888, Maybeck
joined James Russel, another former classmate, setting up a
practice in Kansas City, Russel's home town. Unable to get
any commissions, their partnership was brief.
During his stay in Kansas city, Maybeck met Mark White,
a school teacher, and his sister Annie. In I889, Maybeck and
White set out for San Francisco and a year later, Maybeck
married Annie White.
In San Francisco, Maybeck worked for a brief period of
time in the office of Ernest Coxhead, an Englishman, who
like Maybeck, had also developed a sympathetic feeling for
the finely detailed houses of the region.
Maybeck then went to work for a furniture manufacturer
designing and carving furniture, and when the Crocker Building
in San Francisco was being designed in 1891, Maybeck was hired
by A. Page Brown, the architect, as a draftsman. Here Maybeck
started to use the entwined initials A. and M. for Annie Maybeck
as a decorative detail; possibly the start of his personal touch
in architecture. Later when doing the Swedenborgian Church in
1894, his influence was more apparent.
In 1894, Maybeck took a teaching position at the University
of California in the department of drawing, teaching descriptive
geometry, a subject in which he excelled. In no time, Maybeck
had started to teach a course in architecture, which met at
his home. This was the beginning of the School of Architecture
at the University of California.
In 1896, Mrs. Phoebe Apperson Hearst made an offer to
the University of California of funds for a mining building
to be built in memory of her husband. The president of the
university accepted the generous offer and immediately called on the engineering department to produce preliminary sketches of the building within 24 hours. Failing to find any talent capable of meeting such a challenge, everyone turned to Maybeck, the only architect in the University.
Maybeck's proposed plans for the mining building were
accepted by Mrs. Hearst, but Maybeck was already one step
ahead when the time came for placing the building on campus;
foreseeing the problems that haphazardly placed buildings could
create, Maybeck proposed that a master plan be created for the
University. The master plan, he thought, should be done by
someone outside the University so as to avoid any involve-
ment in the design of unimportant details which would unavoid-
ably hinder a good general conception of the design.
The suggestion of an international competition by Maybeck
for a university plan greatly pleased Mrs. Hearst, who
immediately agreed to sponsor the project. Thus the Phoebe
Apperson Hearst Plan was soon underway, described as the most
lavishly endowed architectural competition in history.
Maybeck, of course, was the director of the competition
and with the help of the telegraph and professor Gaudet in
Paris, the program for the plan came into being. At a cost of
over $100,000 six thousand programs, including contour maps and
photographs of the site, were mailed out. The competition was
a splendid opportunity for architects all over the world, and
in no time at all, the University of California was the main
concern of all architectural offices everywhere.
The winner of the competition was Emile Benard, a french
architect who refused to leave his country to carry out the
project, forcing Maybeck to invite John Galen Howard from New
York to supervise the execution of the plan. Howard soon firmly
took over all the work to be done and Maybeck lost the commission
to the Mining Building. Howard did not regard Maybeck's work
very highly and soon after he moved from New York to California
and was appointed professor of architecture at the University; Maybeck was out of his job as a teacher as well.
Although Maybeck's career at the University of California was a rather short one, he profited very well from the very same events that led to its termination by making several important acquaintances and obtaining very significant commissions.
In 1899, Mrs. Hearst commissioned Maybeck to design
Hearst Hall, a building where Maybeck could entertain the
women students as well as the university community in general.
The result of this commission was one of Maybeck's most innova-
tive buildings as well as a masterpiece in architecture. The
building's central space was contained by a series of wood
laminated arches which rose to a height of 54' at their apex; a very daring accomplishment at the time since it was the first
time that laminated wood arches were ever used.
At the front of the building two square towers stood on
either side, exposing the arch form. The sides of the building
were flanked by an outer structure which covered the arches
up to two thirds of their height. This outer structure created
bays to the sides of the central space and its roof served
as a promenade from which one could look down into the central
dance floor. Under the dance floor there was a banquet hall as
well as the housing for the mechanical equipment.
Some years after it was built, Hearst Hall was cut up
in sections and moved into the campus. The laminated arch sections
proved to be the product of a great engineering genius when one
bay section broke loose and rolled into a ditch without suffering
the least damage.
The years of work at the university gave Maybeck not only
commissions to a few important buildings but also the opportunity for extensive domestic design. It was during these years that
Maybeck proved his extraordinary talent in architecture and
set the grounds for the most important works in his career.
In 1910, Maybeck almost 50 years of age was asked to
design the First Church of Christ Scientist. With his very
special ways of handling materials, Maybeck created here another
building with no precedent. His skillfull and imaginative way
of integrating building elements and architectural ornamentation
made the Christian Science Church a masterpiece of architecture.
After Maybeck finished the design of the Christian Science
Church, his practice started to decline, and not being as bright
in the handling of money as he was at the drawing table, Maybeck
and his family found themselves almost near bankruptcy.
In a period of 35 years, Maybeck had set up three different
offices in San Francisco and had to commute there daily from his
house in Berkeley. Needless to say, he profited little from his work. Mrs. Maybeck, who had taken over the bookkeeping several
years after their marriage took care of all the bills while
her brother, Mark White, superintended the construction of the
buildings. The 8% fee was usually spent before a job had been
completed and a raise to 10% made little difference; Maybeck
would continually make changes in a building as it was being
built, and the fee would disappear before the job was finished.
In 1912 plans for the Panama Pacific International Exposition
were made, but Maybeck had not been invited to participate
because he had never designed any large buildings, however Mrs.
Maybeck, who was not about to give up on the opportunity, sent numerous letters to the architectural committee asking that they give Bernard a job. Fortunately, the head of the directing committee was Willis Polk, a former student of Maybeck, who hired him on an hourly basis as a draftsman.
The design of the Palace of Fine Arts for the Exposition
had been given to Polk, who, being short of time to undertake the commission, asked Maybeck to give some thought to a plan. Maybeck had been assigned to coordinate the Joy Zone and, knowing the area well from previous experience, he remembered a depression in the land in which water had collected, making a lagoon. His romantic imagination soon conceived of a scheme that would make use of this splendid site, and in a quick charcoal rendering of a gallery, an elliptical colonnade and a rotunda, he froze the feeling of melancholy he thought the Palace of Fine Arts ought to have.
The rendering impressed Polk, who passed it around to the
other members of the architectural commission. Henry Bacon
of New York was also greatly impressed by Maybeck's scheme and
as a result it was adopted as the design for the Exhibition.
Maybeck, given full charge of the project by Polk carried
out the buildings in the same spirit of the charcoal rendering
and the completed building was a remarkable achievement. Maybeck,
however, continued to earn a draftsman's salary while working
on the Palace of Fine Arts; his success lay in the recognition that the building brought him over the years, not in the money he made.
The Palace of Fine Arts gave Maybeck a citation from the
American Institute of Architects and before the Exposition closed
there was a movement to save it from demolition. The people of
San Francisco succeeded in saving Maybeck's buildings, which in defiance of their non-permanent construction stood for years, slowly crumbling away, creating the feeling of ruins that Maybeck had intended.
At the end of the First World War, Willis Polk was given
a post with the city's Memorial and Monument Commitee and suggested
that the Palace of Fine Arts be rebuilt with permanent materials
as a war memorial, but nothing came of his efforts.
With time, nature took its toll on Maybeck's buildings and
by the late 50's all that remained of the Palace of Fine Arts
were some ruins along with the memories of many native San Franciscans
who came to think of the Palace of Fine Arts as part of San Francisco.
In 1958, a bond issue to rebuild the Palace was voted down, but
in 1959, a resident of San Francisco, Walter Johnson, gave $2 million in order to save the buildings. The State of California gave another $2 million and by 1977, at a total cost of $8.5 million
the Palace of Fine Arts was rebuilt with poured in place
concrete from the original drawings, stored in the University
of California's Architectural Library.
Before the Panama Pacific Exposition opened, Maybeck
received an important commissioni the layout of a town,
Brookings, Oregon, for Brookings Lumber Company. The war in
Europe had caused a spur in the lumber industry and Maybeck
went as far as designing a few buildings for the small town,
and a temporary wooden dormitory for workmen was built, how-
ever, with the end of the War the project was never completed
In 1917, as a result of being appointed as a supervising
architect to the United States Shipping Board, Maybeck was
commissioned to do the layout of a new town, Clyde, near
Port Chicago in order to handle the enormous increase in
employees from the shipyards. Maybeck went as far as designing
a hotel . for the town and about 200 houses. The hotel today
stands abandoned and deteriorating as the Palace of Fine Arts
did for many years.
The end of the War brought Maybeck several small commissions;
however, they were all significant in that they all displayed
his inventive genius in architecture. An outstanding example
of Maybeck's work during this time was the mountain cabins designed for Glen Alpine Lodge on Lake Tahoe in 1923. Massive stone buttresses framed a series of industrial steel windows, creating
an impressive contrast between the supports and the walls.
The roof was covered with corrugated steel panels which were
bent over the roof joists as to use their inherent qualities
to their full potential.
In 1923 a disastrous fire destroyed many of Maybeck's
houses in Berkeley and looking for a solution to this serious
problem Maybeck started to use concrete in the design of his houses.
In his search for better methods of construction, Maybeck
experimented with a lightweight concrete mixture that a Berkeley
man had invented by mixing chemicals with cement. Maybeck dipped
wet burlap sacks into the mixture called "Bubble Stone" and
nailed these to the framework of a building thus creating a
very inexpensive method of construction.
At the age of 65, in 1926, Maybeck met Earle C. Anthony,
a man of wealth for whom Maybeck would design some of his
most important buildings. Earle C. Anthony commissioned Maybeck
to do two Packard showrooms; one in San Francisco and another
in Oakland. In these buildings which were built around 1928
Maybeck was able to capture once again the romantic spirit
which was such an important element of the Palace of Fine Arts.
For the Anthonys, Maybeck also designed a house in 1927, a twentieth century imitation of a medieval castle. The half
million dollar building required the unlimited use of the
most expensive materials in order to create the desired
effects. At the same time, Maybeck was working on a small
library in Carmel, a building which, unlike the Anthonys'
house, forced Maybeck to make use of his ingenuity.
In 1927 Maybeck did the Hearst Memorial Gymnasium in
association with Julia Morgan and in 1929 worked with Henry
Gutterson on a Sunday school for his Christian Science Church.
The last important commission Maybeck was given was for
the layout and the buildings of the Principia College Campus
which had originally been planned in 1923 for St. Louis but
were later changed to Elsa, Illinois where construction started
in 1938. Eight buildings were completed, Maybeck had designed
them in his own Tudor Style in reinforced concrete and although
he did not personally supervise the work, the buildings showed
the unmistakable hand of Maybeck.
In 1942, when Maybeck was 80 years old, he was succeded in
his office by William Gladstone Merchant who had worked as an
assistant to Maybeck in the Palace of Fine Arts.
The Maybecks moved to Twainhart during the war years and
lived in a cedar bark cabin which Maybeck designed and built
at a cost of only $1,500. In Twainheart, Maybeck worked on
drawings of a proposed boulevard for San Francisco, he had
at one time been a member of the Berkeley City Planning Board
and his drawings of a proposed boulevard for Berkeley hung in
the office of the Planning Board for years.
After the war, Maybeck returned to Berkeley and spent the
last years of his life there where he could sit in the eucalyptus
grove by his house and look at many of the houses he had designed
years before, in the prime of his architectural practice. Maybeck
kept himself busy building models of buildings and airplanes and
talking to visitors interested in his experiences while studying
During his late years, Maybeck received honors for many of
his works. In 1951. at the age of 89, he received the Gold Medal
of the American Institute of Architects.
Bernard Maybeck died on October 3, 1957, at the age of 95.
William Gladstone Merchant, the man that succeded Maybeck at
his office still keeps his name in the directory of the lobby
of the Rust Building. He says: "Maybeck didn't like the idea of retiring, so I promised him that he would have an office for as long as I was alive."
The architecture of Bernard Maybeck is as unique
as the man himself. If there is no precedent to be found
in many of his buildings it is because they were the
product of a man who could in many ways be considered
an eccentric. Self styled clothes with trousers so high
waisted as to eliminate the need for a vest, the prac-
ticing of vegetarianism and other ideas on healthful
living as well as the use of his wife's initials in the
cornice of a San Francisco office building, are diversions
from the norm that can only be attributed to an unusual
if not unique soul.
His eccentricity, however was coupled to tremendous
ingenuity. Thus his talent in dealing with materials and
his unique perception of the world airound him served as
the vehicle that allowed him to express his spirit in
In the realm of highly personalized architecture
within the last one hundred and fifty years, Maybeck has been associated with names such as Claude-Nicolas Ledoux, Sir John Soane, William Butterfield, Antonio Gaudi, Victor Horta, Charles Rennie Mackintosh and the Philadelphian Frank Furness. However, as the term eccentric, given to this category of architects, indicates, it would be totally erroneous to try to find great likeness between the architecture of these men, and it is questionable whether they should at all be grouped
together. Their eccentric personalities produced buildings
so different in character that their similarity for the
most part can only be attributed to the fact that they
deviated from the accepted ways of approaching architec-
tural design in their respective epochs.
It is likewise inappropriate and contrary to its nature to try to find, in the architecture of many of these men, any specific consistency from one building to another. Maybeck's way of approaching a design seems to vary considerably from one building to another, and the only apparent commitment seems to have been that of creating an extremely pleasant and beautiful environment to suit the function that the building was to serve.
Sometimes, as it is the case with his Christian
Science Church, Maybeck's ability to design a building
by inventive improvisation and by eclectic borrowing, created what at first seems to be a series of contradictions in architectural philosophy:
In the Christian Science Church Maybeck makes an
unprecedented and imaginative mixture of past architec-
tural styles. Elements from the Byzantine, Romanesque,
Gothic, Renaissance, Japanese, Swiss and wooden verna-
cular find themselves as part of a harmonious composi-
tion of which metal factory windows and asbestos sheeting
were also part.
As if designed by another architect, Maybeck's cabins near Lake Tahoe display tremendous talent in executing a straightforward design. Here boulders of granite found around the site were used to build the massive
piers which act as buttresses in holding the roof load.
Factory metal windows reduce the effect of mass between
the piers to a minimum making a strong and beautiful
contrast between mass and void. The way in which the
roof is treated in these cabins shows yet another touch of creative genius: the light, corrugated metal sheets are bent over substantial framing members in such a way that their inherent structural characteristics are used to their full. The effect achieved here by the apparent lightness of the roof is one of imbalance between two structural building elements: the piers and the roof structure. However, the strange feeling of disproportion that the buildings give at first glance comes from the hindering association with the image that a building is expected to portray. At once it becomes apparent that the whole scheme is born strictly out of a functional approach and the buildings come through as masterpieces in construction and architecture.
However, Maybeck's unique and inventive approach to architecture, in which no apparent discipline guided his design, would at times show its weaknesses, as William Jordy points out in this quote from his excellent essay on Maybeck:
"Extreme originality and empiricism unguided by principle more substantial than a misty loftiness rarely accomplishes much. When it does, the achievement depends less on intuitive logic than on intuitive equilibrium among conflicting inclinations - an internal equilibrium tending by its very nature to be transitory, hence incapable of invariable achievement at a high level. Thus Maybeck's worst buildings are confused and saccharine"
To say that Maybeck had no guiding principles, on the other hand, becomes somewhat of a superficial assumption, for upon analysing his architecture in some depth, the more basic principles that guided his design come to light. His main commitment, as I stated before, seems to have been that of creating beauty in his buildings: beauty, that is, within the context of the particular building, its materials, its site, its function and also within its social and economic conditions.
His principles of design were, therefore, true principles, having very little to do with specifics; thus no rule of ornamentation or design pertaining to a particular "style" was sacred to him. He seemed in this respect to go a step further than most of his contemporaries. His great understanding of materials and his ingenuity, along with his little regard for the architectural "recipes" of the Beaux Arts, gave Maybeck the freedom to create great architecture; not always consistent, but definitely superior to that of the more conformist architects of his time.
(1) AMERICAN BUILDINGS AND THEIR ARCHITECTS by William
Jordy, Anchor Books, 1976.
(2) CALIFORNIA'S ARCHITECTURAL FRONTIER, STYLE AND
TRADITION IN THE NINETEENTH CENTURY.The Huntington
(3) FIVE CALIFORNIA ARCHITECTS by Esther Mc. Coy
Praeger Publishers, 1975'
(1) Bangs, Jean Murray. "Bernard Ralph Maybeck, Architect,
Comes Into his Own", The Architectural Record. V.103
Jan 1948, pp. 72-79.
(2) "Bernard Maybeck, A Parting Salute To A Great Romantic"
House and Home. V. 12. Dec. 1957, pp. 124-129.
(3) Besinger, Curtis. "After 50 Years, This House is Newer
Than Many Moderns". House Beautiful. V. 104. May 1962,
(4) Flamm, Roy. "Maybeck". Interiors. V. 119, Jan. 1960,
(5) Harris, Jean. "Bernard Ralph Maybeck" American
Institute of Architects, Journal. V.15 May 1951
(6) Morrow, Irving F. "The Packard Buildings at Oakland"
California Art and Architecture. V.35, Feb. 1929
(7) Nichols, Frederick D. "A Visit with Bernard Maybeck"
Society of Architectural Historians. V. 11, Oct. 1952
LIST OF SLIDES
Note: All the slides listed, except for the numbered ones, come from FIVE CALIFORNIA ARCHITECTS (see bibliography)
Hearst Hall, exterior view.
Hearst Hall,interior of main hall.
Hearst Hall, laminated wood arches.
Hearst Hall, interior of main hall.
David Boyden house, exterior view. 34784
D. Allen house, rear view. 3477
Goslinsky house, street facade. 41852
Goslinsky house, detail. 41853
Christian Science Church, street facade.
Christian Science Church, Street facade. 3478I
Christian Science Church, detail. 3894
Christian Science Church, side view.
Christian Science Church, entrance. 41886
Christian Science Church, portico entrance.
Christian Science Church, auditorium. 4l884
Christian Science Church, auditorium. 41883
Christian Science Church, truss detail. 4l88l
Christian Science Church, side aisle. 41889
(20) Christian Science Church, trellis at entrance.
(21) Christian Science Church, reader's table. 4l882
(22) Christian Science Church, Sunday school entrance. 4l888
(23) Christian Science Church, school addition. 4l885
(24) Christian Science Church, floor plan. 41909
(25) Panama-Pacific Exposition, Horticultural Hall. 25214
(26) Panama-Pacific Exposition, Rotunda, 34842
(27) Palace of Fine Arts across lagoon.
(28) Palace of Fine Arts, propylaeum. 3484l
(29) Palace of Fine Arts, rotunda. 48125
(30) A.E. Bingham'house. 41856
(31) A.E. Bingham house, floor plan. 4l855
(32) Glen Alpine Cabins, Lake Tahoe.
(33^ Residence, corinthian columns. 3898
(34) The Packard Agency, Oakland.
(35) Hearst Memorial Gymnasium.
Electrical drafting is primarily done on a computer today, with software such as EAGLE or KiCAD. This wasn't the case back when tube radios ruled the airwaves, though – schematics were drawn up by engineering draftsmen by hand. And as with any process with a human element, they didn't always get it right.
I’m working on a 1934 Philco 66. It came to me in excellent original condition with little evidence of having been service, and throughout the process, I’d been relying on the schematics to guide me in the right direction. Unfortunately, along with a laundry list of other issues, my reliance on the schematic to be “the truth” led me around in circles longer than I needed to be to resolve a power supply problem.
Below is a schematic snippet of the power supply and audio sections of the 1934 Philco 66, with the RF chain to the left of the #75 Detector/1st Amplifier tube hidden for simplicity’s sake.
In green, I’ve highlighted the path B+ (high voltage) is supposed to flow from the rectifier cathode to the plate of the first audio amplifier. It’s a very straightforward path…if the draftsman had indicated that tube was supposed to be connected to the power supply. In red, I’ve indicated a missing connection symbol. Without it, there was no power being supplied to the first tube in the audio amplifier stage and the audio signal was being killed at that point before it could make it to the final output amplifier. Using an alligator clip, I restored that connection to test, and the radio sprang to life making noise on the next power-up.
The second filter capacitor should have been connected to both B+ and to the plate path for the #75 tube, rather than just the plate path. (Incidentally, the two capacitors are both at the same potential, so under the correct connection scheme could have been replaced with a single capacitor of a larger value.)
It’s not done yet, but I’m inclined to believe the final wiring issue has been corrected, and it’s on to performance.
A local friend is building a rat rod out of 1920s-1950s parts, a custom collection that ultimately will turn into a very fast car powered by a huge V8. He found a vintage car radio to go with it, the perfect addition and gave it to me to fix up. He requested to leave the metal cabinet alone so he could paint it to match after the car’s color scheme is finalized, so don’t worry too much about the finish.
This radio, the 4-B-31 “Roamer” was built by Firestone Tire & Rubber, the same company that today makes tires interestingly enough – they used to have a bigger product line when consumer buying habits favored combination stores. It’s a six-tube radio with a broad RF amplifier stage. Most likely the radio bolted up under a pickup truck’s dash and connected in the back to the firewall.
The tubes are 6SK7GT 6SA7GT 6SK7GT 6SQ7 6V6GT 6X5GT. The radio operates off a 6V car battery. With the low voltages it’s only about 1.2W of output power so will never be that loud, but when highways were new it was a lot quieter on the road and probably sounded better.
The battery directly powers the 6.3V filaments of the tubes, and the high voltage is provided with the help of a vibrator power supply. The 6V is fed into the electromechanical device which rapidly vibrates between two contact points turning the DC into a square-wave AC which is fed through a transformer to step the voltage up, then into a conventional rectifier power supply.
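For a sense of the numbers, a back-of-the-envelope estimate of the B+ such a supply can deliver is sketched below; the turns ratio and loss factor are assumptions chosen for illustration, not specifications of this radio.

    # vibrator_bplus.py: rough B+ estimate for a 6 V vibrator supply.
    battery_v = 6.3       # a nominal automotive "6 V" system runs a bit high
    turns_ratio = 45      # assumed step-up ratio of the power transformer
    efficiency = 0.85     # assumed losses in the vibrator contacts and rectifier

    # For a square wave the RMS voltage equals the peak, so the rectified output
    # is roughly battery voltage times turns ratio, minus losses.
    b_plus = battery_v * turns_ratio * efficiency
    print("estimated B+ of roughly %.0f V" % b_plus)    # about 240 V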
One of the pins was broken on this original vibrator, so it was the first to go. I replaced it with a solid-state replacement that uses a few transistors in a multivibrator circuit to accomplish the same effect, and should never need to be replaced again. I also replaced the 6X5 with a pair of 1N4007 diodes in an octal tube base, although this isn’t shown in any photos.
The chassis was decent to work on. It had open sides which made it easier to get things in with tight tolerances. The resistors tested decently, but all caps did need to be replaced as always. Several had blown their ends off already.
This radio was of course designed to be used in a car, and that means used with a car radio antenna which is a specific length and has certain transmission line characteristics – not quite as simple as just stringing out a long-wire. It’s a standard antenna, though, so I ordered a replacement that cost something like $10 with free shipping from Crutchfield.
It arrived in interesting packaging. The box was clearly broken in half, but both halves made it to my door without actually being connected somehow.
The antenna was in the bigger section. Go UPS?
A terminal strip in the radio was broken. This was a problem because the broken terminal happens to be the positive power lead-in and it couldn’t be salvaged. Only one terminal broke, though, so I improvised, screwing a screw lead to the mounting bracket and securing as shown, then running the wire out of the case.
I reassembled it and tested with a bench power supply, which was okay for checking basic functionality. The switching power supply introduces too much hash to receive any stations, but it was good enough to do an alignment with a signal generator by injection. I then switched to a lantern battery for the final tweaks; it had a disappointing life of about 10 minutes. Clearly these radios were meant to be run off lead-acid batteries or linear power supplies only. It draws around 4A.
I also restrung the dial indicator. The dial tuning drum was still wound properly, but the dial indicator string had broken, so the pointer no longer moved. I used string that was a bit too thick, but it worked out okay and is perfectly functional. No photos of that, though; it was pretty quick. The service manual had a full dial string diagram and pointer adjustment procedure.

Unfortunately, I ran into a problem as I was reassembling everything: the volume suddenly dropped off massively, even with the control maxed out, and it wasn't coming back for anything. A check of the voltages showed that I had tens of volts on the screens of most of the tubes, where there should have been a couple hundred. I was at a loss about why this happened and finally resorted to the poke test.
The poke test is what it sounds like: poking or tapping on pretty much every part in the radio. I gave decent raps on all of the solder joints, tube pins, and tie points, and finally came to one that would make the volume cut in and out: R7, the B+ dropping resistor for the screen voltages, a 15K 1W carbon resistor. Apparently it was internally cracked or otherwise defective. I replaced it with two 30K resistors in parallel to form a 15K 2W equivalent, and also replaced a few other resistors that shared the same tie point or were looking rattier than I really like, even if they were still in spec.
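For anyone checking the math on that substitution, here is a quick Python sketch using the values from the paragraph above: two equal resistors in parallel give half the resistance, and since they share the dissipation equally, two 1W parts can together handle roughly 2W.

# Sanity check: one 15K 1W resistor replaced by two 30K 1W resistors in parallel.
def parallel_resistance(*ohms):
    return 1.0 / sum(1.0 / r for r in ohms)

r_eq = parallel_resistance(30_000, 30_000)  # 15000.0 ohms
# Equal resistors across the same voltage split the power evenly,
# so the combined rating is about the sum of the individual ratings.
combined_rating_w = 2 * 1.0
print(r_eq, combined_rating_w)  # 15000.0 2.0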
With that repair completed, the radio fired up perfectly with loud volume. This was a fun project, but power supply issues mean I don’t think I’ll take on too many of these in the future.
A little while ago, I picked up a set of seven LCD monitors in various states of not working, to repair and maybe eventually resell. The first one was quite easy – it just needed a wire reconnected internally, and it now works perfectly. I grabbed the second one, a ViewSonic VP191b. It's nothing hugely special: 19″ with two VGA inputs and a DVI input, offering resolutions up to 1280×1024, a 5:4 aspect ratio.
Nothing I’d use as a main monitor, but a decent consumer device. And it turns out this one’s a little more complicated to repair than the last few I took care of.
Taking the case off, you can see the high-voltage backlight supply, which takes 12V DC and converts it to a thousand or so volts AC to power the backlights; in the center is the logic board, and on the right is the switching power supply.
The power supply is encased in a plastic insulator to keep it from shorting to the case.
Unfortunately, this one wasn't in as good shape as the others. In addition to having a few bad capacitors, it turns out that the resonant transformer is also bad (the yellow square to the right in the photo above). If this supply lost regulation when a part failed while it was running, it could cause a nasty cascade, taking out transistors, transformer windings, anything really, and that looks like what happened here.
The capacitors used in this model are:
- 470uF 25V
- 1000uF 16V x2
- 470uF 16V
- 120uF 400V
I don't have a spare resonant transformer, and wasn't able to locate another one online…maybe the end of the road? Nope! I checked the voltage ratings on some of the logic board capacitors and they were all rated 16V. The rule of thumb for capacitors is that you overrate their voltage by a factor of about sqrt(2), or 1.414 times; working backwards, 16V divided by 1.414 is about 11.3V, so I estimated the logic board wants a 12V input. That's handy, and pretty easy to supply.
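That estimate is just the rule of thumb run backwards; here is a small Python sketch of the arithmetic, assuming the sqrt(2) derating factor mentioned above:

import math

def implied_rail_voltage(cap_rating_v):
    # Rule of thumb: caps are rated roughly sqrt(2) above their working voltage,
    # so dividing the rating by sqrt(2) hints at the rail they sit on.
    return cap_rating_v / math.sqrt(2)

print(round(implied_rail_voltage(16.0), 1))  # 11.3 -> consistent with a 12 V rail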
I need a 12V bench supply for a few other projects I have coming up, so I ordered one from eBay. This one's inexpensive and considerably bigger than I need, but it'll be useful in the future. The ViewSonic draws ~35W, and the eBay power supply can supply up to 120W. It came without connectors, so I hooked up a line cord socket that I'd scavenged out of a dead Ethernet switch. As always, when you're working with electricity, take proper safety precautions – don't touch anything while it's energized, and double-check your connections.
I’ve removed the power supply from the back so it doesn’t get in the way, then depopulated it to save the remaining good components for something else:
I ended up recovering 4 small signal transistors, a bridge rectifier, several misc. resistors and small capacitors, two choke coils, an unidentified standard transformer and an NTC thermistor.
Here’s a shot of the back with the power supply removed, ready for other connections:
For the first trial, I’ll just run jumper wires.
And let’s see…
Looks like it works! My estimate about the voltage proved correct. I mounted up a terminal strip, just like I do with a radio, and wired the new power connection to it so it can be accessed from outside the case later. I'm using a computer power Molex as the new connector, since that's what I have on hand, and preserving the standard wire coloring.
Testing out one more time before reassembly:
It joins the ranks of my other spares I’m not sure what to do with yet:
Mission accomplished. I’ll just get it a power supply of its own, a power brick this time, and it’ll be finished! You can only tell it’s been worked on by the dangling wires hanging out the bottom.
After seeing the repair I made on the Samsung LCD monitor, a friend gave me a few-years-old Westinghouse LCD TV that had quit working – it wouldn't power on anymore. It's a Westinghouse SK-19H210S, a 19″ LCD accepting VGA or HDMI at up to 1440×900 resolution (somewhat less than true 1080p), and it can also tune ATSC and NTSC television signals to receive HDTV over the air.
It's apparently well known all over the web that this model has a weak power supply. I opened it up and grabbed the power board:
Tucked away in the back is one capacitor that's visibly failed, which means it's likely that several are bad or soon will be.
New parts arrived from Mouser.com:
Using my trusty Hakko, I replaced six capacitors. Four of them showed signs of leaking from the bottom (discolored board underneath), and two seemed okay but I replaced them anyway while I was in there. I'm getting better at using the Hakko and at this kind of PCB rework in general; the entire process from start to finish only took about 15 minutes this time.
1000uF 25V x 2
Interestingly (or maybe not), these bad caps were the same brand as the bad caps from the Samsung: CapXon. Obviously those have reliability problems, or are just the cheapest they could buy.
Reassembled and powered on. On the first power-up it would come online but drop off immediately, and it was making a hissing noise; it turned out I hadn't firmly connected the backlight leads. After fixing that, I snapped everything back into place. Consumer electronics these days aren't made to be opened up, so the case doesn't quite fit back together the way I'd like around the control panel on the side, but fortunately it's not visible unless you look for it.
Another one fixed! This one took about $12 in parts, and the monitor goes for around $80 these days, so I'm halfway to getting my money's worth out of that rework station already.
My next TV repair will be somewhat more ambitious. I got a Samsung HL-P4663W, a 46″ DLP (720p) HDTV, for free from Craigslist. It needs a new bulb and some other rework, and it'll be worth a few hundred dollars after I get it sorted. I don't intend to keep this one (I already have a 46″ Samsung LCD that does full HD resolution); most likely I'll just repair it and sell it.
I found a Samsung 225BW LCD sitting on top of my apartment's dumpster and figured I'd drag it upstairs. It's a few-year-old model, but it's better than the older Dell LCD I've been using (1680×1050 versus 1440×900). A quick check showed that it would power on, sort of, but the power light flickered constantly and there was no backlight.
I popped it open, suspecting a problem in the power supply – and it turns out that was right. Several capacitors on the board were showing signs of failure. Capacitors are the main component I replace in vintage radios, but cost-cutting OEMs are known to use caps in brand-new equipment that fail after only a few years, just to save a few cents on each unit that goes out the door. In this case, the 330uF and 820uF 25V caps had failed and the logic board was no longer getting good power.
Modern electrolytic caps fail by bulging and leaking out the top and/or the bottom, so it's easy to see at a glance. The top two are bulging and leaking; the bottom ones are bulging only, which is a bit difficult to make out in the photo.
This project is one of the reasons I bought a Hakko 472D desoldering tool. It's made for reworking through-hole and point-to-point boards, and it works by melting the solder and then applying a strong vacuum through the center of the nozzle to suck it out of the way and clean the joint. It wasn't cheap, but I thought it would be important to have one of these as I do more kinds of electronics hobby work. I tested it out on an antique radio and it works perfectly on those annoying old joints.
This board is pretty easy to work on; the components are widely spaced and clearly marked.
Even though it's not bad, I'm replacing the large main filter capacitor as well, just in case. It's the same brand as the failed ones.
Here I’ve depopulated the bad components from the board and have placed the main filter back in position, with the old one above it for comparison.
The new caps are larger than the old ones – for the same ratings, a larger size capacitor is going to be a bit more durable. For example these 330uF 25V models:
I slid the components through from the top and spread the leads to hold them in position while soldering and reattaching:
Bad planning on my part meant I forgot to take a photo of the board post-repair, but it only took about 30 minutes to do the entire thing – most of which was spent figuring out how to adjust the Hakko. And for the power-up:
Success! Back to life. This LCD goes for around $150 online even today, and I've been meaning to add a second monitor to my desk anyway, so I'm about 1/3 of the way to recovering the cost of that desoldering station after the first project. One down, two to go… This project required three 330uF 25V capacitors, two 820uF 25V capacitors, and one 150uF 450V capacitor, which came out to $9.83.
Dorothea Salo's syllabus for her course Digital Tools, Trends and Debates provides many valuable links regarding technology in LIS. You may click the link provided to view the full syllabus in PDF format, or read on below for the reading lists (it's a bit of a mess below).
Unit 1: Fundamentals
Week 1 (September 6): What is technology? Managing technology and technology projects in libraries. Jobs in library technology.
Learning objectives: Technology, technology "stacks," technology "affordances." Attitudes toward technology and change. Project management tools and techniques. Technology-centered information-agency jobs. Technology in other information-agency jobs.
Weekly assignment (due 9/13): Build your personal learning network.
Trithemius, In praise of scribes (excerpts). Translated by Dorothea Salo. http://misc.yarinareth.net/trithemius.html
Click a few links in my Trithemius linkstore: http://pinboard.in/u:dsalo/t:trithemius
Wamsley, "Controlling project chaos: project management for library staff." PNLA Quarterly 73:2 (2009). http://www.pnla.org/quarterly/Winter2009/PNLA_Winter09.pdf (pp. 5-6, 27)
Leon, “Project management for humanists.” #alt-academy http://mediacommons.futureofthebook.org/alt-ac/pieces/
Lefurgy, “What skills does a digital archivist or librarian need?” http://blogs.loc.gov/digitalpreservation/2011/07/whatskills-does-a-digital-archivist-or-librarian-need/ (please read the comments also)
Wilder, “The New Library Professional.” Chronicle of Higher Education. http://chronicle.com/article/The-New-LibraryProfessional/46681/
Week 2 (September 13): The innards of computers and networks. Technology standards.
Learning objectives: Parts of a computer. Network stacks (cable, router, switch, DNS, TCP/IP, IPv4 and IPv6 addressing). What standards are for. Standards bodies (W3C, OASIS, ISO, NISO, IETF), library standards and standards work (RDA, "BibFrame," controlled vocabularies), "open standard."
Weekly assignment: Upgrade the lab.
Tyson and Crawford, “How PCs Work.” (pages 2-3, 5) http://computer.howstuffworks.com/pc2.htm
Erdman, “TCP/IP.” http://www.networkclue.com/routing/tcpip/
Mathew, “Explaining SOPA.” http://meta.ath0.com/2011/12/21/explaining-sopa/ (read this for how DNS works, DNS
Cargill, “Why standardization efforts fail.” Journal of Electronic Publishing. http://dx.doi.org/
Taylor and Williams, “RDA: Resource Description and Access.” Ariadne. http://www.ariadne.ac.uk/issue63/rda-briefingrpt (sections 1-3, 6)
Coyle, “Bibliographic Framework Transition Initiative.” http://kcoyle.blogspot.com/2011/08/bibliographic-frameworktransition.html
“About the World Wide Web Consortium (W3C).” http://www.w3.org/Consortium/
“Overview of the IETF.” http://ietf.org/overview.html
Week 3 (September 20): Technology, the law, and libraries
Learning objectives: Patriot Act. DOPA, S.49, COPA, CIPA, Do Not Track Kids Act. Terms of service agreements. CDA, filtering, E-Rate. Copyright and attempts to enforce copyright strictures on the Internet (ACTA, "three strikes" laws, SOPA, PIPA, RWA). Net neutrality.
Weekly assignment: Write a bug report.
ALA, "The USA Patriot Act and Libraries." http://www.ala.org/advocacy/advleg/federallegislation/theusapatriotact (stop at "Reauthorization History" section)
“Gagged for 6 Years…” http://www.democracynow.org/seo/2010/8/11/gagged_for_6_years_nick_merrill
“Carol Brey-Casiano tells a Patriot Act story.” http://americanlibrariesmagazine.org/print/4390
Carr, “Library Filtering Remains Controversial.” Baseline. http://www.baselinemag.com/c/a/IT-Management/LibraryFiltering-Remains-Controversial-581401/
Anderson, "Libraries dying for bandwidth." http://arstechnica.com/tech-policy/news/2009/11/libraries-dying-forbandwidthwheres-the-fiber-and-cash.ars
Marwick, "To catch a predator?" First Monday. http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2152/1966 (Abstract and introduction required; the rest is optional, but fascinating)
MacDonald, “SOPA and PIPA Infographic” http://pinboard.in/cached/c1e67492336a/
Smith, Kevin. “ACTA and the Embrace of Big Government.” http://blogs.library.duke.edu/scholcomm/2010/10/25/actaand-the-embrace-of-big-government/
Lifehacker. “An introduction to net neutrality.” http://lifehacker.com/5720407/an-introduction-to-net-neutrality-what-it-iswhat-it-means-for-you-and-what-you-can-do-about-it
Karr, Tim. “Comcast Busted.” http://www.savetheinternet.com/blog/10/11/30/comcast-busted-new-tolls-netflix-arent-allyou-should-worry-about
Unit 2: Living on the network
Week 4 (September 27): Security on the network
Learning objectives: software threats (virus, trojan, worm), malware (adware, spyware, hijackers), phishing, pharming, social engineering, denial of service attack. Spam (email, web-comment, referrer; botnets). Server and network attacks (denial-of-service attack, "man-in-the-middle" attack, cross-site-scripting attack, dictionary attack, brute-force attack), vulnerabilities and patches (zero-day exploit), firewalls, privileges and privilege-based attacks (rootkit), password guidelines. Identity management (authentication, attribution, authorization).
Weekly assignment: A reflection on personal digital security.
ObXKCD: http://xkcd.com/350/ and http://xkcd.com/936/
Plum, “User Authentication.” http://www.arl.org/bm~doc/spec267web.pdf (pp 9-13)
Granier, “SPAM and AntiSpam.” http://www.sans.edu/student-files/presentations/Spam-AntispamBattlefield.pdf (pp 1-21)
“What’s the difference between…” http://lifehacker.com/5560443/whats-the-difference-between-viruses-trojans-wormsand-other-malware
Hruska, “IRS easily baited, vulnerable to social engineering-based attacks.” Ars Technica. http://arstechnica.com/
“All About Phishing.” http://www.webopedia.com/DidYouKnow/Internet/2005/phishing.asp
Delio, “Pharming Out-Scams Phishing.” Wired. http://www.wired.com/techbiz/it/news/2005/03/66853
“Denial of Service attacks.” http://www.cert.org/tech_tips/denial_of_service.html
Piscitello, “Anatomy of a cross-site scripting attack.” http://www.watchguard.com/infocenter/editorial/135142.asp
Bradley, “Zero day exploits.” http://netsecurity.about.com/od/newsandeditorial1/a/aazeroday.htm
Baekdal, Thomas. “The usability of passwords.” http://www.baekdal.com/tips/password-security-usability
Canavan, “Information Security Policy.” http://www.sans.org/reading_room/whitepapers/policyissues/informationsecurity-policy-development-guide-large-small-companies_1331 (Sections 1-3. Skim sections 5 and 6.)
For consultation: Data Security and Compliance Terms. http://www.imperva.com/resources/glossary/glossary.html
Week 5 (October 4): Websites and their care and feeding. Mobile websites and apps.
Learning objectives: weblog, wiki, content management system, content transclusion (via RSS, Twitter, etc). Usability and user testing. Writing for the web. Common errors in library website design. Search-engine optimization. Responsive design. Smartphones, apps, web development for mobile devices, texting/SMS, mobile demographics, geolocation, privacy. QR codes.
Weekly assignment: Rewrite a library web page.
Reidsma, “Your library website stinks and it’s your fault.” http://matthew.reidsrow.com/ltc2012/ (watch the entire video)
Marty and Twidale, “Usability@90mph.” First Monday. http://www.firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/
“User Testing in the Wild: Joe’s First Computer Encounter.” http://jboriss.wordpress.com/2011/07/06/user-testing-in-thewild-joes-first-computer-encounter/ (beware the comments; some are good, some are stunningly creepy)
Fulton, "Library perspectives on Web content management systems." First Monday. http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2631/2579 (Pay attention to the politics of CMS migration.)
“About Drupal.” http://drupal.org/about
Pettit, “Beginner’s guide to responsive web design.” http://thinkvitamin.com/design/beginners-guide-to-responsive-webdesign/
Schmidt, “Writing for the Web: Save the Time of the Reader” http://www.walkingpaper.org/5225
“Library Accessibility: What You Need To Know.” http://www.ala.org/ascla/asclaprotools/accessibilitytipsheets/ (read all;
pay special attention to "Management" and "Assistive Technology")
Wikipedia, "Search engine optimization." http://en.wikipedia.org/wiki/Search_engine_optimization
Enis, “Patrons expect more mobile services.” The Digital Shift. http://www.thedigitalshift.com/2012/08/mobile/patronsexpect-more-mobile-services-handheld-librarian-conference/
Reidsma, “Libraries and the myth of mobile phone use.” http://matthew.reidsrow.com/articles/21
MIT Libraries, “Apps for Academics.” http://libguides.mit.edu/apps (click through the tabs, skim the pages)
Tynan, “Who’s tracking your cell phone?” http://www.pcworld.com/businesscenter/article/236456/
Suda, “Designing for the Mobile Web.” http://articles.sitepoint.com/article/designing-for-mobile-web
Week 6 (October 11): Information agencies and the social web
Learning objectives: Online audio/video, Twitter, Facebook, Google+, LinkedIn, chat, Wikipedia and libraries, geolocation, crowdsourcing, professional networking online, social bookmarking/citation management, tagging, folksonomy, mashups (AJAX) and widgets, APIs and protocols.
Weekly assignment: Evaluate the advocacy potential of a social-media tool for a particular information-agency type.
“Application programming interface.” http://en.wikipedia.org/wiki/Application_programming_interface
Miller, “So what’s a mashup anyway?” http://blogs.talis.com/panlibus/archives/2006/06/so_whats_a_mash.php
Lamb, “Folksonomies and Rich Serendipity.” http://www.greenchameleon.com/gc/blog_detail/
“Chat reference.” http://www.teachinglibrarian.org/oldsite/chat.htm
Hickey, “Back to school: an Evernote scavenger hunt.” http://blog.evernote.com/2012/08/16/back-to-school-an-evernotescavenger-hunt-education-series/
Potter and Woods, "Escaping the echo chamber." http://www.netvibes.com/nedpotter#The_Echo_Chamber (at minimum, click through the Prezi presentation OR read the article)
Simon, “An open letter to museums on Twitter.” http://museumtwo.blogspot.com/2008/12/open-letter-to-museums-ontwitter.html
Madrigal, “What Big Media could learn from the NYPL.” http://www.theatlantic.com/technology/archive/2011/06/whatbig-media-can-learn-from-the-new-york-public-library/240565/
Halpern, “Walking a fine line: You 2.0 vs. well, You.” http://hacklibschool.wordpress.com/2011/07/25/walking-a-fine-lineyou/
Week 7 (October 18): Teaching and learning on the network
Learning objectives: "Digital natives" and other (faux or real) technology demographics. Distance education, digital research guides, MOOCs. Teaching technology to non-users, the digital divide. Gamification, badges.
Weekly assignment: SQL Quiz 1
Coombes, “Generation Y: Are they really digital natives or more like digital refugees?” http://www.slav.schools.net.au/
“Information behaviour of the researcher of the future.” http://www.bl.uk/news/pdf/googlegen.pdf
Dworschak, “Logging Off: The Internet Generation Prefers the Real World.” http://www.spiegel.de/international/
“Keeping an electronic eye on Johnny.” http://host.madison.com/ct/news/local/education/local_schools/
“Game-Based Learning.” http://www.nmc.org/publications/horizon-report-2012-higher-ed-edition (download the PDF
and read pp. 18-21)
Look at at least two tags and at least two questions on http://libraries.stackexchange.com/ . Now look through the badges.
Poke through UW-Madison's LibGuides at http://researchguides.library.wisc.edu/ and read through the information about Library Course Pages http://www.library.wisc.edu/lcp/index.html
West and Engstrom, “Touring the Digital Divide.” http://www.librarian.net/talks/sxsw10/ (read the slides at least)
“Guidelines for distance learning library services.” http://www.ala.org/acrl/standards/guidelinesdistancelearning (Part I)
“MOOCs from Here.” http://www.insidehighered.com/blogs/confessions-community-college-dean/moocs-here
West, “On the Fly Tech Support” http://www.librarian.net/talks/iowa2009/index.html (read the slides, click some links)
Kelly and Hibner, “Thingamabobs and Doodads: why tech support IS reference.” http://www.slideshare.net/hhibner/
Grussell, "Introduction: The Database Approach." http://db.grussell.org/section002.html (NOT the rest of the page.)
Unit 3: Library-specific technology
Week 8 (October 25): The Integrated Library System and related software. N.B. Dorothea is presenting at WLA this week. Class will NOT MEET IN PERSON. Project groups are welcome to use the classroom space at normal class time to meet if they wish. Lecture video will be posted to Learn@UW.
Learning objectives: Software development models (off-the-shelf, customized, homegrown, open-source) and their pros and cons. Software selection processes. Protocols and APIs (recap). ILS modules. ILS vendors. "Resource discovery" landscape. Metasearch versus local indexing. Electronic-resource managers. Proxy servers. Link resolvers (the "appropriate copy" problem). OpenURL. The future of MARC.
Weekly assignment: SQL quiz 2
ObXKCD: http://xkcd.com/225/ and http://xkcd.com/743/
“Comparison of open source and closed source.” Wikipedia. http://en.wikipedia.org/wiki/
Askey, “Yes, we love open-source software. No, you can’t have our code.” http://journal.code4lib.org/articles/527
Lown, Sierra, and Boyer, “How users search the library from a single search box.” http://crl.acrl.org/content/early/
Coco, “Convenience and its discontents.” http://acrlog.org/2012/01/27/convenience-and-its-discontents-teaching-webscale-discovery-in-the-context-of-google/
Dempsey, “Outside-in and inside-out.” http://orweblog.oclc.org/archives/002047.html
Watters, “The search for a minimum viable record.” http://radar.oreilly.com/2011/05/minimum-viable-record.html
Rochkind, Jonathan. “article search, and catalog search.” http://bibwild.wordpress.com/2011/08/08/article-search-andcatalog-search/
Coyle, Karen. “From MARC to principled metadata.” http://kcoyle.blogspot.com/2011/05/from-marc-to-principledmetadata.html
Taylor, Mike. “Bibliographic data, part 1: MARC and its vile progeny.” http://reprog.wordpress.com/2010/09/02/
Apps and MacIntyre, “Why OpenURL?” http://www.dlib.org/dlib/may06/apps/05apps.html
Farkas, “What’s the deal, JSTOR?” http://meredith.wolfwater.com/wordpress/2010/08/24/whats-the-deal-jstor/
w3schools.com, SQL tutorials. http://www.w3schools.com/SQL/sql_syntax.asp, http://www.w3schools.com/SQL/sql_select.asp, and http://www.w3schools.com/SQL/sql_where.asp
Week 9 (November 1): Metadata and search engines.
Learning objectives: Metadata types (descriptive, administrative, structural, preservation). Common metadata standards and other XML languages in information agencies (METS, MODS, Dublin Core, TEI, EAD). What is a markup language? XML. XML well-formedness. XML validity (DTDs, schemas, validators, tag libraries and other documentation). Index, spider/crawler, TF/IDF, search engine optimization. Relevance ranking, deduplicating, and faceted browsing. Linked data and RDF.
Weekly assignment: SQL Quiz 3
Franklin, “How Internet Search Engines Work.” http://computer.howstuffworks.com/search-engine.htm (Parts 1-4)
Rochkind, Jonathan. “Information retrieval and relevance ranking for librarians.” http://bibwild.wordpress.com/
Antelman, Lynema, and Pace. “Toward a 21st Century Library Catalog.” http://eprints.rclis.org/archive/00007332/
“A Gentle Introduction to XML.” http://www.tei-c.org/release/doc/tei-p5-doc/en/html/SG.html (Through “An example
schema,” but keep going if you like.)
SAA. “What is EAD?” http://www.archivists.org/saagroups/ead/aboutead.html
Dempsey, Lorcan. “Metadata sources.” http://orweblog.oclc.org/archives/002009.html
Riley, "Seeing Standards." http://www.dlib.indiana.edu/~jenlrile/metadatamap/ (Download the poster and read the legend and definitions carefully.)
Kennedy, “Nine questions to guide you in choosing a metadata schema.” https://journals.tdl.org/jodi/article/viewArticle/
Cundiff and Trail, “Using METS and MODS…” http://www.loc.gov/standards/mods/presentations/mets-mods-morganala07/
Chapple, "Database keys." http://databases.about.com/od/specificproducts/a/keys.htm
Week 10 (November 8): Digitization and file formats
Learning objectives: Classifying and evaluating file formats. Lossy vs. lossless formats. Image formats (JPEG, TIFF, JPEG 2000, PNG, GIF). Audio and video formats (codecs, sampling rate/bitrate, WAV, AIFF, mp3, MPEG4). Planning and managing digitization projects. OCR.
Weekly assignment: SQL quiz 4
Search for some of your favorite file formats on http://wotsit.org/.
Matthews, “Digital image file types.” http://www.wfu.edu/~matthews/misc/graphics/formats/formats.html
Read through Rutgers’ opinions on archival file formats at http://rucore.libraries.rutgers.edu/collab/reference.php?
ICPSR, “Digital Preservation Tutorial,” section 3 “Obsolescence”: “File Formats and Software” and “Hardware and
Lazorchak, “Whither digital video preservation?” http://blogs.loc.gov/digitalpreservation/2011/07/whither-digitalvideo-preservation/
Pilgrim, “Video on the web.” http://diveintohtml5.org/video.html (Stop at “Encoding video with Miro converter.”)
“Creating and keeping your digital treasures: A user guide.” http://www.slwa.wa.gov.au/__data/assets/pdf_file/
“What is OCR?” http://www.webopedia.com/TERM/O/optical_character_recognition.html
“SQL Join.” http://www.quackit.com/sql/tutorial/sql_join.cfm (read ONLY about inner joins; outer joins will confuse you!)
Week 11 (November 15): Digital preservation
Learning objectives: Threats to digital data. Format migration vs. system emulation. "Preservation copy" and Google Books. Types of digital archives (institutional repository, disciplinary repository, data archive, "trusted digital repository," dark archive). LOCKSS/CLOCKSS and Portico. eScience, cyberinfrastructure, and data curation. Personal digital archiving.
Weekly assignment: Reflect on the longevity of your personal digital materials.
Rosenthal, “Requirements for digital preservation systems: a bottom-up approach.” D-Lib Magazine. http://
“Sustainable Economics for a Digital Planet.” http://brtf.sdsc.edu/biblio/BRTF_Final_Report.pdf (pp 1-16)
ICPSR, "Digital Preservation Management." http://www.dpworkshop.org/dpm-eng/eng_index.html (Introduction, sections 1, 2, 5.)
Skinner and Schultz, "Preserving Our Collections, Preserving Our Missions." http://www.metaarchive.org/sites/default/files/GDDP_Educopia.pdf (pp. 1-9)
Library of Congress. "Personal Digital Archiving Day Kit." http://www.digitalpreservation.gov/personalarchiving/padKit/index.html (download and read the PDF reference copy)
“About LOCKSS.” http://www.lockss.org/lockss/About_LOCKSS
“How CLOCKSS works.” http://www.clockss.org/clockss/How_CLOCKSS_Works
“About Portico.” http://www.portico.org/about/
Lynch, “Institutional repositories.” http://www.arl.org/resources/pubs/br/br226/br226ir.shtml
Peek through SSRN (http://ssrn.com/) and MINDS@UW (http://minds.wisconsin.edu/).
ARL, “Agenda for Developing E-Science.” http://www.arl.org/bm~doc/ARL_EScience_final.pdf (pp. 3-13)
November 22: Happy Thanksgiving!
Week 12 (November 29): Ebooks
Learning objectives: IDPF, EPub vs. PDF vs. .mobi, DRM, "first-sale," leased vs. owned information, libraries as publishers, print-on-demand. Licensing ebooks; e-reserves. Acquiring and cataloging ebooks. DMCA and its exceptions.
Weekly assignment: An emerging technology plan.
Ball, “E-books in practice: the librarian’s perspective.” http://epub.uni-regensburg.de/2047/1/Ball.pdf
“E-reader Pilot at Princeton.” http://www.princeton.edu/ereaderpilot/index.xml (read through the whole site, and at least
the summary version of the final report)
Houghton-Jan, “Imagine no restrictions: digital rights management.” http://www.libraryjournal.com/article/
Mod, “Books in the age of the iPad.” http://craigmod.com/journal/ipad_and_books/
Tenopir, "Usage and Functionality." http://www.libraryjournal.com/article/CA6718560.html
Bayley, "E-Book Buyer's Guide to Privacy." http://www.eff.org/deeplinks/2010/01/updated-and-corrected-e-book-buyersguide-privacy
Neuberger, “Who Owns Your Ebook…? Probably Not You.” http://www.pbs.org/mediashift/2010/08/who-owns-your-ebook-of-war-and-peace-probably-not-you225.html
Anderson, “Landmark study: DRM truly does make pirates of us all.” ars technica. http://arstechnica.com/tech-policy/
Yelton, “Ebooks, choices, and the soul of librarianship.” The Digital Shift. http://www.thedigitalshift.com/2012/07/
Albanese, “PW talks with Jonathan Band.” http://www.publishersweekly.com/pw/by-topic/digital/copyright/article/
w3schools.com, SQL tutorials. http://www.w3schools.com/SQL/sql_and_or.asp and http://www.w3schools.com/SQL/
Unit 4: Overarching concerns
Week 13 (December 6): Privacy
Learning objectives: Library attitudes toward privacy. Privacy and threats to privacy in networked environments. Legal threats to privacy online (CALEA, ECPA). Teaching patrons about privacy. Ebooks and privacy. Data mining and reidentification.
Weekly assignment: Privacy-policy language for a personalized library service
ObXKCD: http://xkcd.com/155/ and http://xkcd.com/522/
Take the EFF’s Know Your Rights! quiz at https://www.eff.org/pages/know-your-digital-rights-quiz
Owen, “Big e-reader is watching you.” Paid Content. http://paidcontent.org/2012/06/29/big-e-reader-is-watching-you/
Cline, “iPhone location-tracking incident boosts stock of ‘privacy by design.’” http://www.macworld.com/article/
Klinefelter, “Library Standards for Privacy: A Model for the Digital World?” http://papers.ssrn.com/sol3/papers.cfm?
“Big data is our generation’s civil rights issue, and we don’t know it.” http://solveforinteresting.com/big-data-is-ourgenerations-civil-rights-issue-and-we-dont-know-it/
“The Fundamental Limits of Privacy for Social Networks.” http://www.technologyreview.com/blog/arxiv/25146/
Onion. “Google Responds to Privacy Concerns with Unsettlingly Specific Apology.” http://www.theonion.com/articles/
Madrigal, “Why Facebook and Google’s concept of ‘real names’ is revolutionary.” http://www.theatlantic.com/
Schneier, Bruce. “Privacy Salience and Social Networking Sites.” http://www.schneier.com/blog/archives/2009/07/
Hotz, “Facebook and privacy: six years of controversy.” http://mashable.com/2010/08/25/facebook-privacy-infographic/
Harris, “FTC says yes to Facebook inclusion in background checks.” http://www.zdnet.com/blog/feeds/ftc-says-yes-tofacebook-activity-inclusion-in-background-checks/3973
Week 14 (December 13): Collecting and circulating digital materials
Learning objectives: Google Books. The impact of ebooks and other digital materials on collection development, technical services, reference, and other information-agency functions.
Salo, “Turning collection development inside out.” https://vimeo.com/20019850
Samuelson, “GBS as copyright reform.” http://www.slideshare.net/naypinya/samuelson-gbs-as-copyright-reform
Band, “GBS March Madness.” http://www.arl.org/bm~doc/gbs-march-madness-diagram-final.pdf
Grimmelmann, “Inside Judge Chin’s Opinion.” http://laboratorium.net/archive/2011/03/22/inside_judge_chins_opinion
Hellman, “What the Google Books Settlement Agreement Says about Privacy.” http://go-to-hellman.blogspot.com/
Kolowich, “Flipping the script.” Inside Higher Ed. http://www.insidehighered.com/news/2012/07/20/library-groups-seedouble-standard-authors-guilds-stand-against-hathitrust
Cairns, “Monographs don’t support the library mission.” http://personanondata.blogspot.com/2011/06/ala-speechparallel-universe-monographs.html?spref=tw
Schonfeld and Housewright, “What to Withdraw.” http://www.ithaka.org/ithaka-s-r/research/what-to-withdraw/What | 1 | 4 |
Psoriasis (IPA pronunciation: [sə'raɪ.əsɪs]) is an immune-mediated disease which affects the skin and joints. It commonly causes red scaly patches to appear on the skin. The scaly patches caused by psoriasis are often called psoriasis plaques or lesions. Psoriasis plaques are areas of excessive skin cell production and inflammation. Skin rapidly accumulates at these sites and sometimes takes a silvery-white appearance. Plaques frequently occur on the skin of the elbows and knees, but can affect any area including the scalp and genitals. Psoriasis is not contagious; it cannot be passed from person to person.
Psychological aspects of psoriasis
- Main article: Mindfulness and psoriasis
The disorder is a chronic or recurring condition which can vary in severity, from minor localised patches to complete body coverage. Fingernails and toenails are frequently affected (psoriatic nail dystrophy). Psoriasis can also cause inflammation of the joints. This is known as psoriatic arthritis. 10-15 % of people with psoriasis have psoriatic arthritis.
Several factors are thought to aggravate psoriasis. These include stress and excessive alcohol consumption. Individuals with psoriasis may also suffer from depression and loss of self-esteem. As such, quality of life is an important factor in evaluating the severity of the disease. There are many treatments available but because of its chronic recurrent nature psoriasis is a challenge to treat.
Psoriasis is probably one of the longest known illnesses of humans and simultaneously one of the most misjudged and misunderstood. Some scholars believe psoriasis to have been included among the skin conditions called tzaraat in the Bible. Tzaraat was a punishment for sin whose cure could only be found in repentance and forgiveness. In more recent times psoriasis was frequently described as a variety of leprosy. It became known as Willan's lepra in the late 18th century when English dermatologists Robert Willan and Thomas Bateman differentiated it from other skin diseases and provided the first rational nomenclature based on the appearance of lesions. Willan identified two categories: leprosa graecorum and psora leprosa.
While it may have been visually, and later semantically, confused with leprosy it was not until 1841 that the condition was finally given the name psoriasis by the Viennese dermatologist Ferdinand von Hebra. The name is derived from the Greek word psora which means to itch.
It was during the 20th century that psoriasis was further differentiated into specific types.
Types of psoriasis
Plaque psoriasis (psoriasis vulgaris) (L40.0) is the most common form of psoriasis. It affects 80 to 90% of people with psoriasis. Plaque psoriasis typically appears as raised areas of inflamed skin covered with silvery white scaly skin. These areas are called plaques.
Flexural psoriasis (inverse psoriasis) (L40.83-4) appears as smooth inflamed patches of skin. It occurs in skin folds, particularly around the genitals, the armpits, and under the breasts. It is aggravated by friction and sweat, and is vulnerable to fungal infections.
Guttate psoriasis (L40.4) is characterized by numerous small oval (teardrop-shaped) spots. These numerous spots of psoriasis appear over large areas of the body, such as the trunk, limbs, and scalp. Guttate psoriasis is associated with streptococcal throat infection.
Pustular psoriasis (L40.1-3, L40.82) appears as raised bumps that are filled with non-infectious pus (pustules). The skin under and surrounding pustules is red and tender. Pustular psoriasis can be localised, commonly to the hands and feet (palmoplantar pustulosis), or generalised with widespread patches occurring randomly on any part of the body.
Nail psoriasis (L40.86) produces a variety of changes in the appearance of finger and toe nails. These changes include discolouring under the nail plate, pitting of the nails, lines going across the nails, thickening of the skin under the nail, and the loosening (onycholysis) and crumbling of the nail.
Psoriatic arthritis (L40.5) involves joint and connective tissue inflammation. Psoriatic arthritis can affect any joint but is most common in the joints of the fingers and toes. This can result in a sausage-shaped swelling of the fingers and toes known as dactylitis. Psoriatic arthritis can also affect the hips, knees and spine (spondylitis). About 10-15% of people who have psoriasis also have psoriatic arthritis.
Erythrodermic psoriasis (L40.85) involves the widespread inflammation and exfoliation of the skin over most of the body surface. It may be accompanied by severe itching, swelling and pain. It is often the result of an exacerbation of unstable plaque psoriasis, particularly following the abrupt withdrawal of systemic treatment. This form of psoriasis can be fatal, as the extreme inflammation and exfoliation disrupt the body's ability to regulate temperature and for the skin to perform barrier functions.
A diagnosis of psoriasis is usually based on the appearance of the skin. There are no special blood tests or diagnostic procedures for psoriasis. Sometimes a skin biopsy, or scraping, may be needed to rule out other disorders and to confirm the diagnosis.
Severity
Psoriasis is usually graded as mild (affecting less than 3% of the body), moderate (affecting 3-10% of the body) or severe. Several other scales exist for measuring the severity of psoriasis. These scales are generally based on the following factors: the proportion of body surface area affected; disease activity (degree of plaque redness, thickness and scaling); response to previous therapies; and the impact of the disease on the person.
The Psoriasis Area Severity Index (PASI) is the most widely used measurement tool for psoriasis. PASI combines the assessment of the severity of lesions and the area affected into a single score in the range 0 (no disease) to 72 (maximal disease).
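As a rough illustration of how that 0-to-72 range arises, here is a minimal Python sketch of the usual published PASI calculation: four body regions weighted by their share of body surface, three severity signs each scored 0-4, and an area score of 0-6 per region. The variable names are mine, and this is only a sketch, not a clinical tool.

# Minimal sketch of the standard PASI calculation (illustrative only).
REGION_WEIGHTS = {"head": 0.1, "arms": 0.2, "trunk": 0.3, "legs": 0.4}

def pasi(scores):
    # scores[region] = (erythema, induration, desquamation, area_score)
    # severity signs are each scored 0-4; area_score is 0-6
    total = 0.0
    for region, weight in REGION_WEIGHTS.items():
        erythema, induration, desquamation, area = scores[region]
        total += weight * (erythema + induration + desquamation) * area
    return total

# Maximal disease: every sign at 4 and every area score at 6 gives 12 * 6 * 1.0 = 72.
worst = {region: (4, 4, 4, 6) for region in REGION_WEIGHTS}
print(pasi(worst))  # 72.0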
Effect on the quality of life
Psoriasis has been shown to affect health-related quality of life to an extent similar to the effects of other chronic diseases such as depression, myocardial infarction, hypertension, congestive heart failure or type 2 diabetes. Depending on the severity and location of outbreaks, individuals may experience significant physical discomfort and some disability. Itching and pain can interfere with basic functions, such as self-care, walking, and sleep. Plaques on hands and feet can prevent individuals from working at certain occupations, playing some sports, and caring for family members or a home. The frequency of medical care is costly and can interfere with an employment or school schedule.
Individuals with psoriasis may also feel self-conscious about their appearance and have a poor self-image that stems from fear of public rejection and psychosexual concerns. Psychological distress can lead to significant depression and social isolation.
The prevalence of psoriasis in Western populations is estimated to be around 2-3%. A survey conducted by the National Psoriasis Foundation (a US based psoriasis education and advocacy group, which is partly funded by pharmaceutical companies) found a prevalence of 2.1% among adult Americans. The study also found that 35% of people with psoriasis could be classified as having moderate to severe psoriasis.
Around one-third of people with psoriasis report a family history of the disease, and researchers have identified genetic loci associated with the condition. Studies of monozygotic twins suggest a 70% chance of a twin developing psoriasis if the other twin has psoriasis. The concordance is around 20% for dizygotic twins. These findings suggest both a genetic predisposition and an environmental response in developing psoriasis.
Onset before age 40 usually indicates a greater genetic susceptibility and a more severe or recurrent course of psoriasis.
The cause of psoriasis is not fully understood. There are two main theories about the process that occurs in the development of the disease. The first considers psoriasis as primarily a disorder of excessive growth and reproduction of skin cells. The problem is simply seen as a fault of the epidermis and its keratinocytes. An alternate viewpoint sees the disease as being an immune-mediated disorder in which the excessive reproduction of skin cells is secondary to factors produced by the immune system. It is thought that T cells (which normally help protect the body against infection) become active, migrate to the dermis and trigger the release of cytokines (tumor necrosis factor-alpha TNFα, in particular) which cause inflammation and the rapid production of skin cells. It is not known what initiates the activation of the T cells.
The immune-mediated model of psoriasis has been supported by the observation that immunosuppressant medications can clear psoriasis plaques. However, the role of the immune system is not fully understood, and it has recently been reported that an animal model of psoriasis can be triggered in mice lacking T cells. Animal models, however, reveal only a few aspects resembling human psoriasis.
Psoriasis is a fairly idiosyncratic disease. The majority of people's experience of psoriasis is one in which it may worsen or improve for no apparent reason. Studies of the factors associated with psoriasis tend to be based on small (usually hospital-based) samples of individuals. These studies tend to suffer from issues of representativeness and an inability to tease out causal associations in the face of other (possibly unknown) intervening factors. Conflicting findings are often reported. Nevertheless, the first outbreak is sometimes reported following stress (physical and mental), skin injury, and streptococcal infection. Conditions that have been reported as accompanying a worsening of the disease include infections, stress, and changes in season and climate. Certain medicines, including lithium salts and beta blockers, have been reported to trigger or aggravate the disease. Excessive alcohol consumption, smoking and obesity may exacerbate psoriasis or make the management of the condition difficult.
There can be substantial variation between individuals in the effectiveness of specific psoriasis treatments. Because of this, dermatologists often use a trial-and-error approach to finding the most appropriate treatment for their patient. The decision to employ a particular treatment is based on the type of psoriasis, its location, extent and severity. The patient’s age, gender, quality of life, comorbidities, and attitude toward risks associated with the treatment are also taken into consideration.
Medications with the least potential for adverse reactions are preferentially employed. If the treatment goal is not achieved then therapies with greater potential toxicity may be used. Medications with significant toxicity are reserved for severe unresponsive psoriasis. This is called the psoriasis treatment ladder. As a first step, medicated ointments or creams are applied to the skin. This is called topical treatment. If topical treatment fails to achieve the desired goal then the next step would be to expose the skin to ultraviolet (UV) radiation. This type of treatment is called phototherapy. The third step involves the use of medications which are ingested orally or by injection. This approach is called systemic treatment.
Over time, psoriasis can become resistant to a specific therapy. Treatments may be periodically changed to prevent resistance developing and to reduce the chance of adverse reactions occurring. This is called treatment rotation.
Bath solutions and moisturizers help sooth affected skin and reduce the dryness which accompanies the build-up of skin on psoriasis plaques. Medicated creams and ointments applied directly onto psoriasis plaques can help reduce inflammation, remove built-up scale, reduce skin turn over, and clear affected skin of plaques. Ointment and creams containing coal tar, dithranol (anthralin), corticosteroids, vitamin D3 analogues (for example, calcipotriol), and retinoids are routinely used. The mechanism of action of each is probably different but they all help to normalise skin cell production and reduce inflammation.
The disadvantages of topical agents are variously that they can often irritate normal skin, can be awkward to apply, cannot be used for long periods, can stain clothing or have a strong odour. As a result, it is sometimes difficult for people to maintain the regular application of these medications. Abrupt withdrawal of some topical agents, particularly corticosteroids, can cause an aggressive recurrence of the condition. This is known as a rebound of the condition. Topical lotions and creams that contain fragrances should be avoided as they will sting when applied.
Some topical agents are used in conjunction with other therapies, especially phototherapy.
It has long been recognised that daily, short, nonburning exposure to sunlight helped to clear or improve psoriasis. Niels Finsen was the first physician to investigate the therapeutic effects of sunlight scientifically and to use sunlight in clinical practice. This became known as phototherapy.
Sunlight contains many different wavelengths of light. It was during the early part of the 20th century that it was recognised that for psoriasis the therapeutic property of sunlight was due to the wavelengths classified as ultraviolet (UV) light.
Ultraviolet wavelengths are subdivided into UVA (380–315 nm), UVB (315–280 nm), and UVC (< 280 nm). Ultraviolet B (UVB) (315–280 nm) is absorbed by the epidermis and has a beneficial effect on psoriasis. Narrowband UVB (311 to 312 nm), is that part of the UVB spectrum that is most helpful for psoriasis. Exposure to UVB several times per week, over several weeks can help people attain a remission from psoriasis.
Ultraviolet light treatment is frequently combined with topical (coal tar, calcipotriol) or systemic treatment (retinoids) as there is a synergy in their combination. The Ingram regime, involves UVB and the application of anthralin paste. The Goeckerman regime, combines coal tar ointment with UVB.
Psoralen and ultraviolet A phototherapy (PUVA) combines the oral or topical administration of psoralen with exposure to ultraviolet A (UVA) light. Precisely how PUVA works is not known. The mechanism of action probably involves activation of psoralen by UVA light which inhibits the abnormally rapid production of the cells in psoriatic skin. There are multiple mechanisms of action associated with PUVA, including effects on the skin immune system.
Dark glasses must be worn during PUVA treatment because there is a risk of cataracts developing from exposure to sunlight. PUVA is associated with nausea, headache, fatigue, burning, and itching. Long-term treatment is associated with squamous-cell and melanoma skin cancers.
Psoriasis which is resistant to topical treatment and phototherapy is treated by medications that are taken internally by pill or injection. This is called systemic treatment. Patients undergoing systemic treatment are required to have regular blood and liver function tests because of the toxicity of the medication. Pregnancy must be avoided for the majority of these treatments. Most people experience a recurrence of psoriasis after systemic treatment is discontinued.
The three main traditional systemic treatments are the immunosuppressant drugs methotrexate and ciclosporin, and retinoids, which are synthetic forms of vitamin A. Other additional drugs, not specifically licensed for psoriasis, have been found to be effective. These include the antimetabolite tioguanine, the cytotoxic agent hydroxyurea, sulfasalazine, the immunosuppressants mycophenolate mofetil, azathioprine and oral tacrolimus. These have all been used effectively to treat psoriasis when other treatments have failed. Although not licensed in many other countries, fumaric acid esters have also been used to treat severe psoriasis in Germany for over 20 years.
Biologics are manufactured proteins that interrupt the immune process involved in psoriasis. Unlike generalised immunosuppressant therapies such as methotrexate, biologics focus on specific aspects of the immune function leading to psoriasis. These drugs are relatively new, and their long-term impact on immune function is unknown. They are very expensive and only suitable for very few patients with psoriasis.
- Antibiotics are not indicated in routine treatment of psoriasis. However, antibiotics may be employed when an infection, such as that caused by the bacteria Streptococcus, triggers an outbreak of psoriasis, as in certain cases of guttate psoriasis.
- Climatotherapy involves the notion that some diseases can be successfully treated by living in particular climate. Several psoriasis clinics are located throughout the world based on this idea. The Dead Sea is one of the most popular locations for this type of treatment.
- In Turkey, doctor fish which live in the outdoor pools of spas, are encouraged to feed on the psoriatic skin of people with psoriasis. The fish only consume the affected areas of the skin. The outdoor location of the spa may also have a beneficial effect. This treatment can provide temporary relief of symptoms. A revisit to the spas every few months is often required.
- Some people subscribe to the view that psoriasis can be effectively managed through a healthy lifestyle. This view is based on anecdote, and has not been subjected to formal scientific evaluation. Nevertheless, some people report that minimizing stress and consuming a healthy diet, combined with rest, sunshine and swimming in saltwater keep lesions to a minimum. This type of "lifestyle" treatment is suggested as a long-term management strategy, rather than an initial treatment of severe psoriasis.
- Some psoriasis patients use herbology as a holistic approach that aims to treat the underlying causes of psoriasis.
- A psychological symptom management programme has been reported as being a helpful adjunct to traditional therapies in the management of psoriasis.
- It is possible that Epsom salt may have a positive effect in reducing the effects of psoriasis.
The history of psoriasis is littered with treatments of dubious effectiveness and high toxicity. These treatments received brief popularity at particular time periods or within certain geographical regions. The application of cat faeces to red lesions on the skin, for example, was one of the earliest topical treatments employed in ancient Egypt. Onions, sea salt and urine, goose oil and semen, wasp droppings in sycamore milk, and soup made from vipers have all been reported as being ancient treatments.
In the more recent past Fowler's solution, which contains a poisonous and carcinogenic arsenic compound, was used by dermatologists as a treatment for psoriasis during the 18th and 19th centuries. Grenz Rays (also called ultrasoft X-rays or Bucky rays) was a popular treatment of psoriasis during the middle of the 20th century. This type of therapy was superseded by ultraviolet therapy.
All these treatments have fallen out of favour.
Future drug development
Historically, agents used to treat psoriasis were discovered by experimentation or by accident. In contrast, current novel therapeutic agents are designed from a better understanding of the immune processes involved in psoriasis and by the specific targeting of molecular mediators. Examples can be seen in the use of biologics which target T cells and TNF inhibitors. Future innovation should see the creation of additional drugs that refine the targeting of immune-mediators further.
Psoriasis is a chronic lifelong condition. There is currently no cure, but various treatments can help to control the symptoms. Many of the most effective agents used to treat severe psoriasis carry an increased risk of significant morbidity including skin cancers, lymphoma and liver disease. However, the majority of people's experience of psoriasis is that of minor localised patches, particularly on the elbows and knees, which can be treated with topical medication. Psoriasis can get worse over time, but it is not possible to predict who will go on to develop extensive psoriasis or those in whom the disease may appear to vanish. Individuals will often experience flares and remissions throughout their life. Controlling the signs and symptoms typically requires lifelong therapy.
"The heartbreak of psoriasis"Edit
The phrase "the heartbreak of psoriasis" is often used both seriously and ironically to describe the emotional impact of the disease. It can be found in various advertisements for topical and other treatments; conversely, it has been used to mock the tendency of advertisers to exaggerate (or even fabricate) aspects of a malady for financial gain. (In Bloom County, the character of Opus once considered the possibility of his suffering from "the heartbreak of nose hemorrhoids.") While many products today use the phrase in their advertising, it originated in a 1960s advertising campaign for Tegrin, a coal tar-based medicated soap.
Notable people who have had psoriasis
Jason Donovan, Ben Elton, Benjamin Franklin, Mark Gastineau, Abimael Guzmán, Stan Jones, Shawn Lane, Gordon Lish, Jerry Mathers, Vladimir Nabokov, Dennis Potter, Eli Roth, Joseph Stalin, Kenneth Winston Starr, August Strindberg, John Updike.
- "Application to dermatology of International Classification of Disease (ICD-10) - ICD sorted by code: L40.000 - L41.000", The International League of Dermatological Societies
- "Benchmark survey on psoriasis and psoriatic arthritis - summary of top-line results", National Psoriasis Foundation
- "The efficacy of a psychological symptom management programme for the treatment of psoriasis", The Department of Health Research Findings electronic Register (ReFeR)
Some of the information on this page was taken from the following public-domain resource:
- "Questions and Answers about Psoriasis", National Institute of Arthritis and Musculoskeletal and Skin Diseases
For descriptions of psoriasis and psoriasis treatments:
- Luba KM, Stulberg DL. (2006). Chronic plaque psoriasis. American Family Physician 73 (4): 636-44. PMID 16506705.
- Lebwohl M, Ting PT, Koo JYM. (2005). Psoriasis treatment: traditional therapy. Ann Rheum Dis. 64 (Suppl 2): ii83-6. PMID 15708945.
- "The heartbreak of psoriasis", Signals Magazine 2001 - the online magazine of biotechnology industry analysis
- "About biologics.", National Psoriasis Foundation
For descriptions of immune processes involved in psoriasis:
- Griffiths CE, Voorhees JJ. (1996). Psoriasis, T cells and autoimmunity. J R Soc Med. 89 (6): 315-9. PMID 8758188.
- Hunziker T, Schmidli J. (1993). Psoriasis, an autoimmune disease? Ther Umsch 50 (2): 110-3. PMID 8456414.
- Shai A, Vardy D, Zvulunov A. (2002). Psoriasis, biblical afflictions and patients' dignity. Harefuah 141 (5): 479-82, 496. PMID 12073533.
- Glickman FS. (1986). Lepra, psora, psoriasis. J Am Acad Dermatol 14 (5 Pt 1): 863-6. PMID 3519699.
- Krueger G, Ellis C. (2005). Psoriasis - recent advances in understanding its pathogenesis and treatment. J Am Acad Dermatol 53 (1 Suppl 1): S94-100. PMID 15968269.
- Zenz R, Eferl R, Kenner L, Florin L, Hummerich L, Mehic D, Scheuch H, Angel P, Tschachler E, Wagner E. (2005). Psoriasis-like skin disease and arthritis caused by inducible epidermal deletion of Jun proteins. Nature 437 (7057): 369-75. PMID 16163348.
- Lofholm PW. (2000). The psoriasis treatment ladder: a clinical overview for pharmacists. US Pharm 25 (5): 26-47.
- Nickoloff BJ, Nestle FO. (2004). Recent insights into the immunopathogenesis of psoriasis provide new therapeutic opportunities. J Clin Invest 113: 1664-1675. PMID 15199399.
- White PJ, Atley LM, Wraight CJ. (2004). Antisense oligonucleotide treatments for psoriasis. Expert Opin Biol Ther 4 (1): 75-81. PMID 14680470.
This page uses Creative Commons Licensed content from Wikipedia.
The word "Hyborian" is a transliterated contraction by Howard of the Ancient Greek "hyperborean", referring to a "barbaric dweller beyond the boreas (north wind)." Howard stated that the geographical setting of the Hyborian Age is that of our Earth, but in a fictional version of a period in the past, c. Upper Paleolithic (40,000 to 10,000 B.C.).
The reasons behind the invention of the Hyborian Age were perhaps commercial: Howard had an intense love for history and historical dramas; however, at the same time, he recognized the difficulties and the time-consuming research needed in maintaining historical accuracy. By conceiving a timeless setting – a vanished age – and by carefully choosing names that resembled our history, Howard avoided the problem of historical anachronisms and the need for lengthy exposition.
Although he is not represented in Howard's library, nor alluded to in his papers and correspondence, there is a strong likelihood that Howard's conception of the Hyborian Age originated in Thomas Bulfinch's The Outline of Mythology (1913), acting as a catalyst that enabled Howard to "coalesce into a coherent whole his literary aspirations and the strong physical, autobiographical elements underlying the creation of Conan." Howard's Hyborian Age is also related to Clark Ashton Smith's Hyperborean cycle.
In Howard's artificial legendarium, the Hyborian Age is chronologically situated between two other eras: "The Pre-Cataclysmic Age" of Kull (c. Upper Paleolithic 20,000 B.C.) and the onslaught of the Picts (c. 9,500 B.C.). According to "The Phoenix on the Sword", the adventures of Conan take place "...Between the years when the oceans drank Atlantis and the gleaming cities, and the years of the rise of the Sons of Aryas..."
Fictional history
Cataclysmic ancestors
Howard explained the origins and history of Aquilonia and its people in his essay The Hyborian Age. The civilizations of Thuria, Lemuria, and Atlantis, mentioned in his series about Kull and Thulsa Doom, all fell to a cataclysm only a few centuries after the reign of the Valusian King.
According to the essay, at the time of this cataclysm, a group of primitive humans were at a technological level hardly above the Neanderthal. They fled to the northern areas of what was left of the Thurian continent to escape the destruction. They discovered the areas to be safe but covered with snow and already inhabited by a race of carnivorous apes. The apes were large with white fur and apparently native to their land. The Stone Age invaders engaged in a territorial war with them and eventually managed to drive them off, past the Arctic Circle. Believing their enemies fated to perish and no longer interested in them, the recently arrived group adapted to their new, harsh environment and its population started to increase.
Hyborian ancestors
One thousand five hundred years later, the descendants of this initial group were called "Hyborians". They were named after their highest-ranking deity, Bori. (Howard apparently based this god on Búri, the first god in Norse mythology, father of Borr and through him grandfather of Odin, Vili, and Ve.) The essay mentions that Bori had actually been a great tribal chief of their past who had undergone deification. Their oral tradition remembered him as their leader during their initial migration to the north, though the antiquity of this man had been exaggerated.
By this point, the various related but independent Hyborian tribes had spread throughout the northern regions of their area of the world. Some of them were already migrating south at a "leisurely" pace in search of new areas in which to settle. The Hyborians had yet to encounter other cultural groups, but engaged in wars against each other. Howard describes them as a powerful and warlike race with the average individual being tall, tawny-haired, and gray-eyed. Culturally, they were already accomplished artists and poets. Most of the tribes still relied on hunting for their nourishment. Their southern offshoots, however, had been practicing animal husbandry on cattle for a number of centuries.
The only exception to their long isolation from other cultural groups came due to the actions of a lone adventurer, unnamed in the essay. He had traveled past the Arctic Circle and returned with news that their old adversaries, the apes, were not in fact annihilated. They had instead evolved into apemen, and according to his description were by then numerous. He believed they were quickly evolving to human status and would pose a threat to the Hyborians in the future. He attempted to recruit a significant military force to campaign against them, but most Hyborians were not convinced by his tales and at last only a small group of foolhardy youths followed his campaign. None of them returned.
Beginnings of the Hyborian Age
With the population of the Hyborian tribes continuing to increase, the need for new lands also increased. The Hyborians started expanding outside their familiar territories, beginning a new age of wanderings and conquests. For five hundred years, the Hyborians spread towards the south and the west of their nameless continent.
They encountered other tribal groups for the first time in millennia. They conquered many smaller clans of various origins. The survivors of the defeated clans merged with their conquerors, passing on their racial traits to new generations of Hyborians. The mixed-blooded Hyborian tribes were in turn forced to defend their new territories from pure-blooded Hyborian tribes which followed the same paths of migration. Often, the new invaders would wipe away the defenders before absorbing them, resulting in a tangled web of Hyborian tribes and nations with varying ancestral elements within their bloodlines.
The first organized Hyborian kingdom to emerge was Hyperborea. The tribe that established it entered their Neolithic age by learning to erect buildings in stone, largely for fortification. These nomads lived in tents made out of the hides of horses, but soon abandoned them in favor of their first crude but durable stone houses. They permanently settled in fortified settlements and developed cyclopean masonry to further fortify their defensive walls.
The Hyperboreans were by then the most advanced of the Hyborian tribes and set out to expand their kingdom by attacking their backwards neighbors. Tribes who defended their territories lost them and were forced to migrate elsewhere. Others fled the path of Hyperborean expansion before ever engaging them in war. Meanwhile, the "apemen" of the Arctic Circle emerged as a new race of light-haired and tall humans. They started their own migration to the south, displacing the northernmost of the Hyborian tribes.
Rulers of the West
For the next thousand years, the warlike Hyborian nations advanced to become the rulers of the western areas of the nameless continent. They encountered the Picts and forced them to limit themselves to the western wastelands, which would come to be known as the "Pictish Wilderness". Following the example of their Hyperborean cousins, other Hyborians started to settle down and create their own kingdoms.
The southernmost of the early ones was Koth, which was established north of the lands of Shem and soon started extending its cultural influence over the southern shepherds. Just south of the Pictish Wilderness was the fertile valley known as "Zing". The wandering Hyborian tribe which conquered them found other people already settled there. They included a nameless farming nation related to the people of the Shem and a warlike Pictish tribe who had previously conquered them. They established the kingdom of Zingara and absorbed the defeated elements into their tribe. Hyborians, Picts, and the unnamed kin of the Shemites would merge into a nation calling themselves Zingarans.
Meanwhile, in the north of the continent, the fair-haired invaders from the Arctic Circle had grown in numbers and power. They continued their expansion south, in turn displacing defeated Hyborians to the south. Even Hyperborea was conquered by one of these barbarian tribes, but the conquerors decided to maintain the kingdom under its old name, merged with the defeated Hyperboreans, and adopted elements of Hyborian culture. The continuing wars and migrations would keep the other areas of the continent in flux for another five hundred years.
The world
The Hyborian Age was devised by author Robert E. Howard as the post-Atlantean setting of his Conan the Cimmerian stories, designed to fit in with Howard's previous and lesser known tales of Kull, which were set in the Thurian Age at the time of Atlantis. The name "Hyborian" is a contraction of the Greek concept of the land of "Hyperborea", literally "Beyond the North Wind". This was a mythical place far to the north that was not cold and where things did not age.
Howard's Hyborian epoch, described in his essay The Hyborian Age (most recently republished by Del Rey in The Coming of Conan the Cimmerian in 2003), is a mythical time before any civilization known to anthropologists. Its setting is Europe and North Africa (with occasional references to Asia and other continents; e.g. Mayapan, representing the American continent) – with some geological changes.
On a map Howard drew conceptualizing the Hyborian Age, his vision of the Mediterranean Sea is also dry. The Nile, which he renamed the River Styx, takes a westward turn at right angles just beyond the Nile Delta, plowing through the mountains so as to be able to reach the Straits of Gibraltar. Although his Black Sea is also dry, his Caspian Sea, which he renames the Vilayet Sea, extends northward to reach the Arctic Ocean, so as to provide a barrier to encapsulate the settings of his stories. Not only are his Baltic Sea and English Channel dry, but most of the North Sea and a vast region to the west, easily including Ireland, are, too. Meanwhile, the west coast of Africa on his map lies beneath the sea. There are also a few islands, reminiscent of the Azores.
In his fantasy setting of the Hyborian Age, Howard created imaginary kingdoms to which he gave names from a variety of mythological and historical sources. Khitai is his version of China, lying far to the east, Corinthia is his name for a Hellenistic civilization, a name derived from the city of Corinth and reminiscent of the imperial fiefdom of Carinthia in the Middle Ages. Howard imagines the Hyborian Picts to occupy a large area to the northwest. The probable intended correspondences are listed below; notice that the correspondences are sometimes very generalized, and are portrayed by ahistorical stereotypes. Most of these correspondences are drawn from "Hyborian Names", an appendix to Conan the Swordsman by L. Sprague de Camp and Lin Carter.
|Kingdom, Region, or Ethnic Group||Correspondence(s)|
|Acheron||A fallen kingdom corresponding to the Roman Empire. Its territory covered Aquilonia, Nemedia, and Argos. In Greek mythology, Acheron was one of the four rivers of Hades (cf. "Stygia").|
|Afghulistan||Afghanistan. Afghulistan (sometimes "Ghulistan") is the common name of the habitat of different tribes in the Himelian Mountains. The name itself is a mixture of the historical names of Gulistan and Afghanistan.|
|Alkmeenon||Delphi. Its name derives from the Alcmaeonidae, who funded the construction of the Temple of Apollo in Delphi, from which the oracle operated.|
|Amazon||Mentioned in Robert E. Howard's Hyborian Age essay, the kingdom of the Amazons refers to various legends of Greek Amazons, or more specifically to the Dahomey Amazons. In classical legend, Amazonia was a nation of warrior women in Asia Minor and North Africa. The legend may be based upon the Sarmatians, a nomadic Iranian tribe of the Kuban, whose women were required to slay an enemy before they might marry.|
|Aquilonia||A cross between the Roman Empire and Carolingian Empire. The name is borrowed from Aquilonia, a city of Southern Italy, between modern Venosa and Benevento; it is also an ancient name of Quimper and resembles that of Aquitaine, a French region ruled by England for a long portion of the Middle Ages. The name is derived from Latin aquilo(n–), "north wind".|
|Argos||Various seafaring traders of the Mediterranean. The name comes from the Argo, ship of the Argonauts; or perhaps from the city of Argos, Peloponnesos, reputedly the oldest city in Greece, situated at the head of the Gulf of Argolis near modern Nafplion. Also, hints of Italy in regards to the indigenous population's appearance, names and culture. Howard labels the populace of his Argos as "Argosseans", whereas the folk of the historical Argos are known as "Argives". In Hyborian Age cartography, Argos takes on the shape of a "shoe" in its border boundaries as compared to Italy appearing as a "boot". The coastal city of Massantia derives its name from Massalia, the name given to Marseilles by its Greek founders.|
|Asgard||Dark Age Scandinavia. (Ásgard is the home of the Æsir in Norse mythology).|
|Barachan Islands||The Caribbean Islands. The pirate town of Tortage takes its name from Tortuga.|
|Border Kingdoms||German Baltic Sea coast. A lawless place full of savages, Conan once traveled through the Border Kingdom on his way to Nemedia. He befriended Mar the Piper and the King of the Border Kingdoms. He helped save the kingdom before returning to his quest to reach Nemedia.|
|Bossonian Marches (Aquilonia)||Wales, with an overlay of colonial-era North America. Possibly from Bossiney, a former parliamentary borough in Cornwall, South West England, which included Tintagel Castle, connected with the Matter of Britain.|
|Brythunia||The continental homelands of the Angles and Saxons who invaded Great Britain, which is the origin of the name, though the civilization depicted is similar to that of medieval Poland, Lithuania, and Latvia. Semantically, the name Brythunia is from the Welsh Brython, "Briton", derived from the same root as the Latin Brito, Britannia. Though Howard stated that the name was kept by the Æsir and Nemedians that settled there.|
|Cimmeria||While Howard wrote that there was a continental shift after Conan's time, some scholars have held that the Gaelic regions that would become the British Isles were the geographical location of Cimmeria, though it can also be argued from Howard's map of the Hyborian Age that Cimmeria occupies part of North America, shared with the Picts. Howard states in The Hyborian Age that "the Gaels, ancestors of the Irish and Highland Scots, descended from pure-blooded Cimmerian clans." The name is based on that of Cimmeria, which was once hypothesized to be the homeland of the Celtic Cymric tribe, due to the word's similarity to the names of Celtic areas such as Cymru (the Welsh word for Wales), Cumbria, etc. Conan, a Cimmerian, has an Irish name, as do the Cimmerian gods Crom, Lir, and Manannán mac Lir (gods of the sea; the latter two mentioned in Xuthal of the Dusk).|
|Conajohara (Aquilonia)||Perhaps the name is based on the Conestoga wagons used by American settlers; the name's ending may come from Guadalajara or similar place names occurring in North America.|
|Corinthia||Ancient Greece. From Corinth (Korinthos), a rich city in Classical Greece. Possibly suggested to Howard by the Epistles to the Corinthians, or by the region of Carinthia.|
|Darfar||Howard derived this name from the region of Darfur, Sudan, in north-central Africa. Darfur is an Arabic language name meaning "abode (dar) of the Fur", the dominant people of the area. In changing the name to Darfar, Howard unwittingly changed the Arabic meaning to "the abode of mice". The original Darfur is now the westernmost part of the Republic of the Sudan.|
|Gunderland (Aquilonia)||Switzerland. Probably derived from Gelderland, a province in the Netherlands; perhaps Germany or ancient Burgundy. Probably from Gunther (Gundicar), King of Burgundy, or Gunderic, King of the Vandals.|
|Hyperborea||Finland, Russia, and the Baltic countries. Hyperborea was a land in the "outermost north" according to the Greek historian Herodotus. Howard's Hyperborea is described as the first Hyborian kingdom, "which had its beginning in a crude fortress of boulders heaped to repel tribal attack".|
|Hyrkania||Mongolia, Hyrcania. In classical geography, a region southeast of the Caspian Sea or Hyrcanian Sea corresponding to the Iranian provinces of Golestan, Mazandaran and Gilan. The name is Greek for the Old Persian Varkana, one of the Achaemenid Empire satrapies, and survives in the name of the river Gorgan. The original meaning may have been "wolf land". In Iranian legend, Hyrcania was remarkable for its wizards, demons, wolves, spirits, witches and vampires.|
|Iranistan||An eastern land corresponding to modern Iran. Historically, the name of the country is derived from Iran + the Persian -istan, -estan, "country".|
|Kambuja/Kambulja||The original name of Cambodia, now Kampuchea.|
|Keshan||The name comes from the "Kesh", the Egyptian name for Nubia.|
|Khauran||The name perhaps derives from the Hauran region of Syria.|
|Khitai||China. The name is derived from the English word "Cathay" and Marco Polo's Cathay (kăthā'). In Russian and other eastern European languages China is called Khitai. Khitai is an ancient empire which is always at war with Kambuja to the south. The people of Khitai are yellow-skinned and of medium height. Khitai is ruled by a God-Emperor whose decisions are greatly influenced by the Scarlet Circle, a clan of some of the most powerful mage lords in all of Hyboria. Khitan laws flow from the overlord of the city-state. The culture of Khitai is similar to that of ancient China. The most prominent feature of Khitai is its Great Wall (similar to the Great Wall of China), which protects it from foreign invasions from the north. The cities of Khitai are Ruo-Chen, Shu-Chen, Shaulum, and the capital Paikang, which contains the Jade Citadel, from where the God-Emperor rules over all of Khitai.|
|Khoraja||Constantinople and the Etruscans, and possibly the associated Principality of Antioch, County of Edessa, and County of Tripoli, collectively known as Outremer. The name itself was inspired by Sax Rohmer's references to the fictional city of Khorassa in the novel The Mask of Fu Manchu.|
|Kosala||From the ancient Indo-Aryan kingdom of Kosala, corresponding roughly in area with the region of Oudh.|
|Kozaki||Semi-barbaric steppe-dwelling raiders analogous to the Cossacks.|
|Koth||From the ancient Hittites (the name Koth may come from the fact that the Hittites are called in the Bible the children of Heth, and the Egyptians called their land Kheta); The Kothian capital of Khorshemish corresponds to the Hittite capital of Carchemish. Perhaps from The Sign of Koth in The Dream-Quest of Unknown Kadath by H. P. Lovecraft. There is a town of Koth in Gujarat, India, but the connection is doubtful. Howard also used the same name in his interplanetary novel Almuric.|
|Kusan||Probably from the Kushan Empire.|
|Kush||From the kingdom of Kush, Nubia, North Africa.|
|Meru||Tibet. In Hindu mythology, Meru is the sacred mountain upon which the gods dwell. NOTE: Meru is not an original Hyborian Age country and was created by L. Sprague de Camp and Lin Carter for "The City of Skulls".|
|Nemedia||A cross between Rome and Byzantium. Nemedia was the rival of Aquilonia (which corresponds to The Carolingians), and depended on Aesir mercenaries for their defence (as the Byzantine Empire hired Vikings as the Varangian Guard). The name comes from Nemed, leader of colonists from Scythia to Ireland in Irish mythology; perhaps the name is also meant to allude to Nemea, home to the Nemean Lion of Greek mythology. The name may also be suggestive of various names for Germany in Slavic languages, e.g. Czech Německo.|
|Ophir||Ancient Ophir, a gold-mining region in the Old Testament, possibly on the shores of the Red Sea or Arabian Sea (e.g. western Arabia), though clearly Howard saw it as situated somewhere in Italy.|
|Pathenia||Greenland. The name comes from the Greek word Parthenia meaning "virgin" or "untouched", since Pathenia is a forbidden country and its landscape has largely remained untouched by any human activity. It contains the dreaded snow apes and Yahlgan, the sacred city of Erlik, the flame-god. NOTE: Pathenia is not an original Hyborian Age country.|
|Pelishtim (tribe)||Philistines (P'lishtim in Hebrew). The Pelishti city of Asgalun derives its name from Ashkelon. The Pelishti god Pteor or Baal-Pteor derives its name from the Moabite Baal-Peor.|
|Pictish Wilderness||Pre-Columbian America, with an overlay of North America during the European colonization of the Americas, possibly even colonial-era New York. Howard bestows names from Iroquoian languages on many, though not all, of his Picts (see also: Bran Mak Morn). Note that the name "Pict" comes from the Latin term for "painted one", which could be applicable to a number of the Indigenous peoples of the Americas. The historical Picts were a confederation of Celtic tribes in central and northern Scotland which bordered Roman Britain.|
|Poitain (Aquilonia)||A combination of Poitou and Aquitaine, two regions in southwestern France. From the 10th to the mid-12th century, the counts of Poitou were also the dukes of Aquitaine.|
|Punt||The Land of Punt on the Horn of Africa. A place with which the ancient Egyptians traded, probably Somaliland.|
|Shem||Mesopotamia, Syria, Palestine, and Arabia. In the Bible, Shem is Noah's eldest son, the ancestor of the Hebrews, Arabs and Assyrians; hence, the modern "Semite" and Semitic languages (via Greek Sem), used properly to designate the family of languages spoken by these peoples.|
|Stygia||Egypt. The name comes from Styx, a river of the Greek underworld in Greek mythology. In earlier times the territory of Stygia included Shem, Ophir, Corinthia, and part of Koth. Stygia is ruled by a theocracy of sorcerer-kings. The people are dark-skinned. Most of the common people are descendants of the various races across the world. They worship the serpent god Set. Stygia's terrain is a mix of mountains, desert, plains, and marshes. The Styx river flows through Stygia into the sea.|
|Turan||Persian name for Turkestan. A Turkish land, possibly referring to the Gokturk Empire, the Timurid Empire, or the Seljuk Empire. The name derives from Turan, the areas of Eurasia occupied by speakers of Ural–Altaic languages. The names of the various Turanian cities (e.g. Aghrapur, Sultanapur, Shahpur) are often in Persian language. King Yezdigerd is named after Yazdegerd III, ruler of the Sassanid Empire. The name of King Yildiz means star in the Turkish language. The city of Khawarizm takes its name from Khwarezm, and Khorusun from Khorasan.|
|Uttara Kuru||From the medieval Uttara Kuru Kingdom in northern and central Pakistan.|
|Vanaheim||Dark Age Scandinavia. (Vanaheim is the home of the Vanir in Norse mythology)|
|Vendhya||India (The Vindhya Range is a range of hills in central India). The name means "rent" or "ragged", i.e. having many passes.|
|Wadai (tribe)||The Ouaddai Empire.|
|Wazuli (tribe)||The Waziri tribe in northwest Pakistan.|
|Zamora||The Romani people. The name comes from the city of Zamora, Zamora province, Castile-León, Spain, alluding to the Gitanos of Spain (see Zingara for discussion); or possibly it is based on the word "Roma". There may also be some reference to southern Italy, as Zamorans dance the tarantella. Also hints of ancient Israel and Palestine.|
|Zembabwei||The Munhumutapa Empire. The name comes from Great Zimbabwe, a ruined fortified town in Rhodesia, first built around the 11th century and used as the capital of the Munhumutapa Empire. Oddly, this is the same root as the modern name for the Republic of Zimbabwe.|
|Zingara||Spain/Portugal. Iberian Peninsula as a whole. Zingara is also Italian for "Gipsy woman"; this may mean that Howard mixed up the source names of Zingara and Zamora, with Zingara originally meant to apply to the Roma kingdom, and Zamora to the Spanish kingdom.|
|Zuagir (tribe)||The name is perhaps derived from a combination of Tuareg and Uyghur.|
|Other Geographic Features|
|Amir Jehun Pass||Takes its name from a combination of the Amu Darya river and the Gihon river (Jayhoun in Arabic), which has been identified by some with the Amu Darya. Perhaps corresponds to the Broghol Pass, which is near the headwaters of the Amu Darya in Wakhan.|
|The Himelian Mountains||Take their name from the Himalayas but correspond more closely to the Hindu Kush or Karakoram ranges.|
|The Karpash Mountains||The Carpathian Mountains.|
|The Poitanian Mountains||The Pyrenees.|
|The River Styx||The Nile.|
|The River Alimane||The Alamana River (present-day Spercheios) in Greece.|
|Vilayet Sea||The Caspian Sea. The name comes from vilayet, the term for administrative regions in the Ottoman Empire.|
|Zhaibar Pass||The Khyber Pass which has been the traditional borderline between Afghanistan and Pakistan.|
|Zaporoska River||The Dnieper river and/or the Don and/or the Volga. The river's name was probably influenced by the Zaporizhian Sich, a settlement of the Ukrainian Cossacks in the Zaporizhzhia region. It was situated on the Dnieper river, below the Dnieper rapids (porohy), hence the name, translated as "territory beyond the rapids".|
See also
- Harold Lamb, The March of the Barbarians; 1940, Country Life Press, ASIN: B000GQ81MM.
- Robert E. Howard's Hyborian Age essay adapted by Roy Thomas and Walt Simonson.
- Patrice Louinet. Hyborian Genesis: Part 1, page 434, The Coming of Conan the Cimmerian; 2003, Del Rey.
- Howard, Robert E., "The Phoenix on the Sword", The Coming of Conan the Cimmerian (2003).
- De Camp, L. Sprague, Carter, Lin, and Nyberg, Björn (1978). "Hyborian Names". Appendix to Conan the Swordsman. Toronto: Bantam Books. ISBN 0-553-20582-X.
- Shadows in Zamboula
- Howard, Robert E., "The Hyborian Age", The Coming of Conan the Cimmerian (2003).
- de Camp, L. Sprague, Carter, Lin, and Nyberg, Björn (1978). "Hyborian Names". Appendix to Conan the Swordsman. Toronto: Bantam Books. ISBN 0-553-20582-X.
Facing the Command Problem
The relationships of command within the Generalissimo's China Theater had not been thoroughly explored by the President and the War Department in concert since China Theater had been set up in January 1942, when the United States feared China might make a separate peace. What attention had been given to the command situation since then had been in the nature of specific responses to specific pressures from the Chinese or Chennault. The lack of harmony between the President and the War Department had not permitted continuing attention and close supervision. Therefore, no agency of the U.S. Government ever inquired as to why the Chinese had not been willing to set up an Allied staff for China Theater, as they had pledged themselves to do in 1942, or, of course, sought to hold the Chinese to their promise. The issue of whether the Chinese would let Stilwell command any Chinese troops in China had been dropped by the Chinese as soon as he arrived in Chungking. The Soong-Stimson accord of January 1942, and the Generalissimo's reply to the inquiry of John J. McCloy, then Assistant Secretary of War, had implied such an intent on the part of the Chinese, but the U.S. Government had never pursued the matter.1
The impending Japanese offensive, threatening the Chinese Government with defeat, revived the command question. The Generalissimo's China Theater was an Allied theater, for two American air forces operated in it. Had all gone well in China Theater, probably the command situation would have stayed as it had for two years, with the question of Stilwell's exact powers and duties in that theater undefined.
If the Generalissimo could hold east China, there would be no one to question his conduct of affairs. In 1937-38, when China's armies lost the Yangtze valley, the sea ports, and the key centers of north China, the loss could be ascribed to various causes beyond Chinese control, and since no American forces were involved, the U.S. Government could not concern itself with the quality of Chinese leadership. The events of 1944 followed on two years in which one group of American officers had predicted them, and threatened to affect the American effort in the Pacific. Moreover, they contrasted with the
unbroken chain of successes in north Burma, where Chinese troops under Stilwell's command had defeated some of the best units in the Japanese service.
Stilwell's Mission Laid Aside
At the Cairo Conference, Stilwell had sought for a directive from the President on China policy, but had received none. After Cairo, the President in effect took the conduct of American military relations with China in his own hands, but on an improvised, ad hoc basis with no attempt to keep Stilwell informed of the President's goals. Then came the 2 May 1944 JCS directive, with its order that Stilwell stockpile supplies in China to support Pacific operations, at a time when Hump tonnage could not even support existing U.S. activities in China.2 From SEAC, the AXIOM Mission had visited Washington to urge Mountbatten's views. Moreover, on 1 May 1944, as noted above in Chapter VI, Stilwell had told his deputy theater commander, General Sultan, that he could not carry out his mission of opening a land line of communications to China unless CBI Theater was reinforced by U.S. combat troops.
As he was accustomed to do, Stilwell on 24 May turned to Marshall for guidance, reviewing his missions as he saw them and asking Marshall to correct him where he was wrong. Stilwell saw his duties with the British as being to co-operate generally in furthering the war against Japan. As for the Chinese, Stilwell said he understood:
My mission vis a vis the Chinese is to increase the combat efficiency of the Chinese Army. The basic plan is to equip and train a first group of thirty divisions, followed by a second group of thirty. To get this mission accomplished I have never had any means of exerting pressure. I am continuing to work on the problem as I have from the beginning,--by personal acquaintance and influence, by argument and demonstration. This is a slow process, so slow as to require evaluation from the point of view of time available, and possible results to be obtained.
Commenting on his relations with the British, Stilwell revealed how the irritations and fatigues of the campaign in north Burma, then at its height, were pressing on him, for he judged the British with extreme harshness. He doubted that the help they were giving in the war against Japan was worth the American logistical support currently being extended to SEAC. He rated the RAF in India as far from impressive, and the Indian Army as being even less so. Unless there was a wholesale shake-up in the British command in India, Stilwell saw no chance of an effective attack on Burma from India in the fall of 1944. In his opinion, "The British simply do not want to fight in Burma or reopen communications with China."
Turning to affairs in China, Stilwell revealed by his comments that he wanted the President to apply the quid pro quo approach not only to the question
of whether the Chinese should join in operations in Burma, as he had, but also to the problems of making the Chinese Army a more effective fighting force. Stilwell had well-nigh ended his personal share in this activity, but it was, as this radio showed, still close to his heart:
CKS will squeeze out of us everything he can get to make us pay for the privilege of getting at Japan through China. He will do nothing to help unless forced into it. No matter how much we may blame any of the Chinese government agencies for obstruction, the ultimate responsibility rests squarely on the shoulders of the G-Mo. If he is what he claims to be, he must accept the responsibility. In spite of delays, evasions, broken promises and double crossing, we have accomplished something. By fall we can have five fairly dependable divisions, with corps troops, in the CAI [Chinese Army in India]. We have partially trained and equipped twenty-six divisions in Yunnan. Some of these are now getting tested in combat, and we shall soon know how much good we have done them. We have conducted schools at Ramgarh, Kunming, and Kweilin that have made a great impression on a large number of Chinese officers of all grades. We have U.S. instructors in all divisions of the Y-Force, and we are ready to start a similar system in the second group of thirty divisions. This foundation could produce a big improvement in China's ground forces, if we could deliver the necessary equipment and get the sincere cooperation of the G-Mo. Up to date we have not had it, nor will we get it except through pressure.
So with the Chinese the choice seems to be to get realistic and insist on a quid pro quo, or else restrict our effort in China to maintaining what American aviation we can. The latter course allows CKS to welsh on his agreements. It also lays the ultimate burden of fighting the Jap Army on the U.S.A. I contend that ultimately the Jap Army must be fought on the mainland of Asia. If you do not believe this, and think that Japan can be defeated by other means, then the proper course may well be to cut our effort here to the A.T.C. and the maintenance of whatever air force you consider suitable in China. If on the other hand you think it worth while for me to continue on my original mission of increasing the combat effectiveness of the Chinese Army, that is still feasible, but it can be accomplished only if and when we get on a realistic basis with the G-Mo, or whatever passes for authority in China.
As to present and future possibilities, as I see them, the maximum that I can reasonably hope to accomplish with the present British and Chinese high command working under their present policies, and with the American resources now available to me, is to hold the Myitkyina area as an air base, with supply by road, air, and pipe-line. To insure the reopening of communications with China, I still need an American corps and more engineers.
I request your decision. Is my mission changed, or shall I go ahead as before?3
Marshall's reply on 27 May made it clear that Stilwell was primarily a U.S. theater commander, made his mission to China for the present definitely subordinate to supporting U.S. operations in the Pacific, and stated that the United States wished to avoid a major effort in Asia. Marshall did not mention the crisis in China, nor the post of chief of staff to the Generalissimo in the latter's role of Supreme Commander, China Theater.
Your mission with respect to the British as stated in your radio DTG 240240Z May twenty-fourth is correct. Your mission with respect to the Chinese as stated by you is your primary mission and has the President's approval. Decisions taken at QUADRANT and SEXTANT Conferences especially those contained in CCS 319/5, CCS 417, and CCS 397, set up
requirements for your accomplishment which for the time being interfere with your primary mission.4 Decision has been made for example that operations in China and Southeast Asia should be conducted in support of the main operation in the Central and Southwest Pacific.
Japan should be defeated without undertaking a major campaign against her on the mainland of Asia if her defeat can be accomplished in this manner. Subsequent operations against the Japanese ground army in Asia should then be in the nature of a mopping up operation.
Timely support for Pacific operations requires that priority be given during the next several months to a buildup of our air effort in China.
The heavy requirements for our operations against Germany and for our main effort in the Pacific, preclude our making available to you the American corps you request to assist you in the reopening of ground communications with China. We are forced therefore to give first priority to increasing the Hump lift.
Accordingly the U.S. Chiefs of Staff are about to propose to the British Chiefs of Staff that Mountbatten's directive be changed. . . .
Our view is that your paramount mission in the China Theater for the immediate future is to conduct such military operations as will most effectively support the main effort directed against the enemy by forces in the Pacific. In order to facilitate timely accomplishment of this mission, for the present you should devote your principal effort to support of the Hump lift and its security, and the increase in its capacity with the view to development of maximum effectiveness of the Fourteenth Air Force consistent with minimum requirements for support of all other activities in China. In pressing the advantages against the enemy you should be prepared to exploit the development of overland communications to China.5
Stilwell Called to China
As May melted into the hot, damp June of east China summer, Japanese actions made it unmistakably clear that ICHIGO was not another foray, but a major effort. Chennault and the Generalissimo grew steadily more alarmed. But even as Stilwell moved to place what resources he had in China behind the Fourteenth Air Force, he sought to end the long-standing differences between himself and Chennault by asking Marshall to relieve the Fourteenth Air Force commander. Concluding that Chennault's action in submitting his "air estimate" to the Generalissimo after Stilwell had specifically told him not to was insubordination calculated to embarrass Stilwell in his relations with the Generalissimo, he asked Marshall to relieve the air commander of all responsibility for the Fourteenth Air Force's operations and relegate him to training and leading the Chinese Air Force.6
General Marshall was traveling when Stilwell's communications arrived, and Lt. Gen. Joseph T. McNarney, Deputy Chief of Staff, answered for him.
Because of the current situation in China and because of the political aspects of the case, the War Department thought it best not to take action. McNarney pointed out that if Chennault was removed and central China then lost to the Japanese, responsibility for that loss would of course be placed on Stilwell. As McNarney wrote this, the British and American forces under Gen. Dwight D. Eisenhower were battling to make good their foothold in Normandy after successfully crossing the Channel. Conceding the force of McNarney's advice, Stilwell asked him to return the papers and forget the incident, for "with the performance going on in the main tent you can't be bothered with side shows."7
Just after Stilwell asked Marshall to relieve Chennault, the latter in a most urgent radio followed by a clarifying letter warned that the Japanese were again on the move, this time south from Hankow. This offensive was the second or TOGO phase of Operation ICHIGO. Describing the situation as one of the "utmost gravity" Chennault predicted the Japanese would take their objectives in east China unless the Chinese were powerfully assisted. He believed that given a minimum of 10,000 tons a month for operations in north China, east China, and from Kunming, the Fourteenth Air Force could stop the Japanese. Chennault asked again that he be allowed to draw on the MATTERHORN stocks at Cheng-tu, and that he have almost all the ATC tonnage entering China.8 With Chennault's radio Stilwell received word that the Generalissimo wanted him to come to Chungking as soon as possible.9 Stilwell felt the critical situation around Mogaung and Myitkyina, where the fighting was growing steadily heavier, would not permit him to leave for a week or ten days.10
Chennault's warnings were immediately reinforced by others from the Chinese and from Stilwell's own staff which confirmed the impressions given by Chennault. On 31 May, as it began to seem Myitkyina would not soon fall, Ho Ying-chin, the Minister of War and Chief of Staff of the Chinese Army, called in General Ferris and Maj. Gen. Adrian Carton de Wiart (SEAC's liaison with the Chinese). General Ho believed that a recently concluded Russo-Japanese fishing pact had secret annexes, permitting the Japanese to withdraw troops from Manchuria. Ho feared that the Russians, the Chinese Communists, and the Japanese were working together. He warned that the Japanese manpower situation might permit the raising of thirty-five new divisions. General Ho believed the Japanese held the great arc of their Pacific perimeter from Burma through the Bonin Islands with forty-three divisions, and that they planned to fight the decisive naval battle for mastery of the Pacific somewhere along the line Kuril Islands-Hokkaido-Bonin Islands-Formosa.
General Ho entertained the liveliest fears in regard to China, for he told Ferris and Carton de Wiart that the Japanese were trying to drive China out of the war. To support this view Ho offered detailed information on Japanese troop movements and construction projects in China. To meet the emergency, Ho asked that the United States urge the Russians to contain the Japanese in Manchuria, and that Chennault be given the means to attack the Japanese supply centers in Hankow and the enemy's Yangtze shipping. Ferris relayed the warning to Stilwell with the comment that the Chinese were definitely concerned, that though they might have somewhat overestimated Japanese strengths it was reasonable to assume that Chinese intelligence sources were good.11
That same day of 31 May, Gen. Shang Chen, head of the Chinese Military Mission in Washington, delivered a similar warning to the President on behalf of the Generalissimo. The Generalissimo believed that the Japanese were moving six divisions from Manchuria to China, and that they aimed to seize the entire Hankow-Canton rail line and the airfields at Kweilin and Heng-yang. To meet this threat the Generalissimo asked that:
The 14th U.S. Army Air Force should be strengthened. With the exception of whatever small amount that is absolutely necessary, the air tonnage between India and Kunming should all be allocated for the shipment of gasoline and spare parts for the said Air Force. It is therefore urgently requested that the total tonnage for shipment of supplies to the 14th U.S. Army Air Force be increased to at least ten thousand (10,000) tons.
. . . the entire stock of gasoline, spare parts and aircrafts [sic] stored in Chengtu [for the B-29's] be immediately turned over to the 14th U.S. Army Air Force to be concentrated for operation along the Peiping-Hankow Railway.
It is also requested that the Chinese Air Force be strengthened, if possible.
The ground troops should also be strengthened. Request is made to have eight thousand (8,000) launcher rockets, each with one hundred (100) ammunition, delivered as soon as possible in order that the fire power of the Chinese troops in the various war areas may be effectively increased.12
The Generalissimo also called in Chennault and Stilwell for conferences. In Chennault's 29 May warning to Stilwell, he mentioned that he had received such an order. Probably as a result of this meeting, Chennault radioed Stilwell that his June Hump allocation of 6,700 tons was "hopelessly inadequate," that 10,000 tons delivered was the minimum.13 A day or so later Stilwell received direct word from the Generalissimo that his presence in Chungking was urgently desired.14
Probably the same day that he received the Generalissimo's request, Stilwell had word from Ferris that the Japanese "move south from Hankow has actually started."15 It was time for Stilwell to visit China.
Chennault Given 10,000 Tons
On the eve of his departure from his Burma headquarters, Stilwell received from Dorn a pessimistic account of how the Fourteenth Air Force and the Chinese were reacting to the Japanese drive. Dorn reported that officers of the Fourteenth Air Force were complaining to newspapermen of lack of support from theater headquarters, and that Chennault was charging that supplies had been diverted from him to support other U.S. activities. In that connection, Dorn asked Stilwell to recall that in April 1944 two thirds of the Y-Force Hump tonnage had gone to Chennault and in May more than half. Dorn charged that of 30,000,000 rounds of rifle ammunition allotted to the War Ministry in November and December 1943, the Chinese had taken delivery of but 7,000,000 rounds and now stated they did not know where the 7,000,000 rounds were. Radio sets and antitank rifles turned over to the Chinese at the end of 1943 were still in Chungking the last Dorn knew of them. Dorn reported that only a few days before 4 June 1944 the Chinese had in Kunming equipment and ammunition for five battalions of field artillery, plus considerable stocks of antitank ammunition. The only recent arms shipment to east China had been equipment for a battalion of field artillery, which U.S. Army authorities had sent to the Kweilin training center.16 This report, with others which came in from American liaison personnel as the fighting progressed in east China, may have reinforced Stilwell's conviction that merely giving arms and ammunition to Chinese authorities was not the solution to China's military problems.
In writing to Mrs. Stilwell on 2 June, the general offered a brief analysis, in language cryptic but still capable of translation in the light of Stilwell's relations with the Generalissimo and the President in 1942 and 1943. Stilwell wrote that the Generalissimo had been offered "salvation," that is, a modernized and re-equipped Chinese Army, but had rejected it. Now, with the Japanese driving through east China, it was "too late" to create the powerful ground force which, in Stilwell's repeatedly expressed opinion, was the only thing that could save China. "This [the crisis in China] is just what I told them [the Generalissimo and the President] a year ago," he added, "but they knew better."17
On 5 June 1944 Stilwell conferred with the Generalissimo. The Chinese leader began by questioning Stilwell as to the progress of the Chinese Army in India. He seemed dissatisfied with the performance of the Chindits but appeared to accept Stilwell's reassurances. Then the Generalissimo turned to China,
saying matters there were so serious that the entire air effort should be used to stop the Japanese--"the situation was one to be solved by air attack." Stilwell agreed with the Generalissimo that affairs were indeed serious but added that all ground and air resources in China Theater should be used. The Generalissimo, consistent with his 31 May request to the President, then asked Stilwell to give Chennault the B-29 supplies and to suspend transport of arms and ammunition over the Hump.
Stilwell replied that he was diverting 1,500 tons a month from the B-29 share of Hump tonnage, that he had asked JCS permission to use B-24's as flying fuel tankers. With these measures, the Fourteenth Air Force would get the 10,000 tons a month that Chennault had said would be enough. This did not satisfy the Generalissimo, who asked that the B-29 stocks at Cheng-tu be turned over to Chennault. Stilwell demurred, saying that he thought instead of suspending all Hump tonnage to Z-Force, to the unengaged portion of Y-Force, and to the U.S. Navy personnel in China. In the light of this he suggested that China National Aviation Corporation (CNAC) aircraft, which were under contract to the Chinese Government, should devote their entire capacity to Chennault's supplies. The Generalissimo objected, for he felt that the paper money CNAC craft flew in was essential to the war. Stilwell replied that Chiang could hardly ask for the B-29 tonnage if the Chinese did not use all their resources. Stilwell was applying the bargaining technique that he always sought to use with the Chinese. The Generalissimo then agreed to consider use of the CNAC transports and Stilwell promised to ask for permission to use the B-29 stocks if the situation grew worse.18
True to his word, Stilwell on 6 June shifted Hump allocations all across the board, cutting all other activities to raise the Fourteenth Air Force's allocation to 8,425 tons. With 1,500 tons from the B-29 allocation Chennault would have his 10,000 tons.19 To Marshall, Stilwell radioed that the Generalissimo and he had talked things over, listed the diversions he was making for Chennault, and as an "ace in the hole" requested that he be granted permission to use the B-29 stocks though he would not touch them save as a last resort.20 Then at Kunming Stilwell met with Chennault and the latter's key staff officers.
Stilwell rather chilled his audience by saying he could spare only thirty minutes for their problems, but as the meeting went on he appeared receptive to Chennault's plan to use the B-29's against Hankow, and then stated that he was, as Chennault's minutes put it, "willing and anxious to do everything possible to assist." Stilwell described what he was doing to divert Hump tonnage to Chennault's support. But Stilwell was not optimistic over the east
China situation for he thought there was "nothing to stop" the Japanese. "General Chennault countered with the statement that he felt confident that with the help of the VLR [B-29's against Hankow] he could stop the drive, but emphasized the necessity for immediate action." Chennault impressed Stilwell with the need to use the B-29's at once, and Stilwell implied that it would be done. When the meeting ended, Chennault presented Stilwell with his plan for current operations, and Stilwell took it with him.21
The logistical aspects of Chennault's plan were the important ones from the point of view of Headquarters, CBI Theater, for the airman was a free agent in his tactics. The plan Chennault now gave Stilwell asked for 4,823 tons for north China--of which the Chinese-American Composite Wing was to get 3,097 tons--4,283 tons for east China, and 2,546 tons for the Kunming area.22
The answer the War Department almost immediately returned to Stilwell's request for permission to use the B-29 stocks in an emergency was a refusal in terms that placed Stilwell effectively between the upper and nether millstones of pressure from Chennault and the Generalissimo to use the B-29 stocks in the current critical situation and the determination of General Arnold and the Army Air Forces to use the B-29's only against Japanese industry.
With reference to your 18238 of 6 June regarding VLR stocks in China these are not to be released to the Fourteenth Air Force without express approval from the Joint Chiefs of Staff. It is our view that the early bombing of Japan will have a far more beneficial effect on the situation in China than the long delay in such an operation which would be caused by the transfer of these stocks to Chennault. Furthermore, we have positive evidence in Italy of the limiting [sic] delaying effect of a purely air resistance where the odds were nearly 7,000 planes on our side to 200 on the German. Furthermore, the Twentieth Bomber Group represents a powerful agency which must not be localized under any circumstances any more than we would so localize the Pacific Fleet. Please keep this in mind.23
Surely here was faith in strategic bombardment at its highest pitch, while the comment on the probable worth of Chennault's efforts suggests that Stilwell had gone to the limit of his discretionary authority in giving Chennault such priority on Hump tonnage. In reply, Stilwell said: "Instructions understood, and exactly what I had hoped for. As you know, I have few illusions about power of air against ground troops. Pressure from G-MO forced the communication."24
At the time Stilwell replied to Marshall, the B-29's had just completed their shakedown attack on railway workshops in Bangkok, Thailand. On 15 June came the long-awaited attack on the Japanese homeland for which the President entertained such high hopes. About 221 tons of bombs were dropped on the Imperial Iron and Steel Works' Yawata plant, on Kyushu Island. Before the end of October 1944 attacks followed at intervals on the Showa Steel Works at An-shan, Manchuria, again on Yawata, on the Plajoe Refinery at Palembang, Sumatra, on the Okayama aircraft assembly plant, Takao harbor, and Heito airfield on Formosa, and on the Omura aircraft factory on Kyushu. In 1944 the China-based B-29's dropped a total of 3,623 tons, with no discernible effect on the east China crisis. Of the attack on Japanese steel production, which it was hoped the B-29's would cut in half, the Strategic Bombing Survey concluded: "The reduction in ingot steel supply, excluding electric steel, was not over 2 per cent and in finished steel less than 1 per cent."25

FOURTEENTH AIR FORCE AIRCRAFT INVENTORY BY TYPE OF AIRCRAFT: MARCH 1943-DECEMBER 1944

| End of month | Total | Fighter | Medium Bomber | Heavy Bomber | Other (a) |
| --- | --- | --- | --- | --- | --- |
| March 1943 | 162 | 107 | 12 | 34 | 9 |
| April 1943 | 158 | 110 | 12 | 30 | 6 |
| May 1943 | 174 | 118 | 16 | 33 | 7 |
| June 1943 | 176 | 119 | 16 | 35 | 6 |
| July 1943 | 182 | 123 | 11 | 35 | 13 |
| August 1943 | 180 | 117 | 14 | 34 | 15 |
| September 1943 | 193 | 127 | 15 | 36 | 15 |
| October 1943 | 247 | 183 | 15 | 35 | 14 |
| November 1943 | 303 | 209 | 29 | 43 | 22 |
| December 1943 | 285 | 188 | 23 | 51 | 23 |
| January 1944 | 349 | 212 | 63 | 46 | 28 |
| February 1944 | 351 | 209 | 65 | 49 | 28 |
| March 1944 | 421 | 257 | 73 | 55 | 36 |
| April 1944 | 447 | 287 | 74 | 47 | 39 |
| May 1944 | 547 | 377 | 82 | 36 | 52 |
| June 1944 | 547 | 339 | 79 | 39 | 90 |
| July 1944 | 614 | 388 | 86 | 45 | 95 |
| August 1944 | 541 | 323 | 81 | 48 | 89 |
| September 1944 | 690 | 433 | 102 | 43 | 112 |
| October 1944 | 777 | 524 | 109 | 43 | 101 |
| November 1944 | 793 | 531 | 106 | 49 | 107 |
| December 1944 | 902 | 531 | 104 | 73 | 194 |

a. Includes Photo, Liaison, Transport, Utility Cargo, and Trainer Planes.

Source: Black Book, prepared for Lt. Gen. A. C. Wedemeyer, 1945, OCMH.
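The inventory table lends itself to a simple arithmetic check: for each month, the Total column equals the sum of the fighter, medium bomber, heavy bomber, and other columns. The short script below is an illustrative sketch of such a check, not part of the original history; the data literal is simply a transcription of the table above, and the month labels and variable names are the author's own.

```python
# Illustrative consistency check for the Fourteenth Air Force inventory table.
# Each row: (end of month, total, fighter, medium bomber, heavy bomber, other).
rows = [
    ("Mar 1943", 162, 107, 12, 34, 9),
    ("Apr 1943", 158, 110, 12, 30, 6),
    ("May 1943", 174, 118, 16, 33, 7),
    ("Jun 1943", 176, 119, 16, 35, 6),
    ("Jul 1943", 182, 123, 11, 35, 13),
    ("Aug 1943", 180, 117, 14, 34, 15),
    ("Sep 1943", 193, 127, 15, 36, 15),
    ("Oct 1943", 247, 183, 15, 35, 14),
    ("Nov 1943", 303, 209, 29, 43, 22),
    ("Dec 1943", 285, 188, 23, 51, 23),
    ("Jan 1944", 349, 212, 63, 46, 28),
    ("Feb 1944", 351, 209, 65, 49, 28),
    ("Mar 1944", 421, 257, 73, 55, 36),
    ("Apr 1944", 447, 287, 74, 47, 39),
    ("May 1944", 547, 377, 82, 36, 52),
    ("Jun 1944", 547, 339, 79, 39, 90),
    ("Jul 1944", 614, 388, 86, 45, 95),
    ("Aug 1944", 541, 323, 81, 48, 89),
    ("Sep 1944", 690, 433, 102, 43, 112),
    ("Oct 1944", 777, 524, 109, 43, 101),
    ("Nov 1944", 793, 531, 106, 49, 107),
    ("Dec 1944", 902, 531, 104, 73, 194),
]

# Flag any month whose listed total disagrees with the sum of the type columns.
for month, total, *by_type in rows:
    computed = sum(by_type)
    if computed != total:
        print(f"{month}: listed total {total} differs from computed {computed}")
        break
else:
    print("Every monthly total equals the sum of the four type columns.")
```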
The Japanese Drive Rolls On in East China
The Japanese had been very pleased with the success of their Honan operations, which had cost them but 869 dead and 2,280 wounded.26 Promoted to field marshal on 2 June, Hata now aimed to take the two communication centers of Changsha and Heng-yang. (See Map 18.) The initial attack would be launched by the 40th, 116th, 68th, 3d, and 13th Divisions, which were stretched along the south bank of the Yangtze in that order from west to east. The 58th, 34th, and 27th Divisions would follow as a second echelon.
The Chinese troops facing them were those of the IX War Area, mostly southern Chinese divisions under a Cantonese, Gen. Hsueh Yueh. After the war, the Japanese estimated Hsueh's strength at about forty divisions. In addition to Chennault's airmen, there were a few other American resources capable of being committed to the campaign, namely, the personnel and the matériel accumulated at Kweilin for the Infantry Training Center. As the Japanese menace grew, the time for training seemed to Stilwell's headquarters to have passed, while the need for accurate and timely information about the Chinese and technical aid to them was steadily more acute. In the first weeks of June, CBI Theater headquarters radioed that Chinese permission in principle had been received to send observer teams to the IX War Area. A party under Col. Woods King was promptly sent to join Hsueh Yueh and his XXVII and XXX Group Armies. When the teams joined their respective Chinese units they found that General Hsueh had established his headquarters at Lei-yang, 100 miles south of Changsha, and they understood that Hsueh would be directly responsible for operations east of the Hsiang River. At Pao-ching (Shao-yang), 120 miles southwest of Changsha, Hsueh's deputy, Maj. Gen. Liu Chi-ming, was reported to have control of operations against the Japanese west of the river.27
The American observer group with Hsueh Yueh, which had been most coolly received, radioed to General Lindsey that the Chinese badly needed arms and ammunition. A special train was then dispatched on 13 June from Kweilin to Heng-yang, with 9 37-mm. antitank guns and 3,000 rounds, 200 Boys antitank rifles and 6,080 rounds, 20 Chinese Maxim machine guns with 218,600 rounds, 26 Bren guns and 13,728 rounds, and 2 rocket launchers with 20 rounds. Their arrival cheered General Hsueh, and his attitude was more cordial. Other observer and artillery teams went out, until by mid-July there were sixteen with Chinese armies south of the Yangtze River.
U.S. Army liaison teams were sent to the headquarters of the IX and IV War Areas, the XXIV and XXVII Group Armies, and the 31st, 37th, 46th, 62d, 64th, 79th, and 100th Armies. The three battalions of the Chinese 29th Field Artillery Regiment were equipped with U.S. 75-mm. pack howitzers and aided
by American liaison teams. These three battalions were then attached to the 31st, 46th, and 64th Armies. Another battalion of Chinese artillery, supporting the 10th Army at Heng-yang, also had U.S. pack howitzers and training.
The action of CBI Theater in giving arms and ammunition to General Hsueh was strongly protested by the Generalissimo's representatives in Kweilin. They said shipments to Hsueh Yueh might fall into the hands of bandits who would use them against the central government. Z-Force Operations Staff headquarters interpreted these protests and warnings as hints that the Generalissimo feared Hsueh Yueh might revolt.28
To meet the problem of giving effective tactical air support despite the handicaps of language problems and poor communications, Chennault had requested and obtained permission on 26 April to expand his air-ground liaison net with General Hsueh's troops. When the Japanese drove for Changsha the 5329th Air Ground Force Resources Technical Staff (Provisional) (AGFRTS) was in the field with thirty-five officers and sixty-five enlisted men.29
The 3d and 13th Divisions on the eastern side of the Hsiang River began moving south 27 May, on a course that would permit them to cut off Changsha from a Chinese force at Liu-yang. The next day the 40th, 116th, and 68th Divisions moved south, directly on Changsha. In the last Japanese drive on Changsha, in December 1941-January 1942, the Japanese motive had been to distract the Chinese from operations to relieve Hong Kong. The 1941 drive on Changsha had been stoutly resisted by the Chinese, and when the Japanese, their mission accomplished, began to pull back north, the Chinese and their sympathizers concluded that a great victory for China had been won. Earlier Japanese operations against Changsha had been to disperse threatening Chinese concentrations, and Japanese withdrawals had been followed by claims of Chinese victories. So, to the Chinese and their friends, Changsha was a name with which to conjure. In 1944 the Japanese were determined to hold what they might win.
In the first days of the drive, the Japanese met resistance only in the Ta-mo Shan hills, where the 99th Army held stubbornly on the west flank of the Japanese advance. The Japanese finally contained it and resumed their southward drive. By 10 June, the 58th, 116th, 68th, 3d, and 13th Divisions were lined up along the Liu-yang Ho, the last geographic barrier north of Changsha. Next day they attacked. The 58th and 116th Divisions found little resistance in front of them. The 3d and 13th Divisions, driving the Chinese from steep hillside positions to take Liu-yang on 14 June, cut off Changsha from the east.
EVACUATION OF KWEILIN. Refugees at the Kweilin railroad station wait for transportation.

The key Chinese strongpoint at Changsha was a mountain just north and west called Yueh-lu Shan on which General Hsueh, IX War Area commander, had massed his artillery. The 34th Division began attacking it on 16 June while the 58th Division, which had so easily crossed the Liu-yang Ho, swung west to attack Changsha itself that same day. Changsha's garrison, the 4th Army, abandoned the town over the 16th and 17th and marched toward Pao-ching to the southwest. Changsha was occupied by the Japanese on 18 June.30
On 23 June an American liaison team attached to Hsueh's deputy met the 4th Army near Pao-ching. Some of the Chinese officers and men stated they had been attacked by gas sprayed from Japanese aircraft, whose effects, as described by them, seemed like tear gas, though none of the wounded showed signs of it. The 4th Army's commander, a General Chin, was voluble in saying that gas bombs and gas shells had forced them to yield Changsha. On the night of 23 June an American air force sergeant, who had been an air-ground liaison radio operator at Changsha, walked into Pao-ching. He had awakened a few days before to find the Chinese had quietly walked out of Changsha, and hastily followed. Present in the town throughout the engagement, he had heard nothing of gas attacks. The sergeant had observed some Japanese artillery fire
around Changsha; there had been quite a few Japanese air attacks, but he knew nothing of any heavy fighting. Inspecting the 4th Army troops, the American liaison team thought they were fresh, in good condition, and found nothing to suggest that they had just emerged from a bitter defeat. The Japanese postwar account makes no mention of any heavy fighting.31
With Changsha in Japanese hands, Stilwell's staff feared the Japanese might enter Kweilin in another seven days, and so began moving British and American nationals from the Kweilin area. Hospital patients, missionaries, Red Cross workers, and teachers began moving out by air around 21 June. Surplus personnel of the Infantry Training Center and Z-FOS began leaving by truck on 27 June. Obviously, CBI Theater headquarters thought the Japanese would shortly take the key points in east China.32
Vice-President Wallace Suggests Stilwell's Recall
On 19 May, General Ferris had told Stilwell and General Sultan, Stilwell's deputy theater commander in India, that he had learned from the U.S. Embassy of a forthcoming visit of the Vice-President of the United States, Henry A. Wallace, and that Wallace wanted to see something of the U.S. Army.33
Mr. Wallace arrived at Chungking on 20 June and was met by a suitable delegation, including Generals Ferris and Chennault. On leaving for his Kunming headquarters, Chennault left behind him 1st Lt. Joseph W. Alsop to be "air aide" to the Vice-President. Before the war, Lieutenant Alsop had been a nationally syndicated columnist and had known Mr. Wallace both socially and professionally. Stilwell's staff feared that Chennault had stolen a march on the theater commander.34
VICE-PRESIDENT WALLACE is greeted on his arrival at Chungking by a delegation of military and civilian dignitaries.

A report from General Ferris, summarizing "the views of the American Government to be communicated to President Chiang Kai-shek," gave an idea of what the State Department thought the Vice-President should discuss with the Generalissimo, and this seems to have been the only attempt by an agency of the U.S. Government to effect any co-ordination between the Vice-President and the CBI Theater commander. The suggestions for Wallace's guidance referred several times to the desirability of the Nationalist and Communist forces fighting the Japanese rather than one another, and hinted at the lifting of the blockade that the Generalissimo maintained against the Communists. The need for better Sino-Soviet understanding was stressed. The message struck a chilling note, given emphasis by the current background of crisis in east China, that "China must depend upon herself at the moment rather than to look for major assistance from the outside," which latter would not be forthcoming until after the defeat of Germany.35
Over the next few days, the Generalissimo in Chungking and Chennault in Kunming absorbed Wallace's time and energies, so that Stilwell's staff found little opportunity to present the theater commander's point of view, nor did Wallace feel that he could spare the time to visit General Stilwell in Burma. Reporting on his efforts to place Stilwell's case before the Vice-President, Stilwell's political adviser, John P. Davies, wrote to Stilwell:
Now for the Jones-Davies conducted tours angle. It was a flop because (1) we arrived late (2) we had no letter of introduction to Mr. W. saying we were Theater Commander's Reception and Itinerary Committee until after Alsop had presented a letter from himself to the VP stating that Chennault had appointed him Air Aide to the VP (3) in Kunming Wallace was for three days Chennault's guest with the result that Jones and I were given the best job of run-around I have ever been up against. It was done by two of Claire's boys. I blew up with Chennault about it, uttering a few homely truths about the damage done by
the Chennault-Stilwell feud. To his juniors I read the same line and added that I had as little use for Chennault disciples who needled Chennault on Stilwell in the belief that they thereby won Claire's gratitude as I had for Stilwell enthusiasts who did the same thing to the Theater Commander. It seemed to jar them somewhat so that on the night of June 26 when Wallace, Chennault, Glenn, and the two stooges discussed with me the possibility of the VP visiting you in the Valley, Chennault and Co. were very reasonable and decent, with the exception of [individual's name deleted] who is, of course, congenitally conspiratorial. Decision by VP was against because of weather uncertainty and tight future schedule.36
In the conversations between Wallace and the Generalissimo, General Ferris and John S. Service of the State Department, who was with the Chungking headquarters as a political adviser to General Ferris, were allowed to present themselves on 23 June to support the project of sending an observer group to the Chinese Communists to collect Japanese order-of-battle and target information and to aid search and rescue of Air Forces personnel shot down over north China. After stating the position of CBI Theater headquarters on this matter they withdrew. Headquarters, CBI Theater, was not invited to join in Wallace's discussions of the Communist problem, and Wallace did not try to obtain General Stilwell's views on anything.37
Wallace's advocacy of the observer group project was successful. The Generalissimo agreed that a small party could go, that they could communicate directly with theater headquarters, and that they would be free of Nationalist control.38 The directive from theater headquarters ordered the commanding officer, Col. David D. Barrett, to devote himself to ". . . intelligence concerning both our allies and the enemy, and affording assistance to downed pilots. . . . (Under no circumstances will you engage in discussions or make commitments of any kind pertaining to political, economic, sociological, or military [sic]. All matters of policy and commitments remain responsibility of theater commander.)" The project received the code name DIXIE Mission. Personnel included sixteen officers and enlisted men, plus two U.S. Foreign Service officers, John S. Service and Raymond P. Ludden, who had been assigned to theater headquarters as political advisers.39
After weighing the Generalissimo's and Lieutenant Alsop's comments on General Stilwell, Wallace decided to recommend that Stilwell be recalled. Many years later, Wallace testified that his first impulse had been to suggest Stilwell's replacement by Chennault, but that Lieutenant Alsop had persuaded him that the War Department would not approve, and that Chennault could not leave the campaign he was then directing. On the ground that the Generalissimo had stated that Stilwell could not grasp what the Chinese statesman
called "political considerations," Wallace told President Roosevelt that he considered Stilwell unsuited to his post.
Wallace recommended that another general officer of the highest merit be appointed who could win the Generalissimo's confidence, command all U.S. forces in China Theater, and co-ordinate the Sino-American military effort in China. He added that the Generalissimo had been favorably impressed by General Wedemeyer, who had recently visited Chungking on behalf of SEAC. As an alternative, Wallace suggested the appointment to China of a presidential representative with considerable independent powers, with the right to approach the President directly, and with an official position as Stilwell's deputy.
Wallace believed that the President should take determined steps to stop the steady deterioration of the east China situation or be prepared to accept the loss of China as a base from which to support U.S. operations in the Pacific. He stated that a Sino-American offensive, in its first phase primarily a guerrilla attack, should be launched in east China to avert the loss of Chennault's fields. Wallace insisted that the military situation in China was not hopeless and that the present crisis might even improve American prestige in the Far East since the Generalissimo seemed very eager for military aid and guidance, and, if wisely approached, would probably inaugurate reforms in China's internal political structure.40
Far to the south and west of China, the steady progress of Stilwell's operations in Burma was affecting the command structure within SEAC. Ever since March 1944 Mountbatten had been anxious to see Stilwell transferred from SEAC to China Theater.41 Observing Mountbatten's problems with his subordinates, Wedemeyer told Marshall that SEAC's commanders in chief resisted Mountbatten as Supreme Commander. It was notable, added Wedemeyer, that Stilwell, though often "cantankerous," co-operated promptly as soon as he had a clear-cut directive from Mountbatten.42 Moreover, when Stilwell had solved the command problem presented by his having to act as a corps commander in north Burma under operational control of General Slim, whom he outranked, Stilwell had stipulated that this arrangement would endure only until his Chinese troops reached Kamaing. (See Chart 1.) At that point he would insist on being released from Slim's control, to pass directly under Mountbatten, for he then could reasonably anticipate a speedy meeting with Chinese troops from China and the reassembling of the Chinese Army in India and the Y-Force into an elite force, the Thirty Divisions, under the Generalissimo, for service in China. On 20 May Stilwell had given notice to SEAC that it would soon be time for him to leave Slim's control. Mountbatten therefore thought
it opportune to ask the British Chiefs of Staff to take up with the Joint Chiefs of Staff the appointment of an officer to fill Stilwell's place in SEAC.43
Currently, Mountbatten was moving toward a complete rearrangement of the higher officers in SEAC. From the time of his arrival in the Far East, he had had trouble with his three commanders in chief. When the Imphal crisis arose, Mountbatten was dissatisfied with Gen. Sir George Giffard's conduct of operations, and when later the Supreme Allied Commander found Giffard taking what Mountbatten considered a highly negative approach toward an aggressive conduct of operations, he resolved to ask for Giffard's relief. Mountbatten's relations with Admiral Somerville had been equally difficult. Somerville had refused to treat him as a Supreme Commander and in Mountbatten's opinion tried to make of him simply the chairman of a commanders-in-chief committee. As for the RAF commander, Air Chief Marshal Peirse, Mountbatten was not seeking his relief because he did not wish to change all of his principal subordinates simultaneously.44
In June 1944 General Marshall was in London. There were meetings of the Combined Chiefs of Staff and after one of them General Brooke, Chief of the Imperial General Staff, took Marshall aside, thanked him for his many kindnesses to Field Marshal Dill, who was very sick, then startled Marshall by saying that Stilwell would have to be recalled because he did not get along with Mountbatten's commanders in chief or the British in Burma.
General Marshall found this difficult to understand, and asked Brooke why the only aggressive and successful commander in chief Mountbatten had should be singled out for recall. Brooke replied that the British planned to relieve Giffard, Somerville, and Peirse, which to Marshall put a different light on the situation. On Marshall's return to the United States the Joint Chiefs of Staff began to study the question of sending Stilwell up to China Theater.45
No new CCS directives to SEAC resulted from those June meetings of the Combined Chiefs of Staff. The CCS were satisfied that their current policy in regard to Burma operations was clearly understood. The main object of current operations was to send the maximum flow of supplies over the Hump. The Kohima-Imphal road had to be cleared, Myitkyina taken, and a defensive position held in the Mogaung-Myitkyina area. There were no comments about any exploitation of possible victory at Imphal by the Fourteenth Army.46
Advised by Marshall that the CCS talks at London had discussed the problems of obtaining the maximum flow of supplies to China, Stilwell commented that the maximum flow would not be realized by a defensive attitude at Myitkyina or anywhere else, and that it would not be realized without bringing most of the supplies in on the ground. He thought it was too bad the Burma campaign had not been
launched in October 1943, for he believed in that case the Allies would now hold north Burma, and by implication a new line of communications to China.47
Stilwell Nominated for Command
The situation within China Theater in June 1944 was such that the President was now ready to go beyond the positions he had taken in March and April and put the full weight of his support behind Stilwell, accepting the Joint Chiefs' analysis of the state of affairs in China and their suggestions for a drastic remedy. Contributing to the choice of Stilwell rather than Chennault was the Army Chief of Staff's conclusion that Chennault's air offensive and the massive ATC structure erected to support it had been a waste of the nation's resources. Marshall thought that "it was not yielding any dividends in China," and was hobbling the war in Europe. The Allies had been handicapped in exploiting the Rome break-through in June 1944 because transport aircraft and crews were in India to support Chennault in his attempt to stop eight Japanese divisions by tactical air.48
Explicitly facing the issues created by Mountbatten's proposal that Stilwell be transferred to China and the Wallace suggestion that Stilwell be recalled, Maj. Gen. Thomas T. Handy of OPD recommended to Marshall on 30 June 1944 that as "the most effective answer" Stilwell be promoted from lieutenant general to general. In supporting his suggestion, Handy presented an eloquent eulogy of Stilwell's North Burma Campaign.
Handy wrote that no general of modern times had faced and overcome the obstacles that Stilwell had. "Against the initial apathy on the part of both our Allies, General Stilwell has welded an effective Chinese Army in Burma." Chinese tactics under his command had been superb, and the campaign for Myitkyina a masterpiece. "Beset by a terrific struggle with the jungle, the monsoons, the Japanese, logistics, to say nothing of mite typhus complications, he has staged a campaign that history will call brilliant."49
After studying Handy's memorandum, Marshall on 1 July placed the matter before Stilwell for his comments, in a radio embodying many of Handy's phrases. Saying that he had waited until Stilwell had consolidated the Mogaung-Myitkyina area before raising the matter, the Army Chief of Staff told Stilwell that the British were pressing for a command rearrangement, and that, of far greater importance to Marshall, things were steadily getting worse in China. He asked Stilwell for a candid opinion on whether the latter thought he could do some good by taking an active part in operations in central China. Marshall remarked that "the pressure" was on to get more tonnage over the
Hump to Chennault, "an immense effort in transportation" that possibly would be completely wasted. He asked what Stilwell would think of transferring his principal efforts to "the rehabilitation and in effect the direction of the leadership of the Chinese forces in China proper," with Sultan commanding in Burma.50
Stilwell's reply began by surveying the India-Burma side of his several positions. He assured Marshall that his present position as acting Deputy Supreme Allied Commander need not stand in the way of his going to China Theater. There was need for a vigorous deputy to Mountbatten, but he doubted that anyone would be allowed to go about SEAC as a trouble shooter, for no one would be permitted to dismiss any of SEAC's general officers, even for gross incompetence.
Then Stilwell turned to the China problem, and in his discussion revealed no enthusiasm for the role of field commander of the Chinese forces. After explaining for Marshall some of the personnel problems involved in finding someone to replace him as commander of the Chinese Army in India, Stilwell offered his comments on the situation in China Theater:
Second, the China situation. If I go to China, I could detail Sultan to take over this command during my absence and I believe the Chinese would offer no objection. But if Sultan is made deputy we have the same old situation as soon as he takes command here. Merrill could, but he is physically out of the picture. . . . It is a difficult matter to find a man to command US, British, and Chinese units acceptably. Supposing a steady, seasoned, senior man can be found for this job, and I go to China. The G-Mo is scared, but he is still driving from the back seat, both on the Salween and Hunan. If the President were to send him a very stiff message, emphasizing our investment and interest in China, and also the serious pass to which China has come due to neglect of the Army, and insisting that desperate cases require desperate remedies, the G-Mo might be forced to give me a command job. I believe the Chinese Army would accept me. Ho Ying-chin would have to step out as Chief of Staff, or if he kept the title, give up the power. Without complete authority, over the Army, I would not attempt the job. Even with complete authority, the damage done is so tremendous that I can see only one chance to repair it. This is to stage a counter-offensive from Shansi, and attacking through Loyang toward Chengchow and Hankow. There are units in West Hupeh that could help. The Communists should also participate in Shansi, but unless the G-Mo makes an agreement with them, they won't. Two years ago they offered to fight with me.51 They might listen now. Time and space factors are against us, even if we had good will and full cooperation. You can readily imagine what is involved in organizing and moving such a scattered and loose-jointed force but outside of this one shot I see no chance to save the situation. The units on the Salween should not be withdrawn and cannot be withdrawn in time. The garrison on Indo-China border must stay there, or Kunming is open to attack. I refrain from any comments about the efficacy of air power because you have heard them before. The case is really desperate. The harvest of neglect and mismanagement is now being reaped and without very radical and very quickly applied remedies, we will be set back a long way. These matters must be put before the G-Mo in the strongest terms or he will continue to muddle along and scream for help without doing any more than he is doing now which is nothing. To sum up, there is still a faint chance to salvage something in China
but action must be quick and radical and the G-Mo must give one commander full powers. If the President can get this idea across, we can at least try hoping that a weak and disjointed effort, by dint of numbers and determination, might stop the Japs before they finish breaking up all resistance. The chances are definitely not good, but I can see no other solution at the moment.52
Satisfied with Stilwell's answer, Marshall accepted Handy's suggestion that Stilwell's promotion be recommended and fitted into the solution to China's problem that the Joint Chiefs of Staff were about to propose to the President. In drafting their proposal, the Joint Chiefs were candid almost to bluntness in reminding the President of their own long-held views on China, which hitherto he had disregarded. As Marshall remarked later, the JCS placed Chennault's promises in one column, then pointed out how Chennault had failed on each. Against them they placed Stilwell's predictions and related the fulfillment of each.53
4 July 44
MEMORANDUM FOR THE PRESIDENT:
The attached memorandum from the Joint Chiefs of Staff with a proposed message to Chiang Kai Shek are for your consideration.
We are in full agreement that this action is immediately necessary to any chance to save the situation in China.
For the Joint Chiefs of Staff,
(Signed) WILLIAM D. LEAHY
Admiral, U.S. Navy,
Chief of Staff to the Commander in Chief
of the Army and Navy.
MEMORANDUM FOR THE PRESIDENT FROM THE U.S. CHIEFS OF STAFF:
The situation in Central China is deteriorating at an alarming rate. If the Japanese continue their advances to the west, Chennault's 14th Air Force will be rendered ineffective, our very long-range bomber airfields in the Chengtu area will be lost and the collapse of China must inevitably result. Whether or not there is a possibility of our exerting a favorable influence on the chaotic condition in China is questionable. It is our view, however, that drastic measures should be taken immediately in an effort to prevent disaster to the U.S. effort in that region.
The Chinese ground forces in China, in their present state of discipline, training and equipment, and under their present leadership, are impotent. The Japanese forces can, in effect, move virtually unopposed except by geographical logistic difficulties.
From the beginning of the war, we have insisted on the necessity for building up the combat efficiency of the Chinese ground forces, as the only method of providing the necessary security for our air bases in China. The pressure on us from the Generalissimo throughout the war has been to increase the tonnage over the Hump for Chennault's air in particular, with the equipment and supply for the ground forces as incidental only. This presents the problem of an immense effort in transportation, with a poorly directed and possibly completely wasteful procedure. Chennault's air alone can do little more than slightly delay the Japanese advances. We have had abundant proof of this in our operations against the German army.
Our experience against both the Germans and the Japanese in theaters where we have had immensely superior air power has demonstrated the inability of air forces alone to prevent the movement of trained and determined ground armies. If we have been unable to stop the movement of German ground armies in Italy with our tremendous air power, there is little reason to believe that Chennault, with the comparatively small air force which can be supported in China, can exert a decisive effect on the movement of Japanese ground forces in China. The more effective his bombing of their shipping and the B-29 operations against Japan the more determined will be the Japanese thrusts in China.
Under the present leadership and organization of the Chinese armies, it is purely a question of Japanese intent as to how far they will advance into the interior of China. The serious pass to which China has come is due in some measure to mismanagement and neglect of the Army. Until her every resource, including the divisions at present confronting the communists, is devoted to the war against the Japanese, there is little hope that she can continue to operate with any effectiveness until the end of the war.
The time has come, in our opinion, when all the military power and resources remaining to China must be entrusted to one individual capable of directing that effort in a fruitful way against the Japanese. There is no one in the Chinese Government or armed forces capable of coordinating the Chinese military effort in such a way as to meet the Japanese threat. During this war, there has been only one man who has been able to get Chinese forces to fight against the Japanese in an effective way. That man is General Stilwell.
The British are pressing for a readjustment of command relationships in the Southeast Asia Command, maintaining that General Stilwell's position as Deputy Supreme Commander and that of the Commander of the Chinese Corps in India are incompatible. The British would undoubtedly concur in the relief of General Stilwell from his present assignment.
After full consideration of the situation in China, we recommend:
That you dispatch to the Generalissimo the attached message, urging him to place General Stilwell in command of all Chinese armed forces.
That you promote General Stilwell to the temporary grade of General, not only in recognition of his having conducted a brilliant campaign with a force, which he himself made, in spite of continued opposition from within and without and tremendous obstacles of terrain and weather, but in order to give him the necessary prestige for the new position proposed for him in China.
We are fully aware of the Generalissimo's feeling regarding Stilwell, particularly from a political point of view, but the fact remains that he has proved his case or contentions on the field of battle in opposition to the highly negative attitudes of both the British and the Chinese authorities. Had his advice been followed, it is now apparent that we would have cleared the Japanese from northeast Burma before the monsoon and opened the way to effective action in China proper. Had his advice been followed the Chinese ground forces east of the Hump would have been far better equipped and prepared to resist or at least delay the Japanese advances.
That in case Stilwell goes to China, we propose the following arrangements in the Southeast Asia Command to the British Chief of Staff:
Sultan to command the Chinese Corps in Burma under the general direction of Stilwell.
Wheeler, now senior administrative officer on Mountbatten's staff, to succeed Stilwell as Deputy to Mountbatten.54
The President accepted the recommendations of the Joint Chiefs of Staff,
and the message which they had prepared for his signature was quickly sent to General Ferris for personal presentation to the Generalissimo. To Stilwell at his jungle headquarters, Marshall sent extracts of the key passages from the President's radio to the Generalissimo so that Stilwell would know what was to come, then read him a homily on why, in Marshall's opinion, the President's support, and that of the Generalissimo, had been so long withheld: ". . . the difficulty has been the offense you have given, usually in small affairs, both to the Generalissimo and to the President." At Cairo, Marshall had cautioned Stilwell that the President disliked his frequent loosing of barbed shafts at the Chinese leader. Now, Marshall warned him again that he must do all in his power to avoid offending the Generalissimo. Marshall also remarked that had Stilwell personally, or some member of his staff, devoted more attention to establishing good relations with the President, Mr. Roosevelt's support would have been forthcoming long before.55 Stilwell replied:
Your messages and instructions are unmistakably plain. If this new assignment materializes, I will tackle it to the best of my ability. I am keenly aware of the honor of the President's confidence and of yours, and I pledge my word to him and to you that I will "consistently and continuously avoid unnecessary irritations" and get on with the war. I fully realize that I will have to justify that confidence, and I find it even in prospect a heavy load for a country boy.56
The American attempt to persuade the Generalissimo to accept Stilwell as his field commander began with a radio sent by the President on 6 July 1944. As the President had directed in May 1944, after hearing that the Chinese were rephrasing his messages to the Generalissimo to make them read more agreeably to that dignitary, General Ferris, the senior American officer in Chungking, delivered the message in person:
The extremely serious situation which results from Japanese advances in Central China, which threaten not only your Government but all that the U.S. Army has been building up in China, leads me to the conclusion that drastic measures must be taken immediately if the situation is to be saved. The critical situation which now exists, in my opinion calls for the delegation to one individual of the power to coordinate all the Allied military resources in China, including the Communist forces.
I think I am fully aware of your feelings regarding General Stilwell, nevertheless I think he has now clearly demonstrated his farsighted judgment, his skill in organization and training and, above all, in fighting your Chinese forces. I know of no other man who has the ability, the force, and the determination to offset the disaster which now threatens China and our over-all plans for the conquest of Japan. I am promoting Stilwell to the rank of full general and I recommend for your most urgent consideration that you recall him from Burma and place him directly under you in command of all Chinese and American forces, and that you charge him with the full responsibility and authority for the coordination and direction of the operations required to stem the tide of the enemy's advances. I feel that the case of China is so desperate that if radical and properly applied remedies are not immediately effected, our common cause will suffer a disastrous set-back.
I sincerely trust that you will not be offended at the frankness of my statements and I assure you that there is no intent on my part to dictate to you in matters concerning China; however, the future of all Asia is at stake along with the tremendous effort which America has expended in that region. Therefore I have reason for a profound interest in the matter.
Please have in mind that it has been clearly demonstrated in Italy, in France, and in the Pacific that air power alone cannot stop a determined enemy.
Matter of fact, the Germans have successfully conducted defensive actions and launched determined counter-attacks though overwhelmingly outnumbered in the air.
Should you agree to giving Stilwell such assignment as I now propose, I would recommend that General Sultan, a very fine officer who is now his deputy, be placed in command of the Chinese-American force in Burma, but under Stilwell's direction.57
As a concrete expression of the United States gratitude for the victories in north Burma, as a sign of the special trust and confidence which the President now reposed in him, and to give him a rank suitable for the great responsibilities which, it was confidently expected, would soon be his, Stilwell was promoted to full general on 1 August 1944.58 He then shared the rank with only Generals Marshall, MacArthur, Eisenhower, and Arnold.
The Generalissimo Agrees "in Principle"
After two years during which the United States for a variety of reasons had been content to let Stilwell's position in China Theater be nebulous and ill-defined, it was now moving to undo the decision of the ARCADIA Conference of December 1941 and make Stilwell in effect responsible for the conduct of operations against the Japanese in China Theater. One may doubt that this development was pleasing to the Generalissimo. Stilwell's aggressiveness was well known. If Stilwell took command, he would surely try to devote more of the military resources of Nationalist China to the east China campaign. The current Japanese offensive had been aimed at the east China airfields rather than at the Generalissimo's strongholds of Kunming and Chungking, and the Generalissimo's national troops, as distinguished from the troops of the war area commanders, had taken no part in the east China fighting.59 If Stilwell as field commander directed the Generalissimo's own troops against the Japanese, the enemy might well turn his attention to Chiang himself. Moreover, moving major elements of Nationalist forces to east China would affect the Generalissimo's position in Chinese politics in relation not merely to the Communists but also to the several war area commanders.
There may well have been a personal factor. Ever since 1937 propaganda
had hailed the Generalissimo as a military genius who had kept the Japanese at bay, who had outfought and outmaneuvered them and needed only modern weapons to drive them into the sea. Now that eight Japanese divisions were racing through east China, the United States was asking the Generalissimo to drop his military role and let Stilwell, a foreigner whom he remembered as a colonel riding perched on top of a boxcar with the humble Chinese soldiery--the same Stilwell who called him "Peanut" and gibed at him as an incompetent dilettante--take command under the Generalissimo of both the Generalissimo's forces and those of his deadly enemies, the Communists.
Later events suggest that the Generalissimo resolved that Stilwell should on no account hold command in China Theater, and that he rallied all his diplomatic resources to the task of avoiding any such outcome of the crisis. At first glance, a diplomatic struggle between China and the United States in 1944 would have seemed a most unequal one. The Generalissimo wanted lend-lease, credits, and air support, and the United States was the only likely source. The longer China was isolated, the more desperate would be her need for help. If the United States was to insist on a quid pro quo, how could China refuse?
But the Generalissimo had advantages and allies of the greatest usefulness: First, during 1944 the Generalissimo and T. V. Soong, who had quarreled in the fall of 1943, had been reconciled. Soong's assistance brought his great knowledge of the U.S. scene and his influential connections to the Generalissimo's side. Second, the Americans were not united among themselves. Harry Hopkins, though no longer wielding the power he once did, was an intimate friend of Soong's and had long opposed Stilwell.60 Lieutenant Alsop, of Chennault's staff, was Soong's adviser.61 And third, the President was preoccupied with many things and gave the China Theater command question only a fraction of his attention, whereas for the Generalissimo it was a matter that received the fullest exercise of his diplomatic talents.
After weighing the President's message, the Generalissimo adopted a tactic that he had used before, most notably in the winter of 1941-42 when the American Military Mission to China was trying to integrate the American Volunteer Group into the Tenth Air Force. The Generalissimo on 8 July agreed in principle but found a major obstacle to carrying out the President's suggestion at once:
July 8, 1944.
My dear Mr. President:
I have much pleasure in acknowledging receipt of your telegram which came on July 7, conveying to me your deep concern over the war situation in China and your effective suggestion to meet it.
While I fully agree with the principle of your suggestion that directly under me General Stilwell be given the command of all Chinese Army and American troops in this theater of war, I like to call your attention to the fact that Chinese troops and their internal political conditions are not as simple as those in other countries. Furthermore, they're not as easily directed as the limited number of Chinese troops who are now fighting in north Burma. Therefore, if this suggestion were carried out in haste it would not only fail to help the present war situation here but would also arouse misunderstanding and confusion which would be detrimental to Sino-American cooperation. This is the real fact of the situation and in expressing my views on your exacting and sincere suggestion, I have not tried to use any misleading or evasive language. Hence, I feel that there must be a preparatory period in order to enable General Stilwell to have absolute command of the Chinese troops without any hindrance. In this way I shall not disappoint you in your expectation.
I very much hope that you will be able to despatch an influential personal representative who enjoys your complete confidence, is given with full power and has a far-sighted political vision and ability, to constantly collaborate with me and he may also adjust the relations between me and General Stilwell so as to enhance the cooperation between China and America. You will appreciate the fact that military cooperation in its absolute sense must be built on the foundation of political cooperation.
Our people have an unwavering faith in your friendship and sincerity towards China. I had already explained in detail to Vice President Wallace on this subject and I trust he will transmit my views to you.
I shall much appreciate it if you will discuss directly with Dr. Kung on any important question of this nature whenever it should arise in the future. If you have any telegram for me you can give it to him for transmittal.
With my warmest personal regards.
When the Generalissimo's answer came to Washington, the President was preparing to visit General MacArthur and Admiral Chester W. Nimitz at Honolulu, there to discuss major questions of Pacific strategy. The President was apparently pleased with the Generalissimo's answer, for he wrote to Admiral Leahy on 13 July: "Can you get this to the Joint Staff before we leave? There is a good deal in what the Generalissimo says." OPD drafted an answer for the President urging speed, but having been impressed by the Generalissimo's views, the President greatly modified it.
The President was happy to note that the Generalissimo agreed in principle. He urged the Chinese leader to have in mind the importance of speed and remarked that "some calculated political risks appear justified when dangers in the overall military situation are so serious and immediately threatening." Roosevelt then said he accepted the Generalissimo's suggestion about an American political representative in Chungking. The right man had to be chosen for the job, the difficulties had to be explored, and "in the meantime I again urge you to take all steps to pave the way for General Stilwell's assumption of command at the earliest possible moment."63
Thus in the President's answer Stilwell's assumption of command became second to the careful choice of a political representative. Mr. Roosevelt then departed Washington for a prolonged trip through the Pacific.
The Generalissimo's preparatory period, of unspecified length, would presumably be inaugurated with the arrival in China of a Presidential representative previously agreed on as satisfactory to both powers. The arrival of the proposed emissary, and the solution of the command question, were now postponed many weeks while the President visited San Francisco, Alaska, and Hawaii. The Generalissimo had won the first skirmish.
The Ledo Road Project Reduced
In the four-week interim during which the command question awaited the President's convenience, the carrying capacity of the great Ledo Road project was finally set by the War Department at a point which revealed that it was losing importance. General Pick was building the road as a two-way, graveled highway. Its capacity would depend on whether it was brought to two-way, all-weather standard all the way to China and whether enough men, vehicles, and operating facilities were assigned to it to permit fullest exploitation of its capacity. The QUADRANT Conference (Quebec, August 1943) had directed that it be able to transport 30,000 tons a month by January 1945, with an ultimate capacity of 96,000 tons a month.
As early as March 1944, the Asiatic Section of OPD had proposed that the building of the road stop at Myitkyina and that traffic from there simply use the existing road from Myitkyina to China after it had been brought to one-way, all-weather condition with a gravel surface.64 The proposal was shelved for the time being, but in July, with an American landing in the Philippines an ever more likely possibility, Brig. Gen. Patrick H. Tansey, Chief, Logistics Group, OPD, revived it, for he believed that by setting operation of the Ledo Road at a lower level than that originally intended, 35,000 men would be made available for use in the Philippines.
A variety of other arguments presented themselves to General Tansey. He believed that the trucks operating on the Ledo Road would lower by some 26,000 tons the amount of gasoline and lubricants otherwise available to the air force in China, that too many dry stores would be transported, that a great amount of shipping space would be wasted, all of which could be avoided if first priority went to maintaining and increasing air deliveries, building three pipelines to China, and simply improving the existing road from Myitkyina to Bhamo to permit movement of trucks and artillery.65
The AAF informally approved Tansey's suggestion on the ground that the principal advantage of the Ledo Road was that it permitted re-equipping the Chinese Army, which in the light of current plans could not be done in time for the Chinese to co-operate with the U.S. forces in the Pacific.66
These recommendations were strongly opposed by Army Service Forces. Consistent with the support for the Ledo Road always expressed by General Somervell, Maj. Gen. LeRoy Lutes, then Acting Chief of Staff, ASF, and Col. Carter B. Magruder, Acting Deputy Director, Plans and Operations, ASF, denied that the effort involved in bringing the road to two-way, all-weather standard from Ledo to Kunming was out of step with Pacific strategy or would waste American resources.
ASF's planners pointed out that strong Japanese resistance in the Pacific could throw off the whole timetable of Pacific operations. They believed that Japanese resistance on Saipan, where the Americans had landed on 15 June 1944, had already delayed contemplated operations by a month. If the American cross-Pacific advance was to be seriously delayed at any point, then the best possible line of communications to China might be of urgent importance and could not be improvised on the spur of the moment. Only 45,000 tons of equipment remained to be shipped to India, they argued, while maintenance personnel would be needed to keep even a passable trail from Myitkyina east.67
General Lutes doubted that any great savings in manpower would follow from limiting the capacity of the road beyond Myitkyina to one-way traffic. He believed that many of the men needed to build and operate the road at two-way standard beyond Myitkyina would be required to protect and operate the pipeline even though the road was limited in capacity. There were already enough troops allocated to CBI Theater, Lutes thought, to complete the road on the basis originally contemplated.68
OPD did not adopt ASF's point of view but recommended to General Marshall that the Ledo Road be strictly limited. Marshall's advisers told him that if the Ledo Road was opened to one-way traffic, enough equipment to meet the minimum needs of the U.S.-sponsored Chinese divisions could be delivered. OPD, though relaying ASF's fear that the Pacific operations timetable was too optimistic, did not share that view. It argued that by the end of 1945, which was the earliest date given for the completion of the road to a two-way standard, the United States would have either occupied Formosa or bypassed it and would be operating well beyond it, in either case free of any great dependence on operations from Chinese bases.69
Marshall approved the OPD study, and CBI Theater was told that it would receive no increase in personnel to build the road. Therefore, the radio went on, development of land communications to China should be limited to building a two-way, all-weather gravel road to Myitkyina and opening the existing trail from Myitkyina to China. Beyond Myitkyina it was thought desirable to improve the one-way, dry-weather trail so far as practicable with minimum essential permanent bridge construction and transport necessary to install and maintain one 6-inch and two 4-inch pipelines to China and to deliver trucks and artillery. The War Department would supply whatever CBI Theater lacked in resources to meet these goals.70 Thus the War Department, to conserve men for Pacific operations, sharply limited the tonnage that the Ledo Road would ultimately deliver to China.
Within Burma, however, the Ledo Road, which at the very beginning of 1944 was supporting the advance of the Chinese Army in India, carried hundreds of thousands of tons during the year, the bulk of them in vehicles belonging to tactical and service units. In 1944 Advance Section received at Ledo 497,590 tons, and forwarded only 224,804 tons, the difference being carried by organizational vehicles. Of the 224,804 tons forwarded by Advance Section, about half went by road and half by air. Counting both the tonnage moved in organizational vehicles and the road-hauled half of the forwarded tonnage, the Ledo Road was thus the means of transporting about 75 percent of the supply tonnage for the North Burma Campaign.71 (Chart 7)
Slow Progress Across the Salween
Whatever the War Department in Washington might contemplate in the way of any future development of the Ledo Road, that development necessarily waited on driving the Japanese from north Burma. The Generalissimo's resolution to continue his part in the attempt might not hold against a crisis in east China. It was perhaps significant that he was not sending any replacements to keep the Y-Force up to strength. As for Stilwell, from May to August, he did not attempt to take any part in the Salween campaign, for as will be recalled, he had no command powers over any Chinese troops in China; along the Salween front Gen. Wei Li-huang might well resent any guidance from Stilwell. But Stilwell could not escape the consequences of what happened on the Salween or in east China, for events in those areas would be the background to President Roosevelt's attempt to put Stilwell in command of the Chinese Army.
CHART 7--Tonnage Forwarded by USAF SOS CBI Advance Section to North Burma: January 1944-May 1945

After the Chinese fell back from Lung-ling in late June, XX Group Army on the north and XI Group Army on the south were faced by one minor and two major strongpoints. (See Map 19.) Northernmost of these was the old walled city of Teng-chung, with some 20,000 inhabitants. The configuration of the land would permit building a road from China to Myitkyina via Teng-chung, and the Burma Road Engineers had such plans prepared. In the center of the Salween front, Japanese artillery on Sung Shan, or "Pine Mountain," commanded the Burma Road's descent into the Salween gorge. In their initial drive on Lung-ling the Chinese had bypassed Sung Shan and relied on air supply. Failure to take Lung-ling had left a considerable Chinese force in an uncomfortable position on the Japanese side of Sung Shan. If the Japanese could be driven off Sung Shan, supplies could roll right down the Burma Road. Twenty-five miles southeast of Lung-ling was Ping-ka, the obscure valley where cholera lay in wait, and where a Chinese regiment and a Japanese battalion waited out the war in endless small bickerings. Ping-ka was obscure, but the 56th Division had far fewer battalions than General Wei had regiments and had committed one of these few to Ping-ka.
Since General Wei had left a containing force below Sung Shan while he tried to take Lung-ling, there had been an artillery duel between the Japanese guns on Sung Shan and the Chinese since early in the campaign, while the Chinese had launched unsuccessful attacks in regimental strength. As June 1944 ended, this situation continued while Dorn and Wei considered the problems presented by the setback at Lung-ling. To the north, at Teng-chung, a new
phase in the campaign was clearly opening, for the Chinese patrols were closing the exits from Teng-chung valley as XX Group Army deployed before the wall of that strong city.72
The outer defenses of Teng-chung were pillboxes covering every avenue of approach, supported and covered by the 6,500-foot-high, fortified mountain peak of Lai-feng Shan, "The Place Where the Birds Come." Here were 600 or more Japanese with most of the garrison's artillery. Teng-chung itself was girdled by a massive wall of earth that in some places was forty feet high and sixty feet thick at the base, faced throughout with great slabs of stone. Chinese necromancers had carefully laid out the wall in a great square to cut the cardinal points of the compass. Each side had a gate, and each gate now had a Japanese command post, while Japanese machine guns and rifles swept the approaches to the wall, its face, and its parapets. Within the city were about 2,000 Japanese. In all, Colonel Kurashige, who had defended the Kaoli-kung mountains, had about 1,850 Japanese, a heavily reinforced battalion combat team built around the 2/148. Kurashige's orders were to hold Teng-chung until the Chinese threat to Lung-ling passed.73
The XX Group Army's five divisions fanned out around Teng-chung by occupying the heights. With them were the 40th, 47th, and 48th Portable Surgical Hospitals of the U.S. Army. The attack began on 26 June when B-25's of the 341st Bomb Group flew from Yun-nan-i in an attempt to breach the wall. This and subsequent attempts at medium-level bombing, though wreaking severe damage on the residential area--after a bombing on 14 July fires could be seen for thirty-five miles--merely piled rubble around the Japanese positions in Teng-chung. Secure in their dugouts, the Japanese were unshaken, and the airmen turned to skip bombing, trying to hurl their bombs directly against the face of the wall.74
CHINESE INFANTRYMEN REST ON LAI-FENG SHAN after an attack on Japanese dugouts.

The Chinese infantry attack began on 2 July, when the 116th Division moved against the northwest side of the Japanese defenses. Making the assault in a torrential monsoon downpour, the 348th Infantry, 116th Division, took seven pillboxes on a height four and a half miles northwest of the city. From here the 348th Regiment looked down on the Japanese positions between it and the city. By the end of the first week of July the five Chinese divisions had Teng-chung encircled. Several days later, when a Chinese patrol from XX Group Army made contact with scouts from XI Group Army at the Man-lao Bridge, Teng-chung was again cut off from the Japanese at Lung-ling. Off to the northwest, Chinese guerrillas working for Y-Force occupied a village twenty-six miles from Teng-chung and only the same small distance from Kachin tribesmen, fighting for Stilwell, who had occupied Fort Harrison (Sadon) in Burma.
A week of perfect weather in mid-July permitted XX Group Army to seize an airstrip southwest of Teng-chung, to restore its supplies by air delivery, and to move its lines on the southeast up to easy pack howitzer range of the walls. Here the advance was slowed by fire from Colonel Kurashige's howitzers atop Lai-feng Shan. The Chinese turned to reconnoiter the mountain more carefully and discovered the Japanese had sited their defenses on the slopes facing Teng-chung. The reverse side had much defiladed ground and no entrenchments.
Since Gen. Huo Kwei-chang of XX Group Army had the bulk of his strength on the high ground to the south of Teng-chung valley, he found it comparatively easy to mass three divisions against the weak side of Teng-chung, while one more division was to get a foothold in the city itself. Then the rain closed in again and operations had to await clearing weather. In a week the storms lifted, and at noon of 26 July the first of four waves of P-40's and B-25's hit the northeast wall of Teng-chung and the summit of Lai-feng Shan.
THE WALLED CITY OF TENG-CHUNG after an air attack that breached the wall (arrow).

The Chinese attack that followed revealed that previous experiences with Japanese positions had not been wasted. The Chinese infantry moved off quickly, on time, and as whole regiments rather than squads committed piecemeal. Mortar and artillery fire was brought down speedily on suspected Japanese positions, and the infantry took full advantage of it by advancing again the minute it lifted. Having taken one pillbox, the Chinese infantry kept right on going rather than stopping to loot and rest. At nightfall they were on top of the mountain and had taken a fortified temple on the summit. After mopping up the next day, the Chinese tallied about 400 Japanese dead. They themselves had lost 1,200. Nevertheless, the speedy capture of Lai-feng Shan was a brilliant feat of arms and dramatic evidence of the capabilities of Chinese troops when they applied proper tactics.
The simultaneous attack on the southeast wall of Teng-chung did not carry across the massive wall, but the Chinese had a firm foothold in the scraggly collection of mud huts just outside the wall which an ancient Greek, a Roman, or a medieval townsman would recognize at once as the sort of suburb so many of his cities had. The Japanese fought stubbornly in defense of these tenements, but making bold use of their lend-lease 37-mm. antitank guns the Chinese knocked down one hut after another at point-blank range. Casualties mounted in this bitter infighting; American medical aid was of great utility. The commander of the 130th Division, who had seen considerable action against the
Japanese, remarked that at Teng-chung his men seemed to fight harder because they knew they would have good medical attention if they were wounded.
Configuration of the ground suggested the southeast as the most logical avenue of approach and the principal Chinese effort was now directed there. On 2 August twelve B-25's breached the southeast wall in five places. Direct hits hammered out a gap fifteen feet wide, but the displaced earth and rubble were still a strong barrier and the Japanese did their best to mend the breach and cover it with machine gun fire. The Chinese needed something more and this was supplied by five waves of P-40's and P-38's which strafed the wall at twenty-minute intervals. This permitted the Chinese to place their scaling ladders against the wall. By this means, one company from the 107th and 348th Infantry Regiments reached the top of the wall just east of the south corner at 1500 hours on 3 August. The Japanese strove to drive them out but the Chinese clung to their advantage, and one lone platoon held fast all during the night. Next morning, Chinese reinforcements moved through the breaches, entered Teng-chung, and took a pillbox inside the city. Barring an attempt at rescue by other elements of the 56th Division, or a change of heart by the Chinese, the capture of Teng-chung was inevitable. The important question was, as General Huo's men crossed the walls on 4 August, how long would it take to capture Teng-chung?
The Battle for Sung Shan
Since the Chinese attempt to cut the center out of the Japanese position on the Salween by taking Lung-ling had failed, the attention of the Chinese commanders had shifted from Lung-ling to Sung Shan.75 The hill mass of Sung Shan dominated the area where the Burma Road crosses the Salween and so barred the direct approach from China down the Burma Road. The Chinese had invested it with a containing force in their initial drive on Lung-ling. That drive had been supplied by air, and now that the Chinese were stalled between Lung-ling and Sung Shan, air supply was not too adequate, and clearing the Japanese from Sung Shan appeared essential.
Sung Shan (the name Pine Mountain applies to its highest peak) is an intricate hill-mass rising to 3,000 feet above the Salween gorge. It is roughly triangular in shape. The Burma Road, in climbing out of the Salween gorge, runs along the northeast side of the triangle, angles sharply round its northern tip, then runs back down along the northwest side of the triangle. In all, thirty-six
miles of the Burma Road were dominated by the Japanese guns on Sung Shan. Time did not permit building a cutoff road to bypass Sung Shan. The Japanese defensive system, manned by some 1,200 men under Maj. Keijiro Kanemitsu, was built around elements of the 113th Infantry, supported by a battalion of mountain artillery, some transport troops, and some engineers. Of the 1,200, only 900 were effective.
In June, during the containing phase, the Chinese had assembled seven 150-mm. howitzers, two 75-mm. howitzers, and two 76-mm. field guns. Later joined by some pack artillery, and directed by an American artillery observer in a liaison plane, the Chinese cannoneers dueled with Major Kanemitsu's gunners. Finally, the Japanese howitzers ceased to fire on the Burma Road Engineers and the Chinese who were preparing to rebuild the Burma Road bridge over the Salween. Now safe, the engineers proceeded with their rebuilding. During this same containing phase, the Chinese New 28th and New 39th Divisions had made attacks in regimental strength against Sung Shan. On 15 June, they succeeded in taking a peak at the southeast corner of the triangle, but failed to take its twin at the southwest corner, two miles away. Other Chinese attempts failed, though heavy casualties were taken in the attempt.
As the period of containment merged into one of preparation for all-out attack, General Wei's hand was strengthened by the arrival of the 8th Army (the Honorable 1st, the 82d, and 103d Divisions). Originally stationed on the Indochina border, it had begun to arrive in battalion increments at the time of the Chinese setback at Lung-ling. The 8th Army had some lend-lease equipment, but only two thirds of its officers had been exposed to Y-FOS training efforts. The relief of the New 28th Division by the 3d Infantry, Honorable 1st Division, on 27 June was not well co-ordinated, for the Japanese were able to reoccupy the positions the New 28th Division had taken in June. Japanese also filtered through the Chinese lines to reinforce Sung Shan, and as further evidence of Japanese determination, on 28 June Japanese aircraft for the first time appeared over the Salween front. A reconnaissance aircraft, three fighters, and two transports circled Sung Shan and made a supply drop, some of which fell in the Chinese lines.
Accompanied by Y-FOS personnel under command of Col. Carlos G. Spaht, the 8th Army assembled east and south of Sung Shan and set 5 July for the attack. The Chinese artillery fired a nightlong preparation, and at dawn of 5 July two Chinese regiments attacked but not in strength. A few positions were overrun, the Japanese counterattacked, and at nightfall the Chinese were back in their initial positions, minus seventy dead. Colonel Spaht reported to Dorn that teamwork between the demolition squads and the assault teams had left much to be desired, that further training was badly needed.
The 8th Army's next attempt was made by the 246th Regiment the night of 7-8 July. It was directed against the southwest corner of the triangle and surprised the Japanese defenders of Kung Lung-po peak. By midnight the
Chinese had all Japanese strongpoints in their hands, but shortly after midnight the Japanese counterattacked over what was for them familiar terrain and drove off the 246th Regiment, inflicting more than 200 casualties. Y-FOS' observers reported that the Chinese grew quite confused during the night fighting and often shot at one another. The 246th Regiment had to be replaced by the 307th Regiment. The 307th faced what was for them a new Japanese defensive tactic between 10 and 12 July. Since the Chinese in climbing up the hills tended to bunch along the easiest routes to the top, the Japanese used their machine guns to keep the Chinese huddled down in the natural cover the hill afforded, then hurled grenades and mortar shells into the parties of Chinese. Such tactics were of deadly efficiency, and so the 8th Army brought up another regiment to reinforce the battered 307th.
[Photograph: CHINESE TROOPS ON KUNG LUNG-PO PEAK]
Two weeks passed before the 8th Army again essayed an attack on Sung Shan. This time, instead of piecemeal attacks by a regiment or two, 8th Army prepared the attack by moving its howitzers up to pound Japanese positions at from 1,500 to 3,200 yards with direct fire. When the Chinese attacked with three regiments, on the morning of 23 July, the division commander of the 103d personally directed the 75-mm. fire, and on occasion placed shells twenty-five to forty feet in front of the assaulting Chinese. Captured Japanese diaries
contained praise of the artillery and of the 103d Division's valiant infantry. This well-led, co-ordinated attack succeeded and by dawn the Chinese were in Japanese positions almost at the crests of the two peaks Kung Lung-po and Tayakou. Alarmed by the successful Chinese artillery fire, Major Kanemitsu on 26 July pleaded for Japanese air support to attack the Chinese batteries, which had been emplaced in the open to use direct fire. Japanese fighters promptly responded, and machine-gunned the Chinese cannon and crews. The damage plus the moral effect halted the Chinese attack for a week, until 3 August.
When the 308th Regiment resumed the advance on 3 August it had flame throwers which it used with devastating effect to take the crest of Kung Lung-po. There the Chinese found several Japanese tankettes, which had been dug in for use as pillboxes. When the Japanese failed to make their usual prompt counterattack Y-FOS personnel surmised they might be short of ammunition. This was so, and Major Kanemitsu decided to raid the 8th Army's artillery positions and supply dumps to replenish his supply. Seven parties of Japanese volunteers struck during the night of 9 August, destroying several howitzers and taking away all the light weapons and ammunition they could carry.
At this time, Burmese civilians, who had been impressed into the Japanese service as laborers and who were found hiding in Japanese dugouts, estimated that Kanemitsu had 700 men, most of them wounded or starving. Actually, he now had but 300, including sick and wounded.
Having tried attacks by night, during rainstorms, and by surprise, none of which had quite succeeded and all of which had taken precious time, the Chinese now decided on a return to more formal siegecraft. With technical advice from Y-FOS engineers, the Chinese on 11 August began digging under what seemed the key to the Japanese positions that remained in the Sung Shan triangle. Significant of the closeness of the fighting, the tunnels needed to be but twenty-two feet long to put the mines in place under the Japanese pillboxes. One mine held 2,500 pounds of TNT, the other 3,500 pounds.
The mines were fired on 20 August at 0905 and the resulting destruction was quickly exploited by engineers armed with flame throwers. In one pillbox forty-two Japanese were buried alive, of whom five were rescued. The prisoners stated that they had been asleep and had never suspected that they were being undermined. At 0920 the 3d Regiment against light opposition took the few strongpoints that remained on Sung Shan proper. Kanemitsu's men still held out in scattered pockets about the triangle. These launched desperate counterattacks on 21 and 22 August. That of the 22d produced particularly bloody fighting in which the Chinese lost many company grade officers.
After the failure of these counterattacks there was nothing left but mopping up. Actually, since the completion of the new Salween bridge on 18 August and the mine blast on the 21st, the rest was anticlimax, even Major Kanemitsu's death on 6 September, and the macabre ceremony the next day when the
Japanese burned their colors and slew their wounded. Of the 1,200 Japanese on and around Sung Shan, 9 were captured, and 10 were believed to have escaped. The significance of Sung Shan lay in the fact that it had cost the Chinese 7,675 dead to clear that block from the Burma Road, of which some 5,000 were from the 8th Army, leaving it but two understrength regiments fit to fight for Lung-ling.
As August waned, the Generalissimo was committed "in principle" to giving Stilwell command in China. Events along the Salween did not suggest there would be any speedy relief for China by a victory on that front, while in east China the Japanese had not as yet met effective resistance. Delay in breaking the blockade of China and in setting up an effective barrier to Operation ICHIGO in east China meant still further deterioration in China's military and political situation. Defeats in the field place great strain on coalitions; events on the Salween and south of Changsha would be felt as far away as Washington.
1. Stilwell's Mission to China, Ch. V.
2. CM-OUT 31202, JCS to Stilwell, 2 May 44. For details of the JCS directive, see pages 201-02, above.
3. Rad DTG 240240Z, Stilwell to Marshall, 24 May 44. Item 2740, Bk 7, JWS Personal File.
4. (1) CCS 319/5, 24 Aug 43, sub: Final Rpt [QUADRANT] to President and Prime Minister. See Stilwell's Mission to China. (2) CCS 417, 2 Dec 43, sub: Plan for Defeat of Japan. (3) CCS 397 (Revised), 3 Dec 43, sub: Specific Opns for Defeat of Japan, 1944. See p. 75, above.
5. (1) For background on the JCS proposal to change Mountbatten's directive, see Chapter V, above. (2) Rad WAR 42202, Marshall to Stilwell, 27 May 44. Item 2562, Bk 7, JWS Personal File.
6. (1) See Ch. VIII, above. (2) Japanese Study 129. (3) Ltr, Stilwell to Marshall, 27 May 44, sub: Discipline. SNF 31. (4) Rad CHC 1123, Stilwell to Marshall, 30 May 44. SNF 131.
7. Rad WAR 47843, McNarney to Stilwell, 8 Jun 44; Rad CHC 1175, Stilwell to McNarney, 9 Jun 44. SNF31.
8. (1) Rad CAK 2773, Chennault to Stilwell and Stratemeyer, 29 May 44. The Chennault-Wedemeyer Letter, Item 39. (2) Ltr, Chennault to Stilwell, 29 May 44. SNF 31.
9. Rad CFB 17887, Ferris to Stilwell, 28 May 44. Item 2565, Bk 7, JWS Personal File.
10. Rad, Stilwell to Ferris, 28 May 44. Item 2584, Bk 7, JWS Personal File.
11. Rad CFB 17969, Marshall and Sultan from Ferris signed Stilwell, 31 May 44. Item 2571, Bk 7, JWS Personal File. The gist of Ho's warning was repeated by him in Ltr, Ho to Stilwell, 31 May 44. Item 2574, Bk 7, JWS Personal File. Ho added a prediction that the Japanese would attack in India and Burma, at which Stilwell snorted: "Ask him for me how he intends to defend Changsha and Hengyang and I will tell him how we will defend Mogaung." Rad CHC 1136, Stilwell to Ferris, 2 Jun 44. Item 2581, Bk 7, JWS Personal File.
12. Aide Mémoire to President Roosevelt (delivered by General Shang, 31 May 1944). China File (Hurley), Item 61, OPD Exec 10.
13. Rad CAK 2981, Chennault to Stilwell, 2 Jun 44. Item 2583, Bk 7, JWS Personal File.
14. Memo, Generalissimo for Stilwell, 3 Jun 44. Item 2584, Bk 7, JWS Personal File. The memorandum went out at once to Stilwell as an "eyes alone" radio.
15. Rad CFB 18134, Ferris to Stilwell, 3 Jun 44. Item 2587, Bk 7, JWS Personal File.
16. Rad, Dorn to Stilwell, 4 Jun 44. SNF 31.
17. The Stilwell Papers, p. 301.
18. (1) Memo for Record, 5 Jun 44, sub: Conf between CKS and Stilwell. Stilwell Documents, Hoover Library. (2) The Stilwell Papers, p. 302.
19. Rad CFB 18251, Stilwell from Ferris to Fwd Ech NCAC, Hq Rr Ech, Hq AAF India-Burma Sec, ATC Chabua, Humpco Chabua, Fourteenth AF, Kweilin Z-Force, Kunming Y-Force, Delhi SOS, Ramgarh, and Chabua SOS, 6 Jun 44. Item 2595, Bk 7, JWS Personal File.
20. Rad CFB 18238, Stilwell to AGWAR for Marshall, 6 Jun 44. Item 2600, Bk 7, JWS Personal File.
21. Memo, sgd E.G., 6 Jun 44. The Chennault-Wedemeyer Letter, Item 44.
22. (1) Plan for Defense of East China. The Chennault-Wedemeyer Letter, Item 45. (2) Table 5 shows Fourteenth Air Force aircraft inventory for this period.
23. Rad WAR 47296, Marshall to Stilwell, 7 Jun 44. Item 2603, Bk 7, JWS Personal File.
24. Rad CHC 1173, Stilwell to Marshall, 8 Jun 44. SNF 131.
25. (1) Quotation is from the United States Strategic Bombing Survey, The Effects of Strategic Bombing on Japan's War Economy, p. 45. (2) The figure on tonnage dropped is from the survey's The Strategic Air Operation of Very Heavy Bombardment in the War Against Japan (Washington, 1946), Chart 8. (3) The B-29 missions are listed in Chapter VI, Section Two, History of CBI.
26. Japanese Study 78.
27. (1) Japanese Studies 78 and 129. (2) History of Z-FOS.
28. (1) Memo, G-4, Z-FOS, for CofS, Z-FOS, Jun 44. AG (Z-FOS) 337, KCRC. (2) History of Z-Force. (3) Hq Kwangsi Comd, Chinese Combat Comd (Prov), U.S. Forces, China Theater, Campaign in Southeastern China, MS in possession of Brig Gen Harwood C. Bowman, USA (Ret), Montgomery, Ala. (hereafter cited as Campaign of Southeastern China), p. 4.
29. Summary of Data; Memo, Chennault for Stilwell, 20 May 44, sub: Activation of AGFRTS. The Chennault-Wedemeyer Letter, Incl I, par. 22, and Incl II, Item 4.
30. Japanese Studies 78 and 129.
31. (1) Rpt, Col Thomas J. Heavey, U.S. Observer, IX War Area, to Gen Lindsey, 26 Jun 44. AG (Z-FOS) 210.684, KCRC. (2) Japanese Study 78.
32. (1) For details of the movement of U.S. personnel from the Kweilin area, see Z-FOS Journal. KCRC. (2) Rad CFB 19135, Ferris to Stilwell, 26 Jun 44. The Chennault-Wedemeyer Letter, Item 51. (3) Rad CFB 19119, Ferris to Sultan, 25 Jun 44. Item 2653, Bk 7, JWS Personal File.
33. Rad CFB 17559, Ferris to Stilwell and Sultan, 19 May 44. Item 2544, Bk 7, JWS Personal File.
34. Rad CFB 18945, Ferris to Stilwell, 21 Jun 44. Item 2634, Bk 7, JWS Personal File.
35. Memo, Ferris for Stilwell, 23 Jun 44, Item 299, Bk 3, JWS Personal File.
36. Rpt, Davies to Stilwell, undated. SNF 31.
37. (1) U.S. Department of State, United States Relations with China, pp. 556-57. (2) Wallace-Alsop Testimony, U.S. Senate Judiciary Subcommittee, Senator Pat McCarran, Chm, 17-18 Oct 51.
38. U.S. Department of State, United States Relations with China, pp. 556-57.
39. Ltr O, Hq Fwd Ech USAF CBI to Barrett, 21 Jul 44, sub: Dispatch Observers Sec to Areas Under Control of Chinese Communists. DIXIE Mission, Vol. I, Sec VI, Item 73. OCMH.
40. (1) Rad New Delhi 472, Wallace to Roosevelt, 28 Jun 44. Item 58, OPD Exec 10. (2) Wallace-Alsop Testimony, cited n. 37(2).
41. (1) See p. 119, above. (2) Rad 155, F.M.D. [Field Marshal Dill], JSM to Mountbatten, 10 Mar 44; Rad SAC 1022. Mountbatten to COS and JSM, 13 Mar 44. Item 66, OPD Exec 10.
42. Ltr, Wedemeyer to Marshall, 9 Jul 44. Item 57, OPD Exec 10.
43. Mountbatten Report, Pt. B, pars. 171, 172.
44. Ltr, Mountbatten to Dill, 26 Jun 44. Item 57, OPD Exec 10.
45. Interv with Marshall, 6 Jul 49.
46. CM-OUT 53610, 20 Jun 44. Case 404, OPD 381, A47-30.
47. Rad CHC 1215, Stilwell to Marshall, 22 Jun 44. SNF 131.
48. (1) Quotation from Interv cited n. 45. (2) See General Okamura's comments, p. 316, above, and Japanese Officers' Comments, Incl 3.
49. Memo, Handy for Marshall, 30 Jun 44. Item 869, Msg Bk 20, OPD Exec 9.
50. Rad WAR 59012, Marshall to Stilwell, 1 Jul 44. Item 2674, Bk 7, JWS Personal File.
51. The authors could find no trace of the offer to which Stilwell refers here.
52. Rad CHC 1241, Stilwell to Marshall, 3 Jul 44. SNF 131.
53. (1) Stilwell's Mission to China. (2) Interv cited n. 45.
54. (1) Item 57, OPD Exec 10. (2) For details of use of superior air power in Italy, see Sidney T. Mathews, The Drive on Rome, a volume in preparation for this series.
55. (1) See Ch. II, above. (2) CM-OUT 61514, Marshall to Stilwell, 7 Jul 44.
56. CM-IN 7045, Stilwell to Marshall, 9 Jul 44.
57. (1) Received in Chungking as a radio, WAR 6080, Roosevelt to Generalissimo, 6 July 1944, this message was presented to the latter as Memorandum 214, Stilwell for the Generalissimo, on 6 July 1944. Item 2676, OKLAHOMA File, JWS Personal File. (2) See Ch. VIII, above.
58. WD SO 109, 9 Aug 44, par. 1. Stilwell's date of rank in the Army of the United States was 1 August 1944.
59. On 4 December 1944, the Generalissimo stated that national troops had taken no part in the east China fighting. Min, 12th Mtg, Wedemeyer with Generalissimo, 4 Dec 44. Bk 1, Generalissimo Minutes, 1-69, 13 Nov 44-15 Jul 45, Job T49-20 CBI.
60. (1) Stilwell's Mission to China, Ch. X. (2) For the Soong-Hopkins relationship, see Books VII and IX, Hopkins Papers.
61. In the Washington Post, July 25, 1951, Alsop stated: "[John P.] Davies was the political adviser of Gen. Joseph W. Stilwell; I was the adviser of Dr. T. V. Soong and Maj. Gen. C. L. Chennault."
62. Item 60, OPD Exec 10.
63. (1) Memo, Roosevelt for Leahy, 13 Jul 44. Item 59, OPD Exec 10. (2) Rad WH 25, Roosevelt to Generalissimo, 13 Jul 44, presented as Memo 215, Ferris for Generalissimo, 15 Jul 44. Item 2677a, OKLAHOMA File, JWS Personal File.
64. (1) See Ch. I, above. (2) Memo, Chief, Asiatic Sec, Theater Gp, OPD, for Chief, Logistics Gp, OPD, 25 Mar 44, sub: Opn Plan, Burma-Myitkyina-Kunming Road, Project TIG-IC. ABC 384 Burma (8-25-42) Sec 7, A48-224.
65. Memo, Tansey for Handy, 16 Jul 44, sub: Projects TIG-IA and TIG-IC, Ledo Road--Construction and Opn Phases. ABC 384 Burma (8-25-42) Sec 7, A48-224.
66. Memo, Col Donald W. Benner, Chief, Air Services Div, Office Asst Chief of Air Stf, Matériel and Services, for Tansey, 25 Jul 44, sub: Projects TIG-IA and TIG-IC, Ledo Road--Construction and Opn Phases. ABC 384 Burma (8-25-42) Sec 7, A48-224.
67. Memo, Somervell (sgd by Magruder) for ACofS, OPD, 24 Jul 44. ABC 384 Burma (8-25-42) Sec 7, A48-224.
68. Memo, Actg CofS, ASF, for ACofS, OPD, 29 Jun 44. ABC 384 Burma (8-25-42) Sec 7, A48-224.
69. Memo, Handy for Marshall, 14 Aug 44. ABC 384 Burma (8-25-42) Sec 7, A48-224.
70. CM-OUT 85479, OPD to Stilwell 23 Aug 44.
71. Rpt, Gen Pick, CG, Advance Sec USF IBT, to Gen Wheeler, CG, USF IBT, 9 Aug 45; sub: Rpt on Stilwell Road, pp. 97-102. OCMH.
72. (1) See Ch. IX, above. (2) Y-FOS Journal.
73. Japanese Study 93.
74. Apart from using the Y-FOS Journal, Y-FOS 1944 Historical Report, and Japanese Study 93, this portion of the Salween campaign is based on the following sources: (1) War Dept Combat Film 26, Signal Corps Film Library, Washington. (2) Stodter Report. (3) Hist Rpt, Hq 69th Composite Wing, 27th Tr Carrier Sq, and 19th Ln Sq, Fourteenth AF. USAF Hist Div. (4) SOS in CBI, App. I, History of Burma Road Engineers.
75. In addition to the Y-FOS Journal, Y-FOS 1944 Historical Report, Japanese Study 93, and Japanese Officers' Comments, sources consulted for this section are: (1) Rpt, Col Carlos G. Spaht, CO, U.S. Ln Gp, 8th Army, to Dorn, 29 Jul 44. AG (Y-FOS) 319.1. (2) Interv with Spaht, Baton Rouge, La., 1 Oct 48. (3) Of the six Chinese Armies to participate in the Salween campaign, the 8th Army prepared the only detailed and frank account of its role. This translated history, including tactical maps, is among the papers of Colonel Spaht. (4) Ltrs, Spaht to authors, 24 May, 29 Jul, 24 Sep, 2 Oct, and 28 Oct 47. OCMH.
J Abernathy & Co Stonemasons Lathe 1881
This link appeared in an Australian machinery forum today:
Granite Lathe - MORUYA ANTIQUE TRACTOR & MACHINERY ASSOCIATION (Inc).
It shows the large stonemason's lathe which made the granite columns for many of Sydney's public buildings from the 1880s on. It worked until the 1960s, and could turn granite columns up to twenty one feet in length. It's good to see that it has survived and is being cleaned up for display. The link also includes photos of some of the public buildings in which the columns were used, and a photo of one of the General Post Office building columns being turned.
Anyone have ideas on the tool bits used and how frequently they would be consumed on such a beast turning granite in the late 1800s?
The following websites have interesting articles on this type of machine. They even include speeds and feeds. Around the turn of the 20th century, a similar machine was used to make the columns for St. John the Divine Cathedral in New York.
Originally Posted by doug8094
Tools and Machinery of the Granite Industry, Part IV | Chronicle of the Early American Industries Association, Inc., The | Find Articles at BNET
Thanks for posting this. It's good to be able to tie a machine tool in with specific items of work!
Incidentally, it's J Abernethy, not Abernathy, of Aberdeen, Scotland.
I can't view the Google links posted by Dinosaur, but his first link refers to turning by pressing a free-running wheel against the granite to remove material by a crushing action, and finishing using a carborundum grinding wheel.
I have a book called Historic Industrial Scenes - Scotland by Donnachie, Hume & Moss (another of Peter S's tips), and this has a photo c.1907 of a very substantial lathe turning a block of granite. This has a round wheel of some sort held by a stout arm in the tool post. Thanks to Dinosaur, I now know what it is!
Incidentally, Aberdeen was known as the granite city, but recent public buildings there have been clad with imported granite!
J. Abernethy & Co. of the Ferryhill Foundry in Aberdeen produced an absolutely enormous range of castings and machinery. They could design and completely build machinery for almost any industry. They were the world's largest maker of granite working machinery including cranes and lathes etc. They closed their foundry in the 1960's.
From the 1820's until the 1930's Aberdeen was the largest granite producing area in the world with 25,000 men employed in the industry in Aberdeen and the surrounding area.
Whole families of granite workers were recruited to move to Moruya in Australia to work the granite for the Sydney Harbour Bridge in the 1930's.
This lathe must have been moved from Aberdeen to Moruya at this time.
This was probably easier than the first plan which was to ship the finished granite from Aberdeen until they found suitable rock at Moruya.
Last year I had to cut cast iron samples from an 1861 Abernethy sectional railway bridge for tensile/strength testing for insurance purposes. (These bridges are still in use in their hundreds on Scottish railway lines.) The testing lab said in their report that they were the best samples of cast iron they had tested in the 45 years they had been in business.
Great link. I was engrossed as much by the technical history as I was by the cultural history.
Originally Posted by Buchanman
The lathe had been in Australia for many years before the Sydney Harbour Bridge was built. The photo of the General Post Office building which appears in the link was taken about 1901.
General Post Office (Sydney) - Wikipedia, the free encyclopedia
I am sorry that the link does not work for you in the UK.
If you go to Google and search: granite lathe St. John the Divine
it should bring you to the sources where the link is supposed to take you.
As the General Post Office was built in the late 1860's I doubt that this particular lathe was used to make the columns as it has a manufacturing date of 1881.
When I was conducting some family history research I found reference to Aberdeen granite workers and their families complete with tools and machinery moving to Australia in the 1920's (not 1930's as I previously mentioned) to work the Moruya granite for use on the Sydney Harbour Bridge.
After the bridge was built, most of these families moved back to Aberdeen but some remained in Australia so some North East Scotland families have branches in Australia as well as Aberdeenshire.
On a connected note I worked for a time with a company that had been an offshoot of The Consolidated Tool Company of Fraserburgh, Scotland. (about 45 miles from Aberdeen). When they were clearing out an old document cupboard they found the original 1923 order from The Sydney Harbour Bridge Company for the CPT compressed air rivetting hammers that were supplied to fix in place the millions of rivets that were used.
So not only men with granite working experience but machinery from North East Scotland was exported to the other side of the world to build this iconic structure.
Thanks for the bit of history about the Aberdeen families who came to Australia to work on the Sydney Harbour Bridge pylons.
You said "As the General Post Office was built in the late 1860's I doubt that this particular lathe was used to make the columns as it has a manufacturing date of 1881." Stage 1 of the GPO Building was finished about 1874, and stage 2 some time later. It would appear that the lathe shown on the linked web site was indeed the one used to turn the columns for the GPO, Queen Victoria Building and many other buildings constructed in Sydney in the late 1800s and early 1900s.
The proceedings of the Eurobodalla Shire Council Works and Services Committee on 13/10/09 shown on page 7 of this rather voluminous document:
give a more detailed history of the lathe, and record the agreement to accept a Government grant of $35,909 to move the lathe from Forbes to Moruya and provide a shed to accommodate it because of its historical significance to the town.
If you compare the details in the photo of the lathe now at Moruya and the one shown turning the GPO column they are so similar that they are almost certainly the same machine, and the Council proceedings would seem to confirm this.
In what looks to be the official website of the Moruya Antique Tractor & Machinery Association (inc.), the company name is given as Abernathy, with an 'a'.
Hi, Franco and Marty Feldman.
It does indeed appear that the lathe at Moruya must have been supplied new to Australia in 1881. They supplied these lathes all over the world, including the USA, Germany, France and there is reputed to be one still in use in Russia.
The name is spelt "Abernethy", not "Abernathy"; perhaps the cast plate on the Moruya lathe is a bit indistinct, but that is very unusual for an Abernethy casting.
Drain covers and branders can still be seen in older parts of Aberdeen with "J. Abernethy & Co. Ferryhill Foundry. Aberdeen" cast into them.
On the opposite bank of the River Dee less than 1/2 a mile from the Abernethy's Foundry was the factory of Harper & Co. who were the world's largest manufacturer of flat belt and rope pulleys.
When the British Standards for pulleys were published they were an exact copy of Harper's specifications!
I cannot open Dinosaur's link for the turning tools, but here goes for my contribution on granite turning. Some 30 years back, I was in Aberdeen and noticed an old factory with the name on its side, The Aberdeen Granite Turning Company. I think from memory it was up near the prison, another handsome building.
However, in the late 1960s/early 1970s period the Glasgow shipbuilding firm of Alexander Stephens & Co of Linthouse were looking for further avenues of work to pursue, and they bought over the goodwill of an Aberdeen granite firm. I wonder if that was this firm I have just mentioned? At the time of the closure of Stephens, I was in the Linthouse engine works; by that time the plant had by and large all gone, but outside in another fair-sized building was Stephens' granite turning dept. One of the lathes I believe was an Abernethy; the rest were somewhat beat-up large conventional geared head engine works lathes, and I remember thinking the beds were in a poor and somewhat water-stained state.
At that period, I could not believe that the tools on the toolposts could work; they by and large just looked like a portion of boiler plate profiled to a circle, with the edge angled away. Still lying about were various nice turned workpieces, just left as the workers had lifted their jackets and bid the concern a fond farewell, as they contemplated a dismal future! It is equally of note that the original Stephens family were from the east coast of Scotland.
In Edinburgh also the craft of granite turning was carried out by some of the old paper machinery builders, and I think the firm of Mclean and Gibson of Glenrothes may also still do this work, as I believe they are still in the paper machinery trade.
My local museum has an east coast stone planing machine lying in the yard. It was constructed by The Anderson Grice Co. of Carnoustie, better known for very fine steam & electric cranes; they were still building steam derricks till into the 1970s.
Whilst I am here (as an aside), my daughter and her husband and child live in Dunfermline in Fife, back through east! About half an hour ago, he told me in a phone call that the handsome old mill building of the former Winterthur Silk Spinning Co. (yes, a branch of a fine Swiss firm; the mill closed I believe in the mid sixties and was until a year ago home to a large furniture retailer) was seriously damaged by fire last night, no doubt today's scourge of vandalism. This firm made the most fine silk for the wedding dress of our present monarch, Queen Elizabeth the Second. It is sad how nice things are vanishing very fast.
I did a search on Anderson-Grice and came up with this photo of one of their circular saw blades, and how the haulage contractor transported it with his Leyland Comet truck:-
Angus Council | Local History | MS 671 Anderson-Grice of Carnoustie
Another photo here, too small to be of any use, but interesting to see a big circular saw blade in a lathe:-
Scran - Anderson-Grice Co Ltd - Making biggest circular saw in the world
Thanks for the link to Anderson-Grice and for the photo of the Harry Lawson Leyland Comet.
Harry Lawson of Broughty Ferry are still one of Scotland's leading transport companies renowned for their always immaculate vehicles and excellent reputation as a top class haulier.
Today I went over to our local museum and had a look at the old stone planer. I am afraid I was mixing the manufacturers up; looking at the name on the machine, it reads
Nicol Esplin & Co
Leys Mill Ironworks.
Does anyone know anything about this concern? Previously this old machine had been owned by the large Motherwell civil engineering contractors Murdo Mackenzie & Co. Some years ago I asked the late Mr Mackenzie about this machine's history, but he was not sure, as they had bought it second hand many years before. It would seem the North East coast of Scotland was a hotbed of firms making stone working plant & machinery. Does Buchanman know anything of the former Rubislaw granite quarry? I believe at one time this huge man-made hole was one of the biggest in the world, and I am led to believe it has now changed hands.
I couldn't find anything on Nicol Esplin, but thanks to Grace's Guide I was able to find an early Arbroath (Leysmill) connection with stone planing machinery:-
1851 Great Exhibition: Official Catalogue: Class VI.: J. Hunter
A search on James Hunter led to a lot of information, including this:-
'James Hunter was the Manager of the freestone quarry at Leysmill, from which came high quality paving stone that was sent all over Europe. In 1833 he became interested in mechanised stone dressing and after two years work obtained his first patent for a planing machine in 1835. Twenty years later came the big saw embodying renewable tip tooling, a principle (and a common design of tool) applied also to planers, milling machines and reciprocating saws. Patents were obtained in the joint names of James Hunter and his son George, but contemporary accounts suggest that George was largely responsible. Hunter's machinery was made by Archibald Munro & Co. of the Arbroath Foundry and was considered by many to have been a major factor in the expansion of the Scottish granite industry.'
Hi, Cutting Oil Mac.
Rubislaw Quarry in Aberdeen was not the largest excavated hole in Europe although it was the deepest in the world in relation to its surface area.
In the 1970's I stood on the very edge and looked down and it was a frightening experience. The men climbed up and down ladders and walkways to get to and return from their work every day.
At these depths the rock to build the beautiful granite buildings of Aberdeen could only be extracted thanks to the invention of the "Blondin" by a man called John Fyfe from Monymusk near Aberdeen.
The "Blondin" (named after the famous tight-rope walker) consists of tight steel cables strung across the top of the quarry with a trolley which lowered and raised a bucket/grab to raise the stone from the bottom of the quarry.
The mechanism of the original "Blondin" prototype was still in place at the top of the Balmedie quarry some years ago, but whether it still exists I don't know.
John Fyfe was a very inventive and shrewd businessman and went on to become the largest granite producer in the world.
Rubislaw Quarry has now filled with water and has been sold for probably leisure/boating use. Rubislaw quarry - Wikipedia, the free encyclopedia
I have had some further info from Mr Park, one of the senior curators at my local museum, Summerlee, regarding the manufacture of the stone planer. It would seem that some time ago a descendant of the inventor of the stone planer, James Hunter, came into the museum and had a conversation with him, and he states that William Munro & Co were the first manufacturers of this machine, but that Nicol Esplin only made the planers for seven years (1906-1913). So that ties down the time scale for the manufacture of the museum's planer. It would seem that an Esplin steam engine still exists on the island of Lewis. Apparently Esplin was related to the Leysmill quarrymaster; there is a suspicion that Hunter & Esplin may also be related.
In construction the planer is a strange beast. It is "reasonably planer-like", in construction being a double column machine with a rack-driven table, but the table does not ride on slideways; it bears on rollers kept in alignment by machined slots on the underside of the table, which is a most heavy & solid construction.
The tool saddle is placed on its guides opposite from a conventional planer, being in this instance in the back face of the cross rail. There is no auto cross feed, only hand feed, and for down feed adjustment one has to raise or lower the cross rail, which has power up & down movement by three belt pulleys and open & closed belts driving the two conventional raising screws on the side columns. There is also a most strange "pneumatic buffer" cylinder attached to the cross rail. The cross rail is hinged at each side to its twin saddles on each vertical side cheek or frame; I am uncertain if this whole caboodle is for tool lift on the return stroke?
The table is not fitted with tee slots, only cored round holes, and still affixed to the table after all these years is a couple of brackets carrying a cast iron beam, having dogs for gripping lengths of stone. This beam can be turned 180 degrees between the brackets to allow at least three to be finished. A primitive dividing head?
I am not sure if this qualifies as a planer... anyone like to hazard a guess as to what is going on here? I thought that might be a badly overhung rotating cutter, but I now think it may be fixed (whatever it is). I think the pulley in the centre is a chain guide, not a belt pulley for tooling.
No indexing that I can see; perhaps all the adjustments are made to the tool, allowing it to reach half the flutes, and then the column turned 180° to finish... but why are some flutes partially finished/roughed...
This photo was taken on-site during the construction of the Auckland War Memorial Museum in the early 1920's.
There is a name on this machine. It seems to say The Anderson............oustie, Scotland. I just had a look at Mac's post, I bet it says The Anderson Grice Co. of Carnoustie!
The granite foundations for the building come from Coromandel in NZ, but the facings for the building are Portland stone from the Isle of Wight.
Quote: "Thereafter, speed of construction depended on the stonecutters, for walls could only rise as fast as stones were made ready on site. The process required a diamond-toothed 'break-down' saw, followed by grinding and smoothing on a large circular table known as 'the gramophone'. Each of the eight columns at the front of the building was composed of eight blocks of stone, about 3 ft 4 inches high and 5 ft in diameter, and the cutting and fluting of each of these took one man about one month".
(A Noble Prospect; 75 Years of The Auckland War Memorial Museum Building by Richard Wolfe).
Is there any taper on 'Doric' columns?
Maybe I need to take my calipers on my next visit and check the indexing.
Anthroposophy is a "spiritual science" founded by Rudolf Steiner. It is an attempt to investigate and describe spiritual phenomena by means of "soul-observations using scientific methodology". Anthroposophical research attempts to investigate and describe a spiritual world that, it seeks to show, resides behind the world of human senses and experience, aiming thereby to attain precision and clarity approaching that of natural science's investigations and descriptions of the physical world.
Steiner's ideas have their roots in the flowering of Germanic culture that resulted in the transcendent philosophy of Hegel, Fichte and Schelling, on the one hand, and on the other, the poetic and scientific works of Goethe, upon whom Steiner draws heavily. Steiner was also profoundly influenced by two seminal philosophers of the existential school, Franz Brentano and Wilhelm Dilthey, upon whose works both Edmund Husserl and Ortega y Gasset built. Steiner's purely philosophical early work led him through the consciousness of thinking itself into an increasingly explicit treatment of spiritual experience:
- "Anthroposophy is a path of knowledge to guide the Spirit of the human being to the Spiritual in the universe. It arises in man as a need of the heart, of the life of feeling: and it can be justified inasmuch as it can satisfy this inner need." Rudolf Steiner
The word anthroposophy is derived from the Greek roots anthropo meaning human, and sophia meaning wisdom. The term was first used by philosopher Robert Zimmermann in his book Anthroposophy. Steiner borrowed this term when he founded his own process of spiritual study, Anthroposophy. Anthroposophy should not be confused with anthropology, the empirical study of human cultures.
In his early twenties, Steiner was asked to edit Goethe's scientific writings for a major publication of that writer's complete works. In the course of this work, Steiner began publishing various works that foreshadowed his later ideas, but were still set within the philosophical and scientific framework of his age: chiefly Goethe's Conception of the World and his commentaries on Goethe's scientific essays. His first work, Die Philosophie der Freiheit (translated variously as The Philosophy of Spiritual Activity, The Philosophy of Freedom, or Intuitive Thinking as a Spiritual Path), was published when he was in his early thirties. Steiner created a concept of free will that was strongly founded upon inner experiences, especially those that occur in independent thought, without any explicit references to the nature of these experiences. His first reference to 'anthroposophy' dates from this early period.
Steiner's development and studies led him further and further into explicitly spiritual and philosophical research. These studies were chiefly interesting to others who were already oriented towards spiritual ideas; chief amongst these, at least in Steiner's earlier phase of development, was the Theosophical Society. He was asked to lead the German section of this primarily Anglo-American group. His work was distinct from that of most other members of the Society (exceptions included Bertram Kingsley in England) and both he and the then president of the Theosophical Society appear to have 'agreed to disagree' in an at first harmonious way. By 1907, however, there was a growing split between the group around Steiner, who was trying to develop a path that embraced such cornerstones of Western civilizations as Christianity and natural science, and the mainstream Theosophical Society, which was oriented toward an Eastern, and especially Indian, approach.
The Anthroposophical Society was formed in 1912 after Steiner left the Theosophical Society Adyar over differences with its leader, Annie Besant. She intended to present to the world the child Jiddu Krishnamurti as Christ reincarnated. Steiner strongly objected, and considered any equation between Krishnamurti and Christ to be nonsense (as did Krishnamurti himself once he had reached adulthood). This and the philosophical differences mentioned above led Steiner to leave the Theosophical Society. He was followed by a large number of members of the Theosophical Society's German Section, of which he had been secretary. Members of other national chapters of the Theosophical Society followed.
By this time, Steiner had reached considerable stature as a spiritual teacher. He claimed to have direct experiences of the Akashic Records (sometimes called the "Akasha Chronicle"), a spiritual chronicle of the history, pre-history and future of the world encoded in the etheric field of the earth. In a number of works — especially How to Attain Knowledge of Higher Worlds and Occult Science: An Outline —, Steiner described a path of inner development that would, he wrote, enable anyone to attain comparable spiritual experiences. Sound vision could be developed in part by practicing rigorous forms of ethical and cognitive self-discipline, concentration and meditation; in particular, a person's moral development must precede the development of spiritual faculties.
By 1912, a flowering of artistic work inspired by Steiner and the anthroposophical movement was well underway. New directions in drama, painting, sculpture, artistic movement and architecture all came together in a grand performance center, the First Goetheanum, built in the years 1913-1920. To a significant extent this was built by volunteers from many countries and much of the work was accomplished during the First World War. The international community of workers, artists and scientists that came together around the project in neutral Switzerland existed in sharp contrast to the war-torn European nations around.
After World War I, the anthroposophical movement took on new directions. Practical projects such as schools, centers for the handicapped, organic farms and medical clinics were established, all inspired by anthroposophical research.
Steiner died in 1925, but anthroposophical work has continued in all of the areas established during his lifetime as well as in many new projects established since. Seminars, artistic trainings, and institutions such as schools, banks, farms and clinics exist throughout the world, all inspired by the idea that spiritual work can be systematically and methodically pursued in harmony with outer endeavors. The Goetheanum continues to be the world center of the anthroposophical movement; national, regional and local centers have grown up in many areas, however.
Possibility of a union of science and spirit
Steiner believed in the possibility of uniting the clarity of modern scientific thinking with the awareness of a spiritual world that lives in all religious and mystical experience. Science focuses on theories which can be tested and verified. Steiner tried to create an approach to what he called the "inner life" that would use the careful, systematic methodology created by modern science, but turn its attention to the soul and spirit.
In anthroposophy, artistic expression is treated as a potentially valuable bridge between spiritual and material reality. The aim is to reach higher levels of consciousness through meditation and observation. Steiner developed and described numerous systematic exercises which he maintained would realize these goals; the most complete exposition of these is found in How to Know Higher Worlds: a modern path of initiation.
Conception of the human being
Anthroposophy suggests that human beings have inhabited earth since its creation, albeit in a spiritual form. This spiritual form then progressed through a number of stages to reach its current form, stages which included emanations of lesser beings such as animals and plants, before the first physically incarnate humans appeared on earth. Thus every living thing has evolved from humankind in its purely spiritual form.
Steiner believed that any phenomenon could be described from a variety of perspectives. His descriptions of the nature of the human being include a three-fold, four-fold, and seven-fold view (see below). He recommended viewing any question from a variety of perspectives, and explicated twelve different, equally valid world-views that could be applied in any situation, as well as to the world religions.
In the three-fold view, the human being is composed of body, soul and spirit.
- The body, comprising the physical self, the life processes and forces, and the framework of consciousness.
- The soul, which passes into incarnation in a body, and out of this again into spiritual existence.
- The spirit, which connects the lives on earth with one another and with the spiritual world; this spirit is eternal and creative, and humans are only beginning to become conscious of its activity within themselves.
In the fourfold view, which Steiner expands on very frequently and puts to practical uses in subjects such as medicine and child education, the human being includes:
- the physical body,
- the life or etheric body, the organization of forces of metamorphosis and growth for living beings
- the consciousness or astral body, and
- the ego or "I" of the human being.
The anthroposophic description of the human being as consisting of seven intimately connected parts, several of which are still in development, is similar to that found in Theosophy. Three stable organizations — physical body, life, and consciousness — the self or ego and three spiritual components — spirit consciousness, spirit life, and spirit self — make up the seven levels. This view is especially clearly articulated in his Theosophy, and An Outline of Occult Science.
Steiner changed from his early use of theosophical terms ("etheric body", "astral body") to a more descriptive terminology (life body or rhythmic organization, sentient body or organization of consciousness).
The physical body is the carrier of the human form, from which all animal forms are one-sided derivations. It has three primary functional areas, each supporting a particular psychological activity:
- the nerve/sense system, primarily centered in the nervous system, supporting thinking and perception
- the rhythmic system, including the breathing and the circulatory system, supporting feeling
- the digestive system, including the organs below the diaphragm, supporting willing
Elements of each functional system are found in areas primarily dedicated to other systems; for example, there are nerves found in the heart and lungs, and blood vessels in the sense organs. They thus interpenetrate throughout the human form, and the corresponding psychological activities also interpenetrate; all conscious perception and all directed thinking has an element of will or intention, all conscious feeling has an element of cognition, and so on.
In his mature work, Steiner identified twelve senses:
- balance, or equilibrioception
- movement, or proprioception
- pain/well-being, or nociception
- touch, or tactition
- taste, or gustation
- smell, or olfaction
- warmth, or thermoception
- sight, or vision
- hearing, or audition
- speech, or the perception of words
- thought, or the perception of concepts
- ego, or the perception of another person's "I"
Only the first nine of these are presently recognized senses of empirical science.
Life or etheric body
All that lives has, in addition to a physical body, a permeating life organization. Steiner cites as proof of this the physical identity of a dead and living organism; what is lacking in the former is the element of life itself. This life organization, or etheric body, supports a variety of functions, seven in all:
- breathing
- warming
- nourishing
- secreting
- maintaining the organism
- growing
- reproducing
The life organization is the carrier of biological rhythms and of habits. It is dependent upon its immediate environment in the earliest phase of childhood, when physical growth is most active. Approximately seven years after conception, an individual's life organization becomes independent of its environment; at this stage, it develops forces free of those directing the organism's growth and capable of being utilized for directed learning. (Previously, learning takes place imitatively, through the unconscious unity of these forces with their environment.) Directed learning that takes place before this independence thus redirects forces that would otherwise support physical growth and development.
With the independence of the life forces, the organism's life forces begin to transform the inherited physical body into a more individualized form. Steiner identifies the onset of the second dentition as an indication that the first stage of growth is complete and that this transformation has begun.
Organization of consciousness, or astral body
Animal life adds an element of sentience to the living world of plants. Steiner points to sleep life, when the physical body and life organization are the same as in waking life, yet sentience is withdrawn, as proof that sentience is not purely a function of the physical and life bodies. Our concepts (and prejudices), emotions and will (and willfulness) reside here; these are relatively fixed, in contrast with our more fluid and active soul life. There is an intimate connection between the soul and consciousness, however; the soul leaves an impression on the organization of consciousness, its thinking coming to fixed concepts, its feeling resulting in emotions and its volition forming our set will.
The young child picks up concepts, emotional patterns and intentions from its environment, since the organization of consciousness is not yet independent at this age. At around fourteen years after conception, an age often marked by puberty, this organization becomes independent; this is marked by a capacity for independent judgment and thinking, by a more volatile life of feeling and by volition directed towards more personalized goals. The newly independent organization transforms the young person's personality into a more individual form at this time.
Human existence includes an element distinct from animal consciousness, the ego. This supports self-awareness and self-reflection; Steiner points to the lack of a true biography, more particularly of autobiography in animal existence as an indication that the ego is particular to humans. The capacity for self-direction and full responsibility are connected to the ego, which only becomes independent around twenty-one years after conception. This event is generally recognized by modern societies granting adult responsibilities and rights at about this age.
Place in Western Philosophy
The epistemic basis for Anthroposophy is contained in the seminal work, The Philosophy of Freedom, as well as in Steiner's doctoral thesis, Truth and Science. These and several other early books by Steiner anticipated 20th century continental philosophy's gradual overcoming of Cartesian idealism and of Kantian subjectivism by linking on to Goethe's conception of the human being as a natural-supernatural entity: natural in that humanity is a product of nature, supernatural in that through our conceptual powers we extend nature's realm, allowing it to achieve a reflective capacity in us as philosophy, art and science.
Like Edmund Husserl and Ortega y Gasset, Steiner was profoundly influenced by the works of Franz Brentano, whose lectures he had heard as a student at the Technical University of Vienna, and read Wilhelm Dilthey in depth. Through Steiner's early epistemological and philosophical works, he became one of the first European philosophers to overcome the subject-object split that Descartes, classical physics, and various complex historical forces had impressed upon Western thought for several centuries. His philosophical work was taken up in the middle of the twentieth century by Owen Barfield, a philosopher of language from Oxford University and through him influenced the Inklings, a group that included such writers as J.R.R. Tolkien and C.S. Lewis. It was also taken up by the philosopher (and prolific author) Herbert Witzenmann. Steiner's philosophy has not found widespread recognition by academic philosophers outside of the anthroposophical movement, however; one exception is Richard Tarnas, author of The Passion of the Western Mind.
Steiner's philosophy begins with the division between our sensory experiences of the outer world and our soul experiences of an inner world consisting of thoughts, feelings and intentions (will impulses). He focused on how our thinking in particular complements what we experience through the senses; one facet of the world is its outer appearance, a second is its inner structure. Humans access the two separately but they are originally united in the objective world, and we have the capacity to reunite them through creating a relationship between our percepts and our concepts, between what we experience outwardly and inwardly. Steiner suggested that we only understand some part of the outer world when we find this connection between our sensory impressions of it and our concepts about it.
Thus, in his view, though all human experience begins being conditioned by the subject-object divide, through our own activity we can progressively overcome this divide. This lies in our free will, however; we are given the divide but not its overcoming.
Steiner also examines the step from thinking that is determined by outer impressions to what he calls sense-free thinking, characterizing thoughts without sensory content, such as mathematical or logical thoughts, as free deeds. He thus located the origin of the free will in our thinking, and in particular in sense-free thinking. Especially in his later work, Steiner points to the objective truths attainable through mathematics and logic as evidence of an objective non-sensory world - a world of spirit/mind that is not determined by the subjective nature of our inner experiences.
“A person seeking inner development must first of all make the attempt to give up certain formerly held inclinations. Then, new inclinations must be acquired by constantly holding the thought of such inclinations, virtues or characteristics in one's mind. They must be so incorporated into one's being that a person becomes enabled to alter his soul by his own will-power. This must be tried as objectively as a chemical might be tested in an experiment. A person who has never endeavored to change his soul, who has never made the initial decision to develop the qualities of endurance, steadfastness and calm logical thinking, or a person who has made such decisions but has given up because he did not succeed in a week, a month, a year or a decade, will never conclude anything inwardly about these truths.”
— Rudolf Steiner, "On the Inner Life"
Paths of spiritual development
The goals of spiritual development are two-fold. One branch focuses on the "inner activity" through which thoughts, feelings and intentions arise. Steiner suggested that for our modern consciousness it is most productive, and leaves the esoteric student most free, to start by focusing on thinking, which we today experience with more conscious clarity than feelings or will – this is the path of spiritual science – but that it is in principle possible to achieve an esoteric training through a focus on feeling (mysticism) or the will life (ritual), as well. The latter cases may involve a sacrifice of clarity (on the mystic path) or freedom (on the path of ritual), however.
The second path of esoteric development consists of revealing the normally hidden process by which the world's objective nature arises in our subjective perceptions of it. This is the path of phenomenological science, also known as Goethean science. According to Steiner, this path leads to the perception of the spiritual beings that underlie world evolution, beginning with the elemental beings of nature.
The esoteric path of spiritual science
The anthroposophic path of esoteric training can be articulated into three steps, which do not necessarily follow strictly sequentially in any single individual's spiritual progress. The first step in this esoteric training is to recollect and follow how thought processes proceed in a particular situation, contemplating their sequential progression. Usually we attend to the thoughts themselves, the content that arises through thinking, and ignore the process by which they arise. By attending to the latter, we are examining an aspect of our experience that is normally hidden to us by the content itself. Philosophy (especially epistemology), logic, and aspects of mathematics contemplate the structure and origin of our experience in this way, and thus belong to this first stage of esoteric training. This stage can be called the philosophical stage.
A second stage is reached when we no longer, as is usual in philosophy or logic, reflect on past thinking processes, but rather focus our attention on our immediate thinking, on the thinking taking place in the moment of my attention. The unity of contemplating or experiencing subject and the object of contemplation/experienced content is complete here; my attention now focuses on itself, the content of my thinking is my thinking, is itself. This corresponds to the meditative state, known in some spiritual traditions as samadhi, yoga or simply union. My inner activity is now simultaneously subjective – I experience myself bringing it forth – and objective – I experience it given to me as the content of my experience.
A third stage of esoteric training transforms the direction of the will, which is normally directed by the ego, i.e. from within, to an intended result in the outer world. When I seek to accomplish, not a transformation of outer conditions, but a transformation of my inner nature and self, I experience my inner condition – first of all, perhaps, my momentary thoughts, feelings and intentions, but later, my whole character and nature – as subject to my own conscious control. My soul life, which seemed to arise "naturally" and without my conscious participation, is progressively the result of my own conscious activity; I become the creator of my own inner life. Just as advances in technology allow us to progressively transform, more and more completely, the outer, naturally given world, that at an early stage of culture seems to be a factor beyond all human control, so do developments in our inner, moral capacities allow us to progressively transform our inner being to an extent – we discover on this path – only limited by our progress in developing these capacities.
Esoteric training thus consists of bringing this element of our experience – our character and inner nature – that usually plays into our experience without our conscious awareness of its contribution – into conscious focus and then control. The result of this path, according to Steiner, is the capacity to perceive the spiritual beings that underlie and generate inner experience, including those that direct our evolution from lifetime to lifetime and that influence our destiny. Steiner described this as a capacity to envision karma.
Steiner described numerous exercises for spiritual development, and other anthroposophists have added many others. A central principle is that "for every step in spiritual perception, three steps are to be taken in moral development". Moral development reveals the extent to which one has achieved control over one's inner life and exercises this in a direction in harmony with others' spiritual life. It shows the real progress in spiritual development, the fruits of which are given in spiritual perception. It also guarantees the capacity to distinguish between false perceptions or illusions (which are possible in perceptions of both the outer world and the inner world) and true perceptions, or, better said, to distinguish in any perception between the influence of subjective elements (i.e. viewpoint) and the objective reality the perception points to.
In order for a spiritual training to bear healthy fruits, Steiner suggested, a person would have to attend to the following:
- Striving to live in a health-giving manner – to develop a healthy body and soul.
- Feeling at one with all of life; to recognize oneself in everything, and everything in oneself; not to judge others without standing in their shoes.
- Recognizing that one's thoughts and feelings have as significant influence as one's deeds, and that work on one's inner life is as important as work on one's outer life.
- Recognizing that the true essence of a human being does not lie in the person's outer appearance, but rather in the inner nature, in the soul and spiritual existence of this person.
- Finding the genuine balance between having an open heart for the outer world's requirements and having inner strength and unshakeable endurance.
- The ability to be true to a decision once made, even in the face of daunting adversity, as long as the decision is still valid (until one comes to the conclusion that it was or is made in error).
- Developing thankfulness for everything that meets us, and that universal love that allows the world to reveal itself fully to us.
- Ceaselessly to live as these guidelines indicate.
Steiner suggested that a special group of general exercises should accompany all spiritual training as their influence on inner development would be beneficial whatever the spiritual path. These exercises are:
- Practicing ever better control of thinking. For example: for a period of time – normally a few minutes, not longer – contemplate any object and concentrate one's thoughts exclusively on this object. (A crystal or a paper clip might do.)
- Development of initiative. For example, choose any free deed, i.e. one that nothing is influencing you to do, and choose a regular time of day or day of the week to practice this. (Watering a plant daily could be a freely chosen deed.)
- Equanimity. Quiet reactive emotions. Discover how to express one's true feelings sensitively.
- Positivity. See the positive aspects of everything, and make the best out of every situation.
- Open-mindedness. Be open to new experiences, never letting expectations based upon the past close your mind to the lessons of the moment.
- Harmony. Find a harmonious, balanced relationship between the above five qualities, and be able to move dynamically between them.
Some of the many exercises developed in anthroposophy include:
- Review of the day. Each evening, go backwards through the day recalling its events, its sequential unfolding (experienced here reversed in time), the people one met, etc.
- Experiencing the year's unfolding.
- Drawing the same plant or tree or landscape over the course of a year.
- Meditating the sequence of 52 mantric verses, the Calendar of the Soul, that Steiner wrote to deepen one's experience of the course of the seasons and the year and to bring the inner life of the soul into dialogue with nature.
- Building up an imagination independent of all outer experience, and then dissolving this imagination. The creative activity of imagination itself — the creative activity of the human spirit — can thus be experienced directly, stripped of the particular content with which it was occupied.
Relationship to Natural Science
Anthroposophy explicitly seeks to extend natural science's mandate, which is to study the world as external observers, in order to explore human experience from within. Steiner postulated that, as we have learned over centuries and even millennia to treat our experience of the outer world in a clear and systematic way, we can also learn to do this for our experience of our inner life.
Steiner and many other anthroposophists have tried to show that the genuine and even scientific study of man need not restrict itself to externally observable phenomena. If an equally objective description of human soul and spiritual life can be achieved, he believed, these too can be elevated to a science. Natural science thus sets the example and provides a methodological goal for anthroposophy; the potential content of observation is however extended to experiences beyond the purely sensory.
The discipline of science assumes that scientific reasoning is possible, i.e. in anthroposophical terms, that our soul experience of thinking can be as objective and verifiable as the sensory phenomena themselves. (See also Anthroposophy#Scientific basis)
Relationship to religion
Steiner was early in seeing the challenges of a multicultural society. He articulated the need for a spirituality that could respect and unite all religions and cultures. His line of thought can be summarized as follows:
Many people, especially those of Eastern cultures, see the need for a spiritual basis for a culture. Others, especially in the West, live in a materialistic framework that has achieved astonishing results, especially through the achievements of modern science, but has abandoned its spiritual roots. Steiner suggested that, without a reconciliation of these two, a clash of cultures would be inevitable. He suggested that the East (for Steiner, characteristically spiritually centered people and peoples) would only respect the West (characteristically people and peoples who focus on external reality and achievements) when a new spirituality arose in the West, a spirituality that united the achievements of both cultures.
The Christ being as the center of earthly evolution
Steiner's writing, though appreciative of all religions and cultural developments, emphasizes recent Western (rather than older Hindu or Buddhist) esoteric thought as having evolved to meet contemporary needs. He describes Christ and his mission on earth as having a particularly important place in human evolution.
Steiner emphasized, however, that:
- Christianity has evolved out of previous religions,
- The being that manifests in Christianity also manifests in all faiths and religions,
- Each religion is valid and true for the time and cultural context in which it was born,
- The historical forms of Christianity need to be transformed considerably to meet the on-going evolution of humanity.
It is the being that unifies all religions, and not a particular religious faith, that Steiner saw as the central force in human evolution. This "Christ Being" is for Steiner not only the Redeemer of the Fall from Paradise, but also the unique pivot and meaning of earth's "evolutionary" processes and of human history, manifesting in all religions and cultures.
Steiner's Christianity differs from that of the Gnostics who viewed the Christ phenomenon through the knowledge gained through earlier gnosticism, whereas for Steiner, Christ's incarnation was a historical reality and a pivotal and unique point in human history. In a lecture explaining the relationship between Anthroposophy and Christianity, Steiner explained: "Spiritual science does not want to usurp the place of Christianity; on the contrary it would like to be instrumental in making Christianity understood. Thus it becomes clear to us through spiritual science that the being whom we call Christ is to be recognized as the center of life on earth, that the Christian religion is the ultimate religion for the earth's whole future. Spiritual science shows us particularly that the pre-Christian religions outgrow their one-sidedness and come together in the Christian faith. It is not the desire of spiritual science to set something else in the place of Christianity; rather it wants to contribute to a deeper, more heartfelt understanding of Christianity."
Divergence from conventional Christian thought
Steiner's views of Christianity diverge from conventional Christian thought in key places, and include gnostic elements. Only a very simplified account of those views can be given here, because though they only amount to about 4% of his total works, that 4% still amounts to about 15 volumes of books and lectures — and many of the other 335 or more volumes contain additional scattered comments on Christianity. One central point of divergency is Steiner's views on reincarnation and karma; these are explicated in the article on Anthroposophy (see sub-section titled "Anthroposophy in Brief/Reincarnation and Karma").
Steiner also claimed that there were two different Jesus children involved in the Incarnation of the Christ: one child descended from Solomon, as described in the Gospel of Matthew, the other child from Nathan, as described in the Gospel of Luke. (The genealogies given in the two gospels diverge some thirty generations before Jesus' birth, and 'Jesus' was a common name in biblical times.) In Steiner's descriptions, the divine "Christ Spirit", the Son-God of the Trinity, incarnated in the Nathan Jesus at the moment of the baptism by John; up until the moment of the baptism by John in the Jordan, the Nathan Jesus was a very great holy man, but not yet the divine Son of God.
His view of the second coming of Christ is also unusual; he suggested that this would not be a physical reappearance, but that the Christ being would become manifest in non-physical form, in the "etheric realm" — i.e. visible to spiritual vision and apparent in community life — for increasing numbers of people beginning around the year 1933. He emphasized that the future would require humanity to recognize this Spirit of Love in all its genuine forms, regardless of how this is named. He also warned that the traditional name, "Christ", might be used yet the true essence of this being of love ignored.
The Christian Community
Towards the end of Steiner's life, a group of theology students (Lutheran as well as Catholic) approached Steiner for help in reviving Christianity. They asked a notable Lutheran pastor, Friedrich Rittelmeyer, who was already working with Steiner's insights, to join their efforts. Out of their cooperative endeavor, the Movement for Religious Renewal, now generally known as The Christian Community, was born. Steiner emphasized that this help was given independently of his anthroposophical work, as he saw anthroposophy as independent of any particular religion or religious denomination.
Practical work arising out of anthroposophy
- Further information: Rudolf Steiner's Practical initiatives
Practical results of Anthroposophy include work in many fields. These include:
Out of the anthroposophical movement have come nearly a thousand schools world-wide. These are often called Waldorf Schools, after the first such school, founded in 1919; they are also sometimes called Steiner Schools. Some have been supported by the United Nations. The schools receive full or partial governmental funding in some European nations and in parts of the United States (as Waldorf public or charter schools). They have been successful in an unusual range of circumstances and cultures: in the impoverished barrios of São Paulo and the wealthy suburbs of New York City, in India, Egypt, Australia, Holland and Mexico. Usually supported by a vibrant parent community, they are one of the most visible achievements of the anthroposophical movement. In addition, an increasing number of teachers are using 'Waldorf' principles within other school settings, including within state-run schools.
- Main article: Biodynamic agriculture
Biodynamic agriculture began in the 1920s. Numerous bio-dynamic farms now exist in a great number of countries. Steiner must be counted as one of the two original founders of the modern organic farming movement (the other was Sir Albert Howard). Steiner's Agriculture Course was the first published work on organic agriculture, appearing 16 years before Howard's An Agricultural Testament, and significant parts of the present-day organic movement, especially in Europe, can be traced back to people wholly or partially inspired by the biodynamic approach. Bio-dynamic agriculture emphasizes activating the life of the soil and treating each farm as a living organism that includes human beings, animals, plants and the soil.
Steiner gave several series of lectures to physicians, and out of this grew a medical movement that now includes hundreds of M.D.s, chiefly in Europe and North America, and that has its own clinics, hospitals and medical universities. Steiner wanted Anthroposophical medicine to be an extension of, not an alternative to, conventional medical approaches, and from its beginning a conventional medical training has been required to become an anthroposophical doctor. Anthroposophical medicine uses many kinds of remedies and therapies, including many developed on the basis of a revised homeopathy. Several medium-sized pharmaceutical firms (notably Weleda and Wala) specialize in anthroposophical remedies.
Other fields of work include an original cancer therapy based on mistletoe extracts developed by anthroposophical researchers. Though an accepted and widely used medical treatment in Germany and the European Union, this remains controversial in the United States.
Centres for helping the mentally handicapped (including Camphill Villages)
Early in the twentieth century, when proper care for the handicapped was sadly neglected in many countries, anthroposophical homes and communities were founded to give a worthy life-style to the needy. The first was the Sonnenhof in Switzerland, founded by Ita Wegman; slightly later, the Camphill Movement was founded by Karl König in Scotland. The latter in particular has spread widely, and there are now well over a hundred Camphill communities and other anthroposophical homes for both children and adults in more than twenty-two countries around the world.
Organizational development and biography work
Bernard Lievegoed founded a new study of individual and institutional development; this is represented by the NPI Institute for Organisational Development in Holland and sister organizations in many other countries. Clients of these institutions range from some of the world's largest industrial firms to ordinary people trying to understand their own lives. One of the more interesting areas of application has been in transforming impoverished people's lives by bringing them to recognize and begin to realize their own biographical goals. Social work with prisoners shares these goals and has had the effect of bringing new purpose into many lives.
Anthroposophical banks were among the first to emphasize socially-responsible and community-based banking. Today around the world there are a number of innovative banks, companies, charitable institutions, and schools for developing new cooperative forms of business, all working partly out of Steiner’s social ideas. One example is The Rudolf Steiner Foundation, incorporated in 1984, and as of 2004 with estimated assets of $70 million. RSF provides "charitable innovative financial services". According to the independent organizations Co-op America and the Social Investment Forum Foundation, RSF is "one of the top 10 best organizations exemplifying the building of economic opportunity and hope for individuals through community investing." The first bank founded out of Steiner's ideas was the Gemeinschaftsbank für Leihen und Schenken in Bochum, Germany; it was started in 1974.
Steiner himself designed around thirteen buildings, many of them significant works in a unique, organic-expressionistic style. Foremost among these are his two designs for the Goetheanum. Thousands of further buildings have been built by a later generation of anthroposophic architects. Well-known architects who have been strongly influenced by the anthroposophic style include Imre Makovecz (HU), Hans Scharoun and Joachim Eble (DE), Erik Asmussen (SW), Kenji Imai (Japan), Thomas Rau, Anton Alberts and Max van Huut (NL), Christopher Day and Camphill Architects (UK), Thompson and Rose (USA), Denis Bowman (CA), and Gregory Burgess (Australia).
One of the most famous contemporary buildings by an anthroposophical architect is the ING Bank in Amsterdam, which has been given many awards for its ecological design and approach to a self-sustaining ecology as an autonomous building.
In the arts, Steiner's new art of eurythmy gained early renown, winning a prize at a pre-World War II World Exposition in Paris. Eurythmy seeks to renew the spiritual foundations of dance, transforming speech and music into visible movement. There are now active stage groups and training centers, mostly of modest proportions, in many countries.
Speech and Drama
There are also movements to renew speech and drama. The former go back to the work of Marie Steiner-von Sivers; among the better known of the latter is the approach founded by Michael Chekhov, the nephew of the playwright Anton Chekhov.
Other areas of anthroposophic work include:
- John Wilkes' fountain-like Flowforms. These sculptural forms guide water into rhythmic movement, and are used both decoratively and for water treatment in small to medium-scale applications.
- Astrosophy as opposed to Astrology,
- Phenomenological approaches to science,
- Painting and sculpture.
Social Goals of Anthroposophy
For a period after World War I, Steiner was extremely active and well-known in Germany in part because in many places he gave lectures on social questions. A petition expressing his basic social ideas (signed by Herman Hesse, among others) was very widely circulated. His main book on social questions, Die Kernpunkte der Sozialen Frage (available in English today as Toward Social Renewal) sold tens of thousands of copies.
Steiner's Outlook on Social History
In Steiner's various writings and lectures he held that there were three main spheres of power comprising human society: the cultural, the economic and the political. In ancient times, those who had political power were also generally those with the greatest cultural/religious power and the greatest economic power. Culture, State and Economy were fused (for example in ancient Egypt). With the emergence of classical Greece and Rome, the three spheres began to become more autonomous. This autonomy went on increasing over the centuries, and with the slow rise of egalitarianism and individualism, the failure adequately to separate economics, politics and culture was felt increasingly as a source of injustice.
Anthroposophy has its own concept of history: according to Steiner our present time falls into the post-Atlantean period, since in his view the disaster that he says hit Atlantis in 7227 BC was a significant turning point in the history of man. This post-Atlantean period is divided by him into seven epochs, the current one being the European-American Epoch, which Steiner said would last until about the year 3573.
- Main article: Social Threefolding
There are three kinds of social separation Steiner wanted strengthened. This is known as Social Threefolding:
- Increased separation between the State and cultural life
- Increased separation between the economy and cultural life
- Increased separation between the State and the economy (stakeholder economics)
Anthroposophy in Brief
According to Steiner, a real spiritual world exists out of which the material one gradually condensed, and evolved. The spiritual world, Steiner held, can in the right circumstances be researched through direct experience, by persons practicing rigorous forms of ethical and cognitive self-discipline. Steiner described many exercises he said were suited to strengthening such self-discipline. Details about the spiritual world, he said, could on such a basis be discovered and reported, not infallibly, but with approximate accuracy.
Steiner regarded his research reports as being important aids to others seeking to enter into spiritual experience. He suggested that a combination of spiritual exercises (for example, concentrating on an object such as a seed), moral development (control of thought, feelings and will combined with openness, tolerance and flexibility) and familiarity with other spiritual researchers' results would best further an individual's spiritual development. He consistently emphasized that any inner, spiritual practice should be undertaken in such a way as not to interfere with one's responsibilities in outer life.
Steiner often advised people to avoid turning his work into a doctrine. He emphasized that any researcher, in any field, was able to make mistakes, and that both science and the world continued to evolve, making all results outdated after a certain time.
One of the central exercises of anthroposophy is to focus on a given content (this can be an outer object or a spiritual imagination) for a given time, and then to consciously eliminate the content from one's consciousness, allowing the process of attention to continue. We can become aware, thereby, of the activity of attention itself. A further step is then to dismiss this activity from one's consciousness. Behind the activity, Steiner suggested, would be found another level of spiritual reality. Steiner thus described a gradual experiential path from ordinary conceptual thinking into forms of thinking perceptive of living spiritual beings and mobile realities in the spiritual world.
Body, Soul and Spirit
In his works Steiner described the human being as consisting of an eternal spirit, an evolving soul and a temporal body. Steiner also offered a detailed analysis of each of these three realms, however:
Spirit: though the spirit is eternal in anthroposophical teaching, it is becoming progressively more individualized and consciously experienced. In earthly life, the individuality or ego awakens to self-consciousness through its experience of its reflection in the deeds and suffering of a physical body. This is necessary for a human individuality to retain its self-awareness when not incarnated in the body. Thus, humanity is developing through experiences on earth, in bodily incarnation, to attain a spiritual life independent of bodily existence. This happens for all humanity as part of its natural evolution; spiritual exercises are necessary for those who seek to be pioneers in this respect, to go beyond the natural spiritual development of a given age.
Soul: Steiner believed that the human soul passes between stages of existence, incarnating into an earthly body, living a life, leaving the body behind and entering into the spiritual worlds before returning to be born again into a new life on earth. As each human soul evolves through its experience, the earth itself and civilization as a whole also evolves; thus, new types of experience are available at each successive incarnation. The soul passes through stages of development; these larger stages are also recapitulated within every lifetime. Initially, the soul lives through sense experience; the outer world forms and determines the inner life. Gradually, the human being seeks to order, understand and express his or her experience; inner life thus becomes independent of immediate sense-experience. Finally, the soul can become self-reflective, exploring the nature and laws of its own existence.
Body: Steiner uses the term body to describe the aspects of human existence that endure for a single lifetime. The physical body is the most obvious of these. Permeating our physical existence are forces of life, growth and metamorphosis that maintain and develop the physical body; as it is an aspect of a lifetime that falls away after death, Steiner called this the life or etheric body. We also have a framework of consciousness that includes our set feelings, concepts and intentions; Steiner called this the body of consciousness or sentient body. All of these elements are particular to an individual lifetime; they contribute to soul and spiritual development but themselves fall away at the death that terminates a particular life on earth.
Reincarnation and Karma
In his books Steiner described human existence as a cycle of birth, life, death, spiritual existence and a return to earth. This cycle includes evolution and development, however; it is not an eternal sameness. The individuality born into any earthly life bears with her both abilities and wisdom attained through previous incarnations, and obligations that arise through previous deeds. Much of human life is determined by these factors, but there are also new abilities attained, wisdom achieved and deeds accomplished that are not determined, but free achievements. We may suffer due to something in a past life; we may also suffer to gain the strength for something in a future life; — our sufferings and achievements are not necessarily predetermined.
Steiner described human existence between death and a new birth in detail as, first, a series of stages of laying aside the physical form, life experiences, thoughts, relationships, and cultural context of the last life; then the entry into spiritual experience proper; the decision to return to earth; the passage back, during which the cultural context, relationships, ideas, life experiences and physical form are chosen; and finally the re-entry into physical existence through conception and birth.
Reception of Anthroposophy
Anthroposophy claims many prominent supporters outside of the movement. Among these have been many writers, artists and musicians; these include Nobel Laureate Saul Bellow, Andrej Belyj, Josef Beuys, Wassily Kandinsky, Swedish Nobel Laureate Selma Lagerlöf, Nobel Laureate Albert Schweitzer, Andrei Tarkovsky and Bruno Walter.
Though Rudolf Steiner studied natural science at the Vienna Technical University at the undergraduate level, his doctorate was in philosophy and very little of his work is directly concerned with the traditional realm of science, the natural world. His primary interest was in applying the methodology of science to realms of inner experience and the spiritual worlds:
- "[Anthroposophy's] methodology is to employ a scientific way of thinking, but to apply this methodology, which normally excludes our inner experience from consideration, instead to the human being proper."
The application of scientific methodology to other areas has a rich tradition in Germanic philosophy and culture. Steiner did not call his work natural science (in German what English speakers normally refer to as science would be called Naturwissenschaft, natural science), but Geisteswissenschaft, often translated as spiritual science. In the German language, Geisteswissenschaft is a common term generally referring to the humanities or "human sciences" — but which literally means the science of the objective world-spirit — and includes fields such as philosophy, history, and literature; in Steiner's day, psychology and sociology were also included. Steiner thus identified his own work with fields such as history and philosophy rather than with the natural sciences.
A serious question about his work — indeed about all the Geisteswissenschaften — is whether scientific methodology is able to be applied to these realms, i.e. whether such explorations are truly reproducible and intersubjective. If they are not, they are not scientifically verifiable in the sense of modern natural science. Steiner saw that the results of his spiritual vision were difficult or impossible for others to reproduce through his methodology. He suggested "open-mindedly" exploring and testing the results of his research as an alternative; he also urged others to follow a spiritual training that would allow them to directly apply the methods he used. His claim to have created a spiritual science, however, depends upon the reproducibility of his research methods themselves; this has not been achieved to any significant degree.
Many results of Steiner's research, however, have been investigated and supported by scientists working to further and extend scientific observation in directions Steiner pointed out. A few examples: Genetics and the Manipulation of Life, The Forgotten Factor of Context, by biologist Craig Holdrege; The Wholeness of Nature, Goethe's Way toward A Science of Conscious Participation in Nature, by physicist Henri Bortoft; Developmental Dynamics in Humans and Other Primates, by theoretical chemist Jos Verhulst.
There have been polemical criticisms of anthroposophy's claim to reproducibility and intersubjectivity (thus to a scientific foundation) by Sven Ove Hansson, main founder of the Swedish branch of the Sceptics organisation CSICOP, later professor at the Philosophy Unit of the Swedish Royal Institute of Technology. Hansson's way of using quotes from different works by Rudolf Steiner has been criticized for seriously misrepresenting anthroposophy as it developed separately from theosophy, and for distorting the argumentation in the contexts from which the quotes from Steiner's works are taken.
There have been criticisms that any spiritual movement, anthroposophy in particular, is necessarily religious in nature. In a 2005 court case brought in California by the anti-Waldorf and anti-anthroposophy activist group PLANS, the judge ruled that it had presented no legally admissible evidence that anthroposophy is a religion; this case is under appeal.
Related to this are criticisms that anthroposophy is a sect or cult. In 2000, a court case was brought in France against a government minister for making this claim publicly; the court decided that the minister's comments were defamatory. In 1999 and 2006, Belgian courts decided for the Anthroposophical Society in a case where anthroposophy had been included in a list of dangerous sects; the group that had made the list was fined.
Accusations have been made of racial bias in Steiner's work, though not in anthroposophy in general. Steiner's views on race and ethnicity are examined and critiqued in Rudolf Steiner's views on race and ethnicity.
- ↑ Steiner, Rudolf, Philosophy of Freedom, 1893 ; Steiner said that "my Philosophy of Freedom is the epistemological basis for the anthroposophically oriented spiritual science that I advocate." (Riddles of the Soul, GA21, p. 62)
- ↑ Rudolf Steiner - A Vision for the Millennium - p.10
- ↑ Steiner, Rudolf, Anthroposophical Leading Thoughts. London: Rudolf Steiner Press, (1924) 1998.
- ↑ Rudolf Steiner - The Anthroposophic Movement (lecture 2, Dornach, June 11, 1923), p.33
- ↑ Ahern, G. (1984): Sun at Midnight : the Rudolf Steiner movement and the Western esoteric tradition
- ↑ Rudolf Steiner, Human Thought and Cosmic Thought, lecture 3
- ↑ GA 130, p. 156
- ↑ Verhulst, Jos, Developmental Dynamics in Humans and Other Primates, ISBN 0-9322776-28-0
- ↑ Steiner, Anthroposophy: An Introduction
- ↑ (The German word Geist means both spirit and mind.)
- ↑ Stein, W. J., Die moderne naturwissenschaftliche Vorstellungsart und die Weltanschauung Goethes, wie sie Rudolf Steiner vertritt, reprinted in Meyer, Thomas, W.J. Stein / Rudolf Steiner, pp. 267-75.
- ↑ Steiner, Knowledge of Higher Worlds
- ↑ Steiner, How to Attain Knowledge of the Higher Worlds, "Requirements for esoteric training"
- ↑ Steiner, An Outline of Esoteric Science, "Knowledge of Higher Worlds"
- ↑ Steiner, Rudolf, The East in the Light of the West.
- ↑ This was a common theme for Steiner; see especially:
- Rudolf Steiner, Christus zur Zeit des Mysteriums von Golgotha und Christus im zwanzigsten Jahrhundert, as well as
- Rudolf Steiner, GA130 and GA342, all Rudolf Steiner Verlag, various dates.
- ↑ (Steiner was not referring to the hypothetical ether of 19th century physicists, and on several occasions carefully distinguished his own use of the term from their use of it.)
- ↑ See:
- Steiner, Rudolf. Encyclopædia Britannica Online. <http://search.eb.com/eb/article-9069553>.
- Shepherd, A. P., A Scientist of the Invisible. and
- Barnes, Henry, A Life for the Spirit : Rudolf Steiner in the Crosscurrents of Our Time
- ↑ Publications on organic agriculture
- ↑ History of Organic Agriculture
- ↑ Mistletoe studies; for background see the National Cancer Institute study linked below.
- ↑ Clay, Bob, Shaping the Flame, Association of Camphill Communities, 2000., List of Camphill communities
- ↑ Sharp, Dennis, Rudolf Steiner and the Way to a New Style in Architecture, Architectural Association Journal, June 1963
- ↑ Raab and Klingborg, Waldorfschule baut, Verlag Freies Geistesleben, 2002.
- ↑ Raab, Klingborg and Fant, Eloquent Concrete, London: 1979.
- Pearson, David, New Organic Architecture. University of California Press, 2001.
- ↑ For an overview of Steiner's general approach to reincarnation, see his Theosophy: An Introduction to the Spiritual Processes in Human Life and in the Cosmos, Steiner Press, 1904/1994. ISBN 0-88010-373-6. For more detail, see:
- ↑ W. J. Stein, Die moderne naturwissenschaftliche Vorstellungsart und die Weltanschauung Goethes, wie sie Rudolf Steiner vertritt, 1921/1985. P. 256-7.
- ↑ de:Geisteswissenschaft
- ↑ Historically, the German term Geisteswissenschaft comes from a translation of the English moral sciences (John Stuart Mill). (See de:Geisteswissenschaft.) Dilthey and Husserl also defended the traditional Geisteswissenschaften in this sense: rational and thus scientific, yet not based upon empirical studies of the physical world. Dilthey in particular rejected the application of the empiricist criteria of natural science to critical studies of society and the human mind (cf. Dilthey's Einleitungen in die Geisteswissenschaften). Steiner refers explicitly to Dilthey's parallel ideas in Riddles of the Soul, p. 149ff in the German original text.
- ↑ Sven Ove Hansson Is Anthroposophy Science? Conceptus XXV (1991), No. 64, pp. 37-49.
- ↑ Sune Nordwall Is Anthroposophy Science? - Some comments
- ↑ Guyard Guilty of Defamation. Cesnur. URL accessed on 2006-11-13.
- ↑ Das Goetheanum, 2006/18, p. 20
- Ahern, G. (1984): Sun at Midnight : the Rudolf Steiner movement and the Western esoteric tradition. Wellingborough : Aquarian Press
- Archiati, Pietro, The Great Religions: Pathways to our Innermost Being, ISBN 1-902636-01-5
- Archiati, Pietro, Reincarnation in Modern Life: Toward a New Christian Awareness. Temple Lodge. ISBN 0-904693-88-0
- Barnes, Henry, A Life for the Spirit : Rudolf Steiner in the Crosscurrents of Our Time, Steiner Books, 1997.
- Davy, John, Hope, Evolution and Change, Hawthorn Press. ISBN 0-9507062-7-2
- Edelglass, S. et al., The Marriage of Sense and Thought, Lindisfarne Books. ISBN 0-940262-82-7
- Forward, William and Blaxland-de Lange, Simon (eds.), Trumpet to the Morn (Golden Blade 2001), ISBN 0-9531600-3-3
- Forward, William and Blaxland-de Lange, Simon (eds.), Working with Destiny II (Golden Blade 1998), ISBN 0-9531600-0-9
- Gleich, Sigismund, The Sources of Inspiration of Anthroposophy, ISBN 0-904693-87-2
- Goebel, Wolfgang and Glöckler, Michaela, A Guide to Child Health. Floris Books. ISBN 0-86315-390-9
- Gulbekian, Sevak (ed.), The Future is Now: Anthroposophy at the New Millennium, ISBN 1-902636-09-0
- Hauschka, Rudolf, At the Dawn of a New Age, ISBN 0-919924-25-5
- Hindes, James H. (1995) Renewing Christianity. Edinburgh : Floris Books
- Klocek, Dennis, The Seer's Handbook: A Guide to Higher Perception, Steinerbooks 2006. ISBN 0-88010-548-8
- König, Karl, The Human Soul, ISBN 0-86315-042-X
- Kühlewind, Georg, The Logos-Structure of the World: Language as a Model of Reality, ISBN 0-940262-48-7
- Lievegoed, Bernard, The Battle for the Soul: The Working Together of Three Great Leaders of Humanity, ISBN 1-869890-64-7
- Lievegoed, Bernard, Man on the Threshold. Hawthorn Press. ISBN 0-9507062-6-4
- McDermott, Robert A., The Essential Steiner: Basic Writings of Rudolf Steiner, Harper, 1984.
- Murphy, Christine (ed.), Iscador: Mistletoe and Cancer Therapy. Lantern Books, 2005. ISBN 1-930051-76-X
- Nesfield-Cookson, B., Michael and the Two-Horned Beast: The Challenge of Evil Today in the Light of Rudolf Steiner's Science of the Spirit, ISBN 0-904693-98-8
- Nesfield-Cookson, B., Rudolf Steiner's Vision of Love : spiritual science and the logic of the heart. Bristol : Rudolf Steiner Press
- Paddock, F. and M. Spiegler, Ed.(2003) Judaism and Anthroposophy. Great Barrington, MA : SteinerBooks
- Pietzner, Carlo, Transforming Earth, Transforming Self, ISBN 0-88010-428-7
- Prokofieff, Sergei, The East in the Light of the West, ISBN 0-904693-57-0
- Prokofieff, Sergei, The Occult Significance of Forgiveness. Temple Lodge Publishing. ISBN 0-904693-71-6.
- Schaefer, Christopher and Voors, Tyno, Vision in Action. Lindisfarne Books. ISBN 0-940262-74-6
- Schwenk, Sensitive Chaos. Rudolf Steiner Press. ISBN 1-85584-055-3
- Shepherd, A. P. 1885-1968 :The Battle for The Spirit : The Church and Rudolf Steiner; an anthology compiled by and with an introduction by David Clement. Stourbridge : Anastasi
- Shepherd, A. P., 1885-1968 : A Scientist of the Invisible : An introduction to the life and work of Rudolf Steiner. Edinburgh : Floris, 1983.
- Soesman, Albert (1990). The Twelve Senses : An Introduction to Anthroposophy Based on Rudolf Steiners Studies of The Senses. Translation by Jakob M. Cornelis. Stroud : Hawthorn
- Steiner, Marie, Esoteric Studies, ISBN 0-904693-58-9
- Steiner, Rudolf, 1861-1925.
- Intuitive Thinking As a Spiritual Path : A Philosophy of Freedom; Steiner Books, 1893/1995. ISBN 0-88010-385-X
- Christianity as Mystical Fact"; trans. by Andrew Welburn. Hudson, N.Y. : Anthroposophic Press, 1902/c1997.
- Theosophy: An Introduction to the Spiritual Processes in Human Life and in the Cosmos, Rudolf Steiner Press, 1904/2005. ISBN 1-85584-131-2
- Cosmic Memory, Steiner Books, 1990.
- How to Know Higher Worlds : a modern path of initiation ; trans. by Christopher Bamford. Hudson, N.Y. : Anthroposophic Press, 1904/c1994.ISBN 0-88010-508-9
- An Outline of Esoteric Science; trans. by Catherine E. Creeger. Hudson, NY : Anthroposophic Press, 1910/c1997.
- Verses and Meditations. Rudolf Steiner Press, 2005. ISBN 1-85584-197-5
- Esoteric Development : selected lectures and writings. (Rev. ed.) Great Barrington, MA : SteinerBooks, c2003.
- A Western Approach to Reincarnation and Karma : selected lectures and writings ; ed. and intr. by René Querido. Hudson, NY : Anthroposophic Press, c1997.
- According to Matthew : the gospel of Christ's humanity : lectures by Rudolf Steiner; trans. by C. E. Creeger ; intr. by R. Smoley. Great Barrington, MA : Anthroposophic Press, c2003.
- Evil: selected lectures by Rudolf Steiner ; all lectures trans. or rev. by Matthew Barton ; [comp. and ed. by Michael Kalisch]. London : R. Steiner, 1997.
- Founding a Science of The Spirit : fourteen lectures given in Stuttgart between 22 August and 4 September 1906 [New ed.]; trans. revised by Matthew Barton. London : Rudolf Steiner Press, 1999.
- Towards Social Renewal : rethinking the basis of society [4th ed]; trans. by Matthew Barton. London : Rudolf Steiner Press, 1999.
- The Gospel of St. John and its Relation to the other Gospels (GA112), available online.
- The Apocalypse of St. John, Kessinger Publishing, 2005.
- Steiner, Rudolf and Welburn, Andrew, The Mysteries: Rudolf Steiner's Writings on Spiritual Initation, ISBN 0-86315-243-0
- Suchantke, Andreas, Eco-Geography. Lindisfarne Press. ISBN 0-940262-99-1.
- Swassjan, Karen, The Ultimate Communion of Mankind: A Celebration of Rudolf Steiner's Book "The Philosophy of Freedom", ISBN 0-904693-82-1
- Treichler, Rudolf, Soulways. Hawthorn Press. ISBN 1-869890-13-2
- Verhulst, Jos, Developmental Dynamics in Humans and Other Primates. Adonis Press, 2005. ISBN 0-932776-29-9
- Warren, Edward, Freedom as Spiritual Activity, ISBN 0-904693-60-0
- Welburn, Andrew J. (2004) Rudolf Steiner's Philosophy and the Crisis of Contemporary Thought. Edinburgh: Floris.
- Wilkes, John, Flowforms: The Rhythmic Power of Water. Floris Books. ISBN 0-86315-392-5
- World-wide Anthroposophic Society (Goetheanum)
- Anthroposophical Society in America
- Anthroposophical Society in Great Britain
- Anthroposophical Society in Bulgaria
- Anthroposophical Initiatives in India
- Sociedade Antroposófica no Brasil
- Rudolf Steiner Archive (online works, see especially the Books section)
- Steiner Books and Anthroposophic Press (USA)
- Hawthorn Press (GB)
- The Anthroposophy Network
- Anthroposophical links in Great Britain
- anthromedia.net - Anthroposophy Internet Portal
- Anthroposophy in Words and Images (English and Swedish)
- Article: Rudolf Steiner introduced by Owen Barfield.
- Study by the National Cancer Institute on mistletoe's use for treating cancer
This page uses Creative Commons Licensed content from Wikipedia.
The UN Food and Agriculture Organisation (FAO) released their monthly index of food prices yesterday (January 5, 2011) which showed that the index reached a record high in December 2010 “surpassing the levels of 2008 when the cost of food sparked riots around the world, and prompting warnings of prices being in “danger territory”” (Source). There are several reasons why food prices will move even higher – the catastrophic floods in Northern Queensland being among them. The rising food prices are once again leading to calls for interest rates to rise in order to minimise the inflationary consequences. That motivated me to write Part 2 of my series on inflation – in this case supply-side motivated inflations. In Part 1 of the series – Modern monetary theory and inflation – Part 1 – I concentrated on demand-side origins.
In their recently released 2010 Report – The State of Food Insecurity in the World – the FAO estimated that:
After increasing from 2006 to 2009 due to high food prices and the global economic crisis, both the number and proportion of hungry people have declined in 2010 as the global economy recovers and food prices remain below their peak levels. But hunger remains higher than before the crises …
The following graph shows the FAO Food Price Index up to December 2010 (from January 1990).
Why is that graph important? Because the rising food prices will reverse the recent trends which saw a modest decline in the number of undernourished people in the world in the last year as a result of falling prices since 2008 (due mostly to the recession). All the analysis in the FAO report – which is mildly positive in terms of trends – is predicated on the drop in food prices that accompanied the recession and some good harvests in developing nations. I expect that positive slant will not be realised in practice and that hunger will rise in the coming year.
Further, in the detailed report you will find nothing to support the view that people are poor because governments have run budget deficits. In fact, the opposite would be suggested given the importance of providing quality education, health care and public infrastructure.
One set of facts which should always be borne in mind when considering whether current neo-liberal policy approaches are working is that in developed countries there were 13 per cent of the total population undernourished (2005-07) whereas in developing countries this proportion was 16 per cent. In other words, the “wealthy” world is not all that much better at feeding its poor.
Anyway, back to the main theme.
In the UK Guardian article (January 5, 2011) – Inflation threat divides economists – we read that so-called experts are “split on question of whether surge in oil and food prices will result in higher inflation and interest rates”.
The optimists (those who do not think there will be an inflationary threat) point to the possible ephemeral nature of the rising food prices and the recent “pick-up in oil prices”.
The consumer price indexes that central banks tend to use trim out ephemeral price rises to bring out underlying price movements.
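To make the "trimming" idea concrete, here is a minimal sketch of how a weighted trimmed-mean measure of underlying inflation can be computed. The function name, the component price changes, the expenditure weights and the 15 per cent trim are all invented for illustration – they are not ABS or RBA figures – but the mechanics are the same: an extreme, possibly flood-driven, price spike in one component is discarded before the average is taken.

```python
# Illustrative sketch only: a weighted trimmed-mean "underlying" inflation
# measure that strips out extreme price moves (e.g. a flood-driven vegetable
# price spike). All component changes and weights below are invented.

def trimmed_mean_inflation(changes, weights, trim=0.15):
    """Drop the top and bottom `trim` share of the expenditure-weighted
    distribution of price changes and average what remains."""
    items = sorted(zip(changes, weights))            # order by price change
    total = sum(w for _, w in items)
    lower, upper = trim * total, (1 - trim) * total
    kept_sum = kept_weight = 0.0
    cum = 0.0
    for change, w in items:
        start, end = cum, cum + w
        cum = end
        overlap = max(0.0, min(end, upper) - max(start, lower))
        kept_sum += change * overlap
        kept_weight += overlap
    return kept_sum / kept_weight

quarterly_changes = [12.0, 3.0, 1.2, 0.9, 0.7, 0.6, 0.4, -0.5]   # % per quarter
expenditure_weights = [0.05, 0.05, 0.15, 0.2, 0.2, 0.15, 0.1, 0.1]

headline = sum(c * w for c, w in zip(quarterly_changes, expenditure_weights))
underlying = trimmed_mean_inflation(quarterly_changes, expenditure_weights)
print(f"headline: {headline:.2f}%, trimmed mean: {underlying:.2f}%")
```

Running the sketch, the headline (fully weighted) change comes out well above the trimmed mean, which is exactly why central banks lean on such measures when judging whether a supply shock warrants a policy response.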
I am less optimistic given the energy demand from the growing Indian and Chinese economies. I last conjectured on that issue in this blog – Be careful what we wish for ….
The head of the UK National Institute for Economic and Social Research told the Guardian that rising energy demand from Asia could “trigger a sustained rise in inflation that would need to be quelled by higher interest rates”. They also quoted one of UK’s extreme monetarists (still pushing that line) who considered the low interest rates in the UK in response to the crisis to be a “dangerous nonchalance”. He is just rehearsing his recurring theme and the informational content of that theme is low.
The Guardian article then presented the opposite case which I found interesting:
Some economists argue we must forget about raising interest rates and live with higher inflation imported from China and the east. If UK inflation were the result of excess demand in the UK then higher base rates could usefully dampen consumption and moderate inflation. If inflationary prices are driven by excess demand in the east or shortages in Australian wheat – factors beyond the control of UK policymakers – then why choke off our nascent economic revival with higher rates, they say.
So the tension in the policy debate is whether to deal with a supply-side price surge (if it turns out to be significant) via demand-side policies (tightening interest rates and fiscal austerity). Fiscal austerity in this context is considered essential by those who believe in the primacy of monetary policy for disciplining the inflation process rather than in the current sense that public debt levels are too high.
The reference to Australian floods is also not insignificant. The following pictures shows two NASA satellite images of the area in Northern Queensland around Rockhampton where the floods are very bad at present. The top image was taken on January 4, 2011 while the bottom (of the same area) was taken on December 14, 2010. They allow you to gauge how bad the situation is – this is a very big land area.
You can see more images and analysis HERE.
While the human and social elements of the floods will be very significant, economists have focused on their impact on real GDP growth and inflation – and given the current neo-liberal mindset – monetary policy.
The floods in Queensland are now – courtesy of south-flowing rivers – heading towards my state (NSW), and economists have been predicting their economic impact. The Sydney Morning Herald article – Economists warn of dire short-term repercussions (January 6, 2011) – said the floods are:
… expected to ravage the Australian economy this year by hitting growth and driving up inflation … coal exports worth $100 million a day were being lost or postponed due to the floods … [economists] … expected the crisis to wipe about 0.3 per cent off gross domestic product immediately … prices were expected to rise because of the effect on agriculture … [there would be a] … 0.75 per cent rise in inflation …
The growth impact should only be temporary because the public spending involved in the rebuilding (supplemented by the insurance payouts, if the insurance companies play it fairly and do not, as in other crises, try to short-change their policy holders with all sorts of loopholes and technicalities) will stimulate demand and hence economic activity.
The disruption to coal mining (many mines are flooded or closed because of drainage issues) will also impact on world coal prices given that “Queensland supplies almost a quarter of the world’s seaborne coal exports, and about half of all exports of coking coal”.
So the public discussion about the floods among economists has now turned to their "inflationary" impact. Energy prices are already rising and food prices will definitely rise over the next few months given that the flooded area is a very significant supplier of many vegetable lines (especially in Winter as the southern farms move to cooler crops).
The most asinine comment I heard yesterday was from our Prime Minister, who said that while the Federal Government was going to provide as much fiscal support to the flood-affected communities and businesses as needed, they would have to make "savings" elsewhere to make sure they kept true to their pledge to get the budget back into surplus next year.
She clearly hasn’t been briefed by her advisers that if the expected loss of real output (growth declines) then the budget will be heading in the opposite direction courtesy of the automatic stabilisers (falling tax revenue, increased welfare outlays) and she won’t have a hope in hell of getting the budget into surplus.
The deeper question is why should they target a surplus at this point in the business cycle anyway given that we have at least 12.5 per cent of our willing labour resources remaining idle (unemployed or underemployed) and even a higher number outside the labour force who want to work. That is another question which I will address in another blog another day.
Economists were being wheeled out by the media in the last 24 hours and many were arguing that interest rates would have to rise to combat the inflationary impact.
So they were advocating a supply-side event which generates costs pressures being dealt with through a demand-side measure (which intends to squeeze purchasing power and reduce aggregate demand).
In this blog – Modern monetary theory and inflation – Part 1 – I outlined the demand-side theory of inflation.
Economists distinguish between cost-push and demand-pull inflation although the demarcation between the two “states” is not as clear as one might think. This is a good site for background material.
Demand-pull inflation refers to the situation where prices start accelerating continuously because nominal aggregate demand growth outstrips the capacity of the economy to respond by expanding real output. Remember Gross Domestic Product (GDP) is the market value of final goods and services produced in some period.
So GDP = P.Y where P is the aggregate price level and Y is real output. Aggregate demand (expenditure) is always equal (by national accounting) to GDP or P.Y. So if there is growth in demand that cannot be met by growth in Y then P has to rise.
Keynes outlined the notion of an inflationary gap in his famous article – J.M. Keynes (1940) How to Pay for the War: A radical plan for the Chancellor of the Exchequer. London: Macmillan.
While this was in the context of war-time spending when faced by tight supply constraints (that is, a restricted ability to expand real output), the concept of the inflationary gap has been generalised to describe situations of excess demand (which I outlined above).
When there is excess capacity (supply potential) rising nominal aggregate demand growth will typically impact on real output growth first as firms fight for market share and access idle labour resources and unused capacity without facing rising input costs. As the economy nears full capacity the mix between real output growth and price rises becomes more likely to be biased toward price rises (depending on bottlenecks in specific areas of productive activity). At full capacity, GDP can only grow via inflation (that is, nominal values increase only).
Cost-push inflation (sometimes called "sellers' inflation") has a long tradition in the progressive literature (Marx, Kalecki, Lerner, Kaldor, Weintraub) although it is not exclusively a progressive theory. Milton Friedman considered that wage demands from trade unions were a major inflationary threat, although he ultimately considered central bank monetary policy to be the real problem in that it accommodated these wage demands by increasing the "money supply".
Cost-push inflation is an easy concept to understand and is generally explained in the context of "product markets" (where goods are sold) where firms have price-setting power. That is, the perfectly competitive model that pervades the mainstream economics textbooks, where firms have no market power and take the price set in the market, is abandoned, and instead firms set prices by applying some form of profit mark-up to costs.
Kalecki is notable in that he started his analysis assuming mark-up pricing as an attempt to develop economic theory that was based on how the real world actually operated.
The notion is pretty straightforward although there are many different versions. But generally, firms are considered to have target profit rates which they render operational by the mark-up on unit costs. Unit costs are driven largely by wage costs, productivity movements and raw material prices.
Trade union bargaining power was considered an important component of the capacity of workers to realise nominal wage gains and this power was considered to be pro-cyclical – that is, when the economy is operating at “high pressure” (high levels of capacity utilisation) workers are more able to succeed in gaining money wage gains.
In these models, unemployment is seen as disciplining the capacity of workers to gain wages growth – in line with Marx’s reserve army of unemployed idea.
Workers have various motivations depending on the theory but most accept that real wages growth (increasing the capacity of the nominal or money wage to command real goods and services) is a primary aim of most wage bargaining.
So we get a “battle of the mark-ups” operating – workers try to get more real output for themselves by pushing for higher money wages and firms then resist the squeeze on their profits by passing on the rising cost – that is, increasing prices with the mark-up constant.
At that point there is no inflation – just a once-off rise in prices and no change to the distribution of national income in real terms.
However, if the economy is working at high pressure, workers may resist the attempt by capital to keep their real wage constant (or falling) and hence they may respond to the increasing prices by making further nominal wage demands. If their bargaining power is strong (which from the firm’s perspective is usually in terms of how much damage the workers can inflict via industrial action on output and hence profits) then they are likely to be successful.
At that point there is still no inflation. But if firms are not willing to absorb the squeeze on their real output claims then they will raise prices again and the beginnings of a wage-price spiral begins. If this process continues then you have a cost-push inflation.
The causality may come from firms pushing for a higher mark-up and trying to squeeze workers’ real wages. In this case, we might refer to the unfolding inflationary process as a price-wage spiral.
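Before moving on, the spiral mechanics just described can be sketched as a toy simulation. This is purely illustrative and not drawn from any of the literature cited here; the bargaining-power, markup and target-real-wage parameters are invented for the example:

```python
# Toy wage-price spiral: workers chase a target real wage, firms defend a constant markup.
# All parameters are illustrative assumptions, not estimates of any real economy.

def simulate_spiral(periods=10, target_real_wage=1.05, bargaining_power=0.8,
                    markup=0.2, productivity=1.0):
    wage = 1.0
    price = (wage / productivity) * (1 + markup)
    for t in range(periods):
        real_wage = wage / price
        # Workers claw back part of the gap between their target and actual real wage.
        wage *= 1 + bargaining_power * max(0.0, target_real_wage - real_wage)
        # Firms pass unit labour costs on with the markup unchanged.
        new_price = (wage / productivity) * (1 + markup)
        inflation = new_price / price - 1
        price = new_price
        print(f"t={t}: wage={wage:.3f} price={price:.3f} inflation={inflation:.2%}")

simulate_spiral()
```

If the workers' target real wage is set at or below what the constant markup permits (about 1/(1 + markup) here), wages stop rising and the price level settles, which mirrors the point in the text that inflation only persists while the distributional claims remain incompatible.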
Conflict theory of inflation
There was a series of articles in Marxism Today in 1974 which advanced the notion of inflation being the result of a distributional conflict between workers and capital. One such article by Pat Devine (1974) 'Inflation and Marxist Theory', Marxism Today, March, 70–92 is worth reading if you can find it. As an aside, you can view a limited archive of Marxism Today since 1977 which is a very valuable resource.
Another influential book at the time was Robert Rowthorn’s 1980 book – Capitalism, Conflict and Inflation (Lawrence and Wishart).
The conflict theory derives directly from cost-push theories referred to above. Conflict theory recognises that the money supply is endogenous (as opposed to the Monetarist’s Quantity Theory of Money which assumes, wrongly, that the money supply is fixed).
In this world, firms and unions have some degree of market power (that is, they can influence prices and wage outcomes) without much correspondence to the state of the economy. They both desire some targeted real output share.
In each period, the economy produces a given real output which is shared between the groups with distributional claims. If the desired real shares of the workers and bosses are consistent with the available real output produced, then there is no incompatibility and there will be no inflationary pressures.
But when the sum of the distributional claims (expressed in nominal terms – money wage demands and mark-ups) is greater than the real output available, then inflation can occur via the wage-price or price-wage spiral noted above.
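Stated compactly, in my own notation rather than the original articles': write the workers' desired share of real output as ω* and the firms' desired profit share as π*. Then:

```latex
\omega^* + \pi^* \le 1 \quad\Rightarrow\quad \text{claims are compatible; no inflationary pressure}
\omega^* + \pi^* > 1 \quad\Rightarrow\quad \text{claims are incompatible; the wage and price adjustments above keep the price level rising}
```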
The wage-price spiral might also become a wage-wage-price spiral as one section of the workforce seeks to restore relativities after another group of workers succeed in their wage demands.
That is, the conflict over available real output promotes inflation. Various dimensions can then be studied – the extent to which different wage contracts overlap and are adjusted, the rate of growth of productivity (which provides "room" for the wage demands to be accommodated without squeezing the profit margin), the state of capacity utilisation (which disciplines the capacity of the firms to pass on increasing costs), the rate of unemployment (which disciplines the capacity of workers to push for nominal wages growth).
Now here is the complication. Conflict theories of inflation note that for this distributional conflict to become a full-blown inflation the central bank has to ultimately “accommodate” the conflict. What does that mean?
If the central bank pushes up interest rates and makes credit more expensive, firms will be less able to pay the higher money wages (the conceptualisation is that firms access credit to “finance” their working capital needs in advance of realisation via sales). Production becomes more difficult and workers (in weaker bargaining positions) are laid off.
The rising unemployment, in turn, eventually discourages the workers from pursuing their on-going demand for wage increases and ultimately the inflationary process is choked off.
However, if the central bank doesn’t tighten monetary policy and the fiscal authorities do not increase taxes or cut public spending then the incompatible distributional claims will play out and inflation becomes inevitable.
Note I have not considered in any detail in this blog – the open economy interpretations of the conflict theory of inflation which have been developed by several Latin American researchers. That will be in Part 3 in this series which I will write another day.
There are also strong alignments between the conflict theory of inflation and Minsky's financial instability notion. Both consider the dynamics are variable across the business cycle and so when economic activity is weak, both the distributional claims and the attitude of banks to lending will be benign. As economic growth gathers pace, the claims increase and the risk-averseness of banks declines and more risky loans are made.
Pat Devine’s article (noted above) also introduced the notion that inflation was a structural construct. He argued that the increased bargaining power of workers (that accompanied the long period of full employment in the Post Second World War period) and the declining productivity in the early 1970s imparted a structural bias towards inflation which manifested in the inflation breakout in the mid-1970s which he says “ended the golden age”.
This notion implicates Keynesian-style approaches to full employment – and says the conduct of fiscal policy which squarely aimed to maintain full employment and high growth rates provided the structure for the biases to emerge. Then with the collapse of the Bretton Woods system of convertible currencies and fixed exchange rates (which provided deflationary forces to economies that had strong domestic demand growth) these structural biases came to the fore.
Rowthorn says that the mid-1970s crisis – which marked the end of the Keynesian period and the start of the neo-liberal period – was associated with a rising inflation but also an on-going profit squeeze due to declining productivity and increasing external competition for market share. The profit squeeze led to firms reducing their rate of investment (which reduced aggregate demand growth) which combined with harsh contractions in monetary and fiscal policy created the stagflation that bedeviled the world in the second half of the 1970s.
The resolution to the “structural bias” was the policy-motivated attack on the working class bargaining power – both in the form of the persistently high unemployment and specific labour relations legislation. The subsequent redistribution of real income towards profits reduced the inflation spiral as workers were unable to pursue real wages growth and productivity growth outstripped real wages growth.
In one of my early articles (1987) – in the Australian Economic Papers – The NAIRU, Structural Imbalance and the Macroequilibrium Unemployment Rate – I developed the notion of a macroequilibrium unemployment rate. This came from my PhD research on inflation and natural rates. It was the first Australian study of hysteresis and one of the first international studies.
The motivation was clearly that the policy orientation in the UK, the US and in Australia was and remains based on the view that inflation is the basic constraint on expansion (and fuller employment).
The popular belief is that fiscal and monetary policy can no longer attain unemployment rates common in the sixties without ever-accelerating inflation. The natural rate of unemployment (NRU), which is the rate of unemployment consistent with stable inflation, is considered to have risen over time.
The non-accelerating inflation rate of unemployment (NAIRU) is a less rigorous version of the NRU but concurs that a particular, cyclically stable unemployment rate coincides with stable inflation. Labour force compositional changes, government welfare payments, trade-union wage goals among other “structural” influences were all implicated in the rising estimates of the inflationary constraint.
The NAIRU achieved such rapid status among the profession as a policy-conditioning concept that I thought it warranted close scrutiny.
My basic proposition was that persistently weak aggregate demand creates a labour market, which mimics features conventionally associated with structural problems.
The specific hypothesis I examined was whether the equilibrium unemployment rate is a direct function of the actual unemployment rate and hence the business cycle. That is the hysteresis effect.
By developing an understanding of the way the labour market adjusts to swings in aggregate demand and generates hysteresis, I provided a strong conceptual and empirical basis for advocating counter-stabilising fiscal policy (aggregate policy expansion in a downturn).
So while it might look like the degree of slack necessary to control inflation may have increased, the underlying cyclical labour market processes that are at work in a downturn can be exploited by appropriate demand policies to reduce the steady state unemployment rate.
In that work I outlined a conceptual unemployment rate, which is associated with price stability, in that it temporarily constrains the wage demands of the employed and balances the competing distributional claims on output.
I introduced a new term, the macroequilibrium unemployment rate (the MRU), which I noted was, importantly, sensitive to the cycle due to the impact of the cyclical labour market adjustments on the ability of the employed to achieve their wage demands. In this sense, the MRU is distinguished from the conventional steady state unemployment rate, the NAIRU, which is not conceived to be cyclically variable.
What I wanted to show was that there was an interaction between the actual and MRU which would establish the presence of the hysteresis effect.
To be clear – the significance of hysteresis, if it exists, is that the unemployment rate associated with stable prices, at any point in time should not be conceived of as a rigid non-inflationary constraint on expansionary macro policy.
The equilibrium rate itself can be reduced by policies, which reduce the actual unemployment rate. That is why I chose to use the term MRU, as the non-inflationary unemployment rate, as distinct from the NAIRU, to highlight the hysteresis mechanism.
The idea is that structural imbalance increases in a recession due to the cyclical labour market adjustments commonly observed in downturns, and decreases at higher levels of demand as the adjustments are reversed. Structural imbalance refers to the inability of the actual unemployed to present themselves as an effective excess supply.
The non-wage labour market adjustments that accompany a low-pressure economy, which could lead to hysteresis, are well documented. Training opportunities are provided with entry-level jobs and so the (average) skill of the labour force declines as vacancies fall. New entrants are denied relevant skills (and the socialisation associated with stable work patterns) and redundant workers face skill obsolescence. Both groups need jobs in order to update and/or acquire relevant skills. Skill (experience) upgrading also occurs through mobility, which is restricted during a downturn.
So why would there be some unemployment rate that is consistent with stable inflation? Remember this is a non-Job Guarantee world. The introduction of a JG would change things considerably (more favourably).
There is an extensive literature that links the concept of structural imbalance to wage and price inflation. A non-inflationary unemployment rate can be defined which is sensitive to the cycle.
My work at the time was contributing to the view that inflation was the product of incompatible distributional claims on available income. So when nominal aggregate demand is growing too quickly, something has to give in real terms for that spending growth to be compatible with the real capacity of the economy to absorb the spending.
Unemployment can temporarily balance the conflicting demands of labour and capital by disciplining the aspirations of labour so that they are compatible with the profitability requirements of capital. That was Kalecki’s argument which I considered in the blog – Michal Kalecki – The Political Aspects of Full Employment.
A lull in the wage-price spiral could thus be termed a macroequilibrium state in the limited sense that inflation is stable. The implied unemployment rate under this concept of inflation is termed in this paper the MRU and has no connotations of voluntary maximising individual behaviour which underpins the NAIRU concept that is at the core of mainstream macroeconomics.
Wage demands are thus inversely related to the actual number of unemployed who are potential substitutes for those currently employed.
Increasing structural imbalance (via cyclical non-wage labour market adjustment) drives a wedge between potential and actual excess labour supply, and to some degree, insulates the wage demands of the employed from the cycle. The more rapid the cyclical adjustment, the higher is the unemployment rate associated with price stability.
Stimulating job growth can decrease the wedge because the unemployed develop new and relevant skills and experience. These upgrading effects provide an opportunity for real growth to occur as the cycle reduces the MRU.
Why will firms employ those without skills? An important reason is that hiring standards drop as the upturn begins. Rather than disturb wage structures firms offer entry-level jobs as training positions.
It is difficult to associate wage demands (in excess of current money wages) with the workforce. While the increased training opportunities increase the threat to those who were insulated in the recession, this is offset to some degree by the reduced probability of becoming unemployed.
The subsequent empirical work I did and which has since been built on by others has blown the NAIRU concept out of the water. Please read my blog – The dreaded NAIRU is still about! – for more discussion on this point.
At the time, and since, a lot of progressives have objected to the idea that there is some steady-state unemployment rate that disciplines inflation. They claim it sounds like a NAIRU and is a concession to the mainstream paradigm.
Rowthorn clearly understood that at some level of unemployment – which emerges when the government tightens its policy settings – inflation stabilises. This sounds like a NAIRU.
In my 1987 article I wrote:
Inflation results from incompatible distributional claims on available income, unemployment can temporarily balance the conflicting demands of labour and capital by disciplining the aspirations of labour so that they are compatible with the profitability requirements of capital … The wage-price spiral lull could be termed a macroequilibrium state in the limited sense that inflation is stable. The implied unemployment rate under this concept of inflation is termed in this paper the MRU and has no connotations of voluntary maximising individual behaviour which underpins the NAIRU concept …
That is a crucial distinction – it is no surprise in a capitalist system that if you create enough unemployment you will suppress wage demands given that workers, by definition, have to work to live.
But you can underpin this notion of equilibrium without recourse to the individualistic and optimising behaviour assumed by the mainstream.
Raw material price rises
Raw material shocks can also trigger a cost-push inflation. They can be imported or domestically-sourced. I will devote a special blog to imported raw material shocks in the future.
But the essence is that an imported resource price shock amounts to a loss of real income for the nation in question. This can have significant distributional implications (as the OPEC oil price shocks in the 1970s had). How the government handles such a shock is critical.
The dynamic is that the higher price of imported resources reduces the real income that is available for distribution domestically. Something has to give. The loss has to be borne by one or other of the claimants, or shared between them. If the workers resist the lower real wages or if bosses do not accept that some squeeze on their profit margin is inevitable then a wage-price/price-wage spiral can emerge.
The government can employ a number of strategies when faced with this dynamic. It can maintain the existing nominal demand growth which would be very likely to reinforce the spiral.
Alternatively, it can use a combination of strategies to discipline the inflation process including the tightening of fiscal and monetary policy to create unemployment (the NAIRU strategy); the development of consensual incomes policies and/or the imposition of wage-price guidelines (without consensus).
Progressives argued that the best way to deal with this likelihood is via an incomes policy, saying that the NAIRU strategy is very costly in terms of real output losses.
They consider incomes policies can be developed which mediate the claims on the real income available to render them compatible over time. I will write a separate blog about incomes policies as I did a lot of work on them in the late 1980s and into the 1990s.
I also wrote some papers in the 1980s on having wage-price rules driven by productivity growth in certain sectors (for example, in the so-called Scandinavian Model (SM) of inflation).
This model, originally developed for fixed exchange rates, dichotomises the economy into a competitive sector (C-sector) and a sheltered sector (S-sector). The C-sector produces products, which are traded on world markets, and its prices follow the general movements in world prices. The C-sector serves as the leader in wage settlements. The S-sector does not trade its goods externally.
Under fixed exchange rates, the C-sector maintains price competitiveness if the growth in money wages in its sector is equal to the rate of change in its labour productivity (assumed to be superior to S-sector productivity) plus the growth in prices of foreign goods. Price inflation in the C-sector is equal to the foreign inflation rate if the above rule is applied. The wage norm established in the C-sector spills over into wages growth throughout the economy.
The S-sector inflation rate thus equals the wage norm less its own productivity growth rate. Hence, aggregate price inflation is equal to the world inflation rate plus the difference between the productivity growth rates in the C- and S-sectors weighted by the S-sector share in total output. The domestic inflation rate can be higher than the rate of growth in foreign prices without damaging competitiveness, as long as the rate of C-sector inflation is less than or equal to the world inflation rate.
In equilibrium, nominal labour costs in the C-sector will grow at a rate equal to the room (the sum of the growth in world prices and C-sector productivity). Where non-wage costs are positive (taxes, social security and other benefits extracted from the employers), nominal wages would have to grow at a lower rate. The long-run tendency is for nominal wages to absorb the room provided. However, in the short run, labour costs can diverge from the permitted growth path. This disequilibrium must emanate from domestic factors.
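The prose above maps onto a small set of relationships. The following is my reconstruction of the usual textbook presentation of the model, with dots denoting growth rates, w the wage norm, q productivity, p^W world prices, and α the S-sector share of output; treat the symbols as illustrative rather than as quotations from the model's authors:

```latex
% The "room": the wage norm set in the competitive (C) sector
\dot{w} = \dot{p}^{W} + \dot{q}_{C}
% C-sector inflation when the norm is followed equals world inflation:
\dot{p}_{C} = \dot{w} - \dot{q}_{C} = \dot{p}^{W}
% S-sector inflation is the same wage norm less S-sector productivity growth:
\dot{p}_{S} = \dot{w} - \dot{q}_{S}
% Aggregate inflation, weighting the two sectors by output shares:
\dot{p} = (1-\alpha)\,\dot{p}_{C} + \alpha\,\dot{p}_{S} = \dot{p}^{W} + \alpha\left(\dot{q}_{C} - \dot{q}_{S}\right)
```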
The main features of the SM can be summarised as follows:
- The domestic currency price of C-sector output is exogenously determined by world market prices and the exchange rate.
- The surplus available for distribution between profits and wages in the C-sector is thus determined by the world inflation rate, the exchange rate and the productivity performance of industries in the C-sector.
- The wage outcome in the C-sector is spread to the S-sector industries either by design (solidarity) or through competition.
- The price of output in the S-sector is determined (usually by a mark-up) by the unit labour costs in that sector. The wage outcome in the C-sector and the productivity performance in the S-sector determine unit labour costs.
An incomes policy would establish wage guidelines which would set national wages growth according to trends in world prices (adjusted for exchange rate changes) and productivity in the C-sector. This would help to maintain a stable level of profits in the C-sector.
Whether this was an equilibrium level depends on the distribution of factor shares prevailing at the time the guidelines were first applied.
Clearly, the outcomes could be different from those suggested by the model if a short-run adjustment in factor shares was required. Once a normal share of profits was achieved the guidelines could be enforced to maintain this distribution.
A major criticism of the SM as a general theory of inflation is that it ignores the demand side. Uncoordinated collective bargaining and/or significant growth in non-wage components of labour costs may push costs above the permitted path. Where domestic pressures create divergences from the equilibrium path of nominal wage and costs there is some rationale for pursuing a consensus based incomes policy.
An incomes policy, by minimising domestic cost fluctuations faced by the exposed sector, could reduce the possibility of a C-sector profit squeeze, help maintain C-sector competitiveness, and avoid employment losses. Significant contributions to the general cost level and hence prices can originate from the actions by government. Payroll taxation, various government charges and the like may in fact be more detrimental to the exposed sector than increased wage demands from the labour market.
Although the SM was originally developed for fixed exchange rates, it can accommodate flexible exchange rates. Exchange rate movements can compensate for world price changes and local price rises. The domestic price level can be completely insulated from the world inflation rate if the exchange rate continuously appreciates (at a rate equal to the sum of the world inflation rate and C-sector productivity growth).
Similarly, if local price rises occur, a stable domestic inflation rate can still be maintained if a corresponding decrease in C-sector prices occur. An appreciating exchange rate discounts the foreign price in domestic currency terms.
What about terms of trade changes? Terms of trade changes, which in the SM justify wage rises, also (in practice) stimulate sympathetic exchange rate changes. This combination locks the economy into an uncompetitive bind because of the relative fixity of nominal wages. Unless the exchange rate depreciates far enough to offset both the price fall and the wage rise, profitability in the C-sector will be squeezed.
It was considered appropriate to ameliorate this problem through an incomes policy. Such a policy could be designed to prevent the destabilising wage movements, which respond to terms of trade improvements. In other words, wage bargaining, consistent with the mechanisms defined by the SM may be detrimental to both the domestic inflation target and the competitiveness of the C-sector, and may need to be supplemented by a formal incomes policy to restore or retain consistency.
I remind all progressives of what Rowthorn (a Marxist economist) noted:
…trade unions cannot afford to be too successful …
Which means that in a capitalist system which is driven by the rate of profit, workers can create unemployment by being too successful in their wage demands.
Modern Monetary Theory policy considerations
As I explained in Part 1 of this series – Modern monetary theory and inflation – Part 1 – a cost-push inflation requires certain aggregate demand conditions to continue for its fuel. In this regard, the concept of a supply-side inflation blurs with the demand-pull inflation although their originating forces might be quite different.
An imported raw material shock just means that real income is lower and will not cause inflation unless it triggers an on-going distributional conflict. That conflict needs “oxygen” in the form of on-going economic activity in sectors where the spiral is robust.
The preferred approach is to use employment buffer stocks in conjunction with fiscal policy adjustments to allow the available real income to be rendered compatible with the existing claims.
Modern Monetary Theory rejects the NAIRU approach (the current orthodoxy) – that is, the use of unemployment buffer stocks – where inflation is controlled using tight monetary and fiscal policy, which leads to a buffer stock of unemployment. This is a very costly and unreliable target for policy makers to pursue as a means for inflation proofing.
The employment buffer stock approach rests on the government exploiting the fiscal power that is embodied in a fiat-currency issuing national government to introduce full employment. The Job Guarantee (JG) model which is central to MMT is an example of an employment buffer stock policy approach.
Under a Job Guarantee, the inflation anchor is provided in the form of a fixed wage (price) employment guarantee.
Full employment requires that there are enough jobs created in the economy to absorb the available labour supply. Focusing on some politically acceptable (though perhaps high) unemployment rate is incompatible with sustained full employment.
In MMT, a superior use of the labour slack necessary to generate price stability is to implement an employment program for the otherwise unemployed as an activity floor in the real sector, which both anchors the general price level to the price of employed labour of this (currently unemployed) buffer and can produce useful output with positive supply side effects.
The employment buffer stock approach (the JG) exploits the imperfect competition introduced by fiat (flexible exchange rate) currency which provides the issuing government with pricing power and frees it of nominal financial constraints.
The JG approach represents a break in paradigm from both traditional Keynesian policies and the NAIRU-buffer stock approach. The difference is a shift from what can be categorised as spending on a quantity rule to spending on a price rule.
I noted interest in this concept today among the comments and I will write more about it in due course. But the point is that under a spending rule (which is the current policy approach), the government budgets a quantity of dollars to be spent at prevailing market prices.
In contrast, under a price rule (the JG option) the government offers a fixed wage to anyone willing and able to work, and thereby lets market forces determine the total quantity of government spending. This is what I call spending based on a price rule.
How does the government decide that net public spending is just right? Answer: the JG is an automatic stabiliser. The last worker that comes into the JG office to accept a wage tells you the limits of the program and the size of the budget commitment.
This becomes more complicated when there are other programs being offered. But given the automatic stabiliser nature of the JG, the government at least knows exactly how much it has to outlay each period to maintain (loose) full employment. What it does in addition to this depends on its policy ambitions and the degree of excess capacity in the non-JG sectors of the economy.
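A minimal sketch of that "price rule" logic follows. The labour force, private employment figures and JG wage below are made-up numbers used only to show that the outlay is whatever residual employment the private sector leaves, valued at the fixed JG wage:

```python
# Illustrative only: the Job Guarantee outlay as an automatic stabiliser.
# The labour force, private employment path and JG wage are invented numbers.

JG_WAGE = 40_000  # fixed annual wage offer: the "price rule"

def jg_outlay(labour_force, private_employment):
    """Government hires everyone the private sector does not, at the fixed JG wage."""
    jg_workers = max(0, labour_force - private_employment)
    return jg_workers, jg_workers * JG_WAGE

for private_jobs in (9_500_000, 9_200_000, 9_800_000):  # boom, slump, stronger boom
    workers, outlay = jg_outlay(labour_force=10_000_000, private_employment=private_jobs)
    print(f"private jobs {private_jobs:,}: JG pool {workers:,}, outlay ${outlay:,.0f}")
```

The outlay expands automatically in the slump and contracts in the boom, which is the automatic stabiliser property described above.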
Many economists who are sympathetic to the goals of full employment are sceptical of the JG approach because they fear it will make inflation impossible to control.
However, if the government is buying a resource with zero market bid (the JG workers) and moving resources from the inflating sectors to the fixed price sector then inflation control is possible – no matter the origin.
Some people have argued that the JG could be offered in conjunction with an incomes policy if the implied JG-pool that is required to resolve the inflation spiral is too large.
This is entirely possible if you can devise an effective incomes policy. It is unnecessary for inflation control once the JG is in place but could reduce the size of the shift in resources between the private economy and the JG pool should that be considered problematic.
My main concern about the rising prices at present that are of supply-side origin relate to the FAO issues raised at the outset. The number (and proportion) of people in hunger will rise and that should be a government policy priority.
It certainly will not be a priority as long as governments continue headlong into fiscal austerity.
Further, under current policy approaches based on the NAIRU, if the central banks use demand-side policies to deal with a supply-side motivated problem the costs will be very high. The only way that demand-side policies can be used to good effect when there is a supply-side motivated inflation is when there is an employment buffer stock system in place.
Total aside – be careful when builders are around
We are currently getting some work done on our house to make it more functional. So walls are getting torn down and all that sort of thing. When I get home at night I am interested in seeing the progress and so far so good.
Not so for one person in Pittsburgh – see story.
So next time there are builders in your street make sure your house is either scheduled for work or someone else’s is!
That is (more) than enough for today!
The first text editors were line editors oriented to typewriter-style terminals, and they did not provide a window or screen-oriented display. They usually had very short commands (to minimize typing) that reproduced the current line. Among them was a command to print a selected section of the file on the typewriter (or printer) when necessary. An "edit cursor", an imaginary insertion point, could be moved by special commands that operated with line numbers or specific text strings (context). Later, the context strings were extended to regular expressions. To see the changes, the file needed to be printed on the printer. These "line-based text editors" were considered revolutionary improvements over keypunch machines. If typewriter-based terminals were not available, the editors were adapted to keypunch equipment; in that case, the user needed to punch the commands into a separate deck of cards and feed them into the computer in order to edit the file.
When computer terminals with video screens became available, screen-based text editors became common. One of the earliest "full screen" editors was O26 - which was written for the operator console of the CDC 6000 series machines in 1967. Another early full screen editor is vi. Written in the 1970s, vi is still a standard editor for Unix and Linux operating systems. The productivity of editing using full-screen editors (compared to the line-based editors) motivated many of the early purchases of video terminals.
Some text editors are small and simple, while others offer a broad and complex range of functionality. For example, Unix and Unix-like operating systems have the vi editor (or a variant), but many also include the Emacs editor. Microsoft Windows systems come with the very simple Notepad, though many people—especially programmers—prefer to use one of many other Windows text editors with more features. Under Apple Macintosh's classic Mac OS there was the native SimpleText, which was replaced by TextEdit. Some editors, such as WordStar, have dual operating modes allowing them to be either a text editor or a word processor.
Text editors geared for professional computer users place no limit on the size of the file being opened. In particular, they start quickly even when editing large files, and are capable of editing files that are too large to fit the computer's main memory. Simpler text editors often just read files into an array in RAM. On larger files this is a slow process, and very large files often do not fit.
The ability to read and write very large files is needed by many professional computer users. For example, system administrators may need to read long log files. Programmers may need to change large source code files, or examine unusually large texts, such as an entire dictionary placed in a single file.
Some text editors include specialized computer languages to customize the editor (programmable editors). For example, Emacs can be customized by programming in Lisp. These usually permit the editor to simulate the keystroke combinations and features of other editors, so that users do not have to learn the native command combinations.
Another important group of programmable editors use REXX as their scripting language. These editors permit entering both commands and REXX statements directly in the command line at the bottom of the screen (can be hidden and activated by a keystroke). These editors are usually referred to as "orthodox editors", and most representatives of this class are derivatives of Xedit, IBM's editor for VM/CMS. Among them are THE, Kedit, SlickEdit, X2, Uni-edit, UltraEdit, and Sedit. Some vi derivatives such as Vim also support folding as well as macro languages, and have a command line at the bottom for entering commands. They can be considered another branch of the family of orthodox editors.
Many text editors for software developers include source code syntax highlighting and automatic completion to make programs easier to read and write. Programming editors often permit one to select the name of a subprogram or variable, and then jump to its definition and back. Often an auxiliary utility like ctags is used to locate the definitions.
Most text editors provide methods to duplicate and move text within the file, or between files.
As with word processors, text editors will provide a way to undo and redo the last edit. Often—especially with older text editors—there is only one level of edit history remembered and successively issuing the undo command will only "toggle" the last change. Modern or more complex editors usually provide a multiple level history such that issuing the undo command repeatedly will revert the document to successively older edits. A separate redo command will cycle the edits "forward" toward the most recent changes. The number of changes remembered depends upon the editor and is often configurable by the user.
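As a generic illustration of how a multi-level edit history is often implemented (two stacks of states), and not the design of any particular editor named here, a minimal sketch might look like this:

```python
# Generic multi-level undo/redo using two stacks of document snapshots.
# Real editors store compact deltas rather than full copies; snapshots keep the sketch short.

class History:
    def __init__(self, text=""):
        self.text = text
        self._undo, self._redo = [], []

    def edit(self, new_text):
        self._undo.append(self.text)   # remember the state we are leaving
        self._redo.clear()             # a fresh edit invalidates the redo chain
        self.text = new_text

    def undo(self):
        if self._undo:
            self._redo.append(self.text)
            self.text = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.text)
            self.text = self._redo.pop()

doc = History("draft 1")
doc.edit("draft 2")
doc.edit("draft 3")
doc.undo()           # back to "draft 2"
doc.redo()           # forward to "draft 3" again
print(doc.text)
```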
Many text editors also support reading or merging the contents of another text file into the file currently being edited. Some text editors provide a way to insert the output of a command issued to the operating system's shell.
Some advanced text editors allow you to send all or sections of the file being edited to another utility and read the result back into the file in place of the lines being "filtered". This, for example, is useful for sorting a series of lines alphabetically or numerically, doing mathematical computations, and so on.
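A rough sketch of that filtering idea, assuming a Unix-style environment where an external sort command is available; the function and buffer here are invented for the example:

```python
# Sketch of filtering a range of buffer lines through an external command (here: sort).
import subprocess

def filter_lines(lines, start, end, command=("sort",)):
    """Replace lines[start:end] with the output of `command` applied to them."""
    selection = "".join(lines[start:end])
    result = subprocess.run(command, input=selection, capture_output=True, text=True, check=True)
    return lines[:start] + result.stdout.splitlines(keepends=True) + lines[end:]

buffer = ["pear\n", "apple\n", "mango\n", "banana\n"]
print(filter_lines(buffer, 0, 3))   # first three lines sorted, last line untouched
```

An editor offering this feature would splice the command's output back into the buffer in place of the original selection, exactly as the function above returns it.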
Some editors include special features and extra functions beyond those described above.
A text editor is a program that is run on a computer that can create and change text. The text can be saved into a file that is called a "text file". Text editors can be used for a lot of things. Many people use them to write up documents. Some people write code (like HTML or C++) using a text editor.
Some text editors can edit rich text. Rich text allows the person that is editing text to have bold text, italic text, and other things.
Most text editors today support search and replace. This means that the computer will quickly find some text the user wants, and allow them to replace it with some other text. This can be in part or all of the document.
Copy, cut and paste are other common options in text editors. Copy allows users to instantly add a copy of text in one place to another place. Cut and paste are similar, except that the text is moved from one place to another instead of being copied. This is very useful when a writer wishes to put his paragraphs in a different order, for example.
Sometimes users make mistakes, or need to do the same thing over and over without getting bored. This is what the undo and redo features do. Users can reverse their mistakes, or quickly repeat their actions. Some editors allow many mistakes in a row to be reversed; others allow only one or two.
"The screen is lit by LEDs instead of by traditional lamps.
That makes for more brightness and saves power."
Raise your hand if you are familiar with the use of LEDs as a light source in laptop computers.
I don't see many hands. Mine isn't raised either.
They seem to be getting popular; just last month Apple started selling their first laptop computer with LED backlighting. The Sony VAIO TX line of laptops uses LEDs, as does their TZ line, which is due to be released very soon. Sony, too, claims that LEDs offer increased brightness and decreased power consumption. In addition, they claim that their LED lit screen offers better colors.
Can LEDs really make laptop screens brighter, consume less power and offer better colors?
For those of us who didn't raise our hands, I turned to screen and monitor expert Alfred Poor for advice. For more than 20 years Alfred wrote for PC Magazine, and was their first Lead Analyst for Business Displays. He is a member of the Society for Information Display and the editor and publisher of HDTV Almanac, a web site with news and commentary about HDTV and related topics.
Starting at the beginning, Alfred explained that the liquid crystals in an LCD panel/monitor don't emit light themselves [insert your own dilithium joke here]. Rather "the molecules move in response to electrical fields, and are used as a shutter to block the light." I was surprised how inefficient the technology is. An LCD screen blocks 95% of the backlight, even when it's showing a full white screen.
Traditionally, Alfred said, the backlight source behind the crystals have been cold-cathode fluorescent lamps (CCFL). The use of LEDs in laptop screens is relatively new. According to Alfred, LEDs "already are commonplace in mobile devices such as GPS receivers, cell phones, and PDAs ... the first desktop monitors probably appeared within the past couple of years. Sony had an LCD TV with LED backlights a couple of years ago. I expect that laptops were the last to get the technology."
Since none of the companies offering LED backlit screens said anything about cost, it's reasonable to assume that LEDs are more expensive than CCFLs.
The M1330 comes with either a CCFL or LED lit screen, so it makes for a handy comparison of the two technologies. The M1330 costs $150 more with the LED lit screen.
With Sony, Toshiba and Apple, the cost of the LED screen is a hidden component of the total price. But these machines aren't cheap. As of July 22nd, the least expensive pre-configured Toshiba Portege R500 was $1,999 and the Sony TZ line started at $2,199.99 (think of it as $2,200). The 15.4 inch Macbook Pro started at $1,999.
Thin and Light
Toshiba claims that in one configuration the Portege R500 is "...the world's thinnest widescreen 12.1 inch notebook PC with an integrated DVD-SuperMulti drive..." Dell claims their M1330 laptop with the optional LED screen is the thinnest laptop computer equipped with a 13.3 inch screen. The Sony VAIO TZ machines are less than an inch thick, but only if measured at the narrowest point. At the highest point, they are 1.17 inches.
Alfred confirmed that LEDs are indeed thinner and therefore the screens can be made thinner. And, they weigh less than cold-cathode fluorescent lamps.
We can see this in the M1330. According to Dell, the LED display "starts at 3.97lbs and is 0.87 inches thick compared to the standard display which starts at 4.28lbs and is 0.97 inches thick." The differences in weight and thickness seem, to me, to be small, but I suppose if you frequently carry a laptop computer, then perhaps every little bit helps.
Mr. Mossberg gives the impression that by their very nature LEDs save power. Not true, according to our expert. Alfred pointed out that "At present LEDs generally draw more power and produce more heat than CCFL designs." Heat is a problem for all personal computers. It's more of an issue with laptops and still more important in ultraportable models where everything is so closely packed together.
So what is the basis for the claimed power savings? It turns out that the number of LEDs in a screen varies. If the number is low enough, less power is needed and less heat is generated. With a small enough number of LEDs, Alfred said you can "probably save power compared with a CCFL design. This can be used to give either a longer battery life, or to reduce the battery weight and thus get a lighter weight design overall."
I couldn't find anything from Sony, Toshiba or Apple about the number of LEDs in their screens. But in describing the M1330 Dell says "Our optional LED display uses 32 tiny, white LEDs ..." According to Alfred, "32 is a relatively high number for a small screen. Some large HDTV panels using high brightness LEDs could use that count or less for a panel with 8 or 10 times the surface area."
So, if the relatively high number of LEDs means increased heat and no power savings, why does Dell use so many? Alfred explains that LED screens "need a sophisticated lightpipe and diffuser to spread the light evenly behind the LCD panel. The fewer LEDs you use, the more difficult the diffusion process becomes."
As to whether LEDs are brighter, Toshiba claims this is true, but offers no specific numbers. Sony claims "incredibly high brightness levels" and the specs for the screen list it at 11.1 candelas (trust me, you don't want to know the exact definition of a candela). The point is that Sony does not offer the candela ratings for their CCFL screens as a point of comparison.
The owners manual for the Dell M1330 shows the LED panel to be 36% brighter than the CCFL panel. Specifically, the luminance of the LED screen is 300 cd/m² vs. 220 cd/m² for CCFL (and no, I can't explain what cd/m² means).
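For what it is worth, the 36% figure follows directly from the ratio of the two quoted luminance values:

```latex
\frac{300\ \mathrm{cd/m^2}}{220\ \mathrm{cd/m^2}} \approx 1.36 \quad\Rightarrow\quad \text{about 36\% brighter}
```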
Sony is the most aggressive in making claims about the better colors in their LED screens, using the terms "brilliant", "amplified" and "true-to-life" to describe them. Toshiba says that indoors, "the LED backlit display produces rich color saturation." I couldn't find anything from Dell that mentioned better colors. Alfred said it is possible that "LEDs can offer better color than CCFL, though advances in CCFL phosphor technology are rapidly diminishing this advantage."
Glossy vs. Matte finish
LED backlighting, being in the back, can be used with screens whose front has either a glossy or matte finish. A glossy screen suffers from glare, but produces more vibrant colors. Each laptop vendor has their own marketing term for glossy screens, Apple is the only company I've seen that actually uses the word glossy. A matte finish may be described as anti-glare or anti-reflective.
The Sony TX and TZ laptops have a matte finish. At the Apple online store you can chose either a glossy or matte finish when you order the 15 inch LED backlit Macbook Pro. I can't be sure about the other laptops because the claims of better colors could be either based on the LED backlighting or the glossy screen or both.
I didn't see any marketing material from a laptop manufacturer that mentioned the expected lifespan of LEDs vs. CCFLs. But, a company that manufacturers LEDs did claim they last longer than CCFLs. When I ran this by Alfred, he said:
The difference is probably not important, but yes, CCFLs don't last as long. Even more significant is that their output decreases over time. End of life is when they are half as bright. LEDs are solid state devices, and "fall off the cliff" in failure mode; in other words, they keep working like when they were new until they stop working. Most people aren't going to keep their notebooks long enough for the CCFL aging to show any difference.
Alfred estimates the market share of LEDs at less than five percent, but he expects them to become more common as costs come down. DigiTimes reports that laptop and panel vendors expect that LEDs will be used in about 7% of laptop screens next year (See Nearly 100% of 10-inch-and-smaller LCD panels using LED backlight by Susie Pan and Emily Chuang, July 23, 2007). They estimate that LEDs will be used in 3-5% of laptop computers this year.
To date, LEDs have been popular mostly in smaller displays. In part this is because smaller screens use fewer LEDs which lowers the price differential over CCFL. The DigiTimes article reports that most LCD screens 10 inches and under use LED backlighting. The Sony TX and TZ screens are 11.1 inches, the Toshiba R500 screen is 12.1 inches and the Dell M1330 LED screen is 13.3 inches. The Apple Macbook Pro has the only available 15 inch screen using LEDs, but Apple appears to be having supply problems with them.
Alfred also mentioned that "environmental concerns about heavy metals in the CCFLs" may help to popularize LEDs. Apple seems to be the only laptop vendor using environmental concerns in their marketing. They tout their LED lit screens as being "mercury-free" and the company has long term plans to eliminate mercury from all their products.
Finally, I wondered why Dell and Sony mentioned that the LEDs they use are white. Alfred pointed out that some LED backlights use red, green, and blue, and mix the colors in the diffuser. I didn't bother asking what a diffuser is.
The term shrimp is used to refer to some decapod crustaceans, although the exact animals covered can vary. Used broadly, it may cover any of the groups with elongated bodies and a primarily swimming mode of locomotion – chiefly Caridea and Dendrobranchiata. In some fields, however, the term is used more narrowly, and may be restricted to Caridea, to smaller species of either group, or to only the marine species. Under the broader definition, shrimp may be synonymous with prawn, covering stalk-eyed swimming crustaceans with long narrow muscular tails (abdomens), long whiskers (antennae) and slender legs. They swim forwards by paddling with swimmerets on the underside of their abdomens. Crabs and lobsters have strong walking legs, whereas shrimp have thin fragile legs which they use primarily for perching.
Shrimp are widespread and abundant. They can be found feeding near the seafloor on most coasts and estuaries, as well as in rivers and lakes. To escape predators, some species flip off the seafloor and dive into the sediment. They usually live from one to seven years. Shrimp are often solitary, though they can form large schools during the spawning season. There are thousands of species, and usually there is a species adapted to any particular habitat. Any small crustacean which resembles a shrimp tends to be called one.
They play important roles in the food chain and are important food sources for larger animals from fish to whales. The muscular tails of shrimp can be delicious to eat, and they are widely caught and farmed for human consumption. Commercial shrimp species support an industry worth 50 billion dollars a year, and in 2010 the total commercial production of shrimp was nearly 7 million tonnes (see production chart on the right). Shrimp farming took off during the 1980s, particularly in China, and by 2007 the harvest from shrimp farms exceeded the capture of wild shrimp. There are significant issues with excessive bycatch when shrimp are captured in the wild, and with pollution damage done to estuaries when they are used to support shrimp farming. Many shrimp species are small as the term shrimp suggests, about 2 cm (0.79 in) long, but some shrimp exceed 25 cm (9.8 in). Larger shrimp are more likely to be targeted commercially, and are often referred to as prawns, particularly in Britain.
In 1991, archeologists suggested that ancient raised paved areas near the coast in Chiapas, Mexico, were platforms used for drying shrimp in the sun, and that adjacent clay hearths were used to dry the shrimp when there was no sun. The evidence was circumstantial, because the chitinous shells of shrimp are so thin they degrade rapidly, leaving no fossil remains. In 1985 Quitmyer and others found direct evidence dating back to 600 AD for shrimping off the southeastern coast of North America, by successfully identifying shrimp from the archaeological remains of their mandibles (jaws). Clay vessels with shrimp decorations have been found in the ruins of Pompeii. In the 3rd century AD, the Greek author Athenaeus wrote in his literary work, Deipnosophistae; "... of all fish the daintiest is a young shrimp in fig leaves."
In North America, Native Americans captured shrimp and other crustaceans in fishing weirs and traps made from branches and Spanish moss, or used nets woven with fibre beaten from plants. At the same time early European settlers, oblivious to the "protein-rich coasts" all about them, starved from lack of protein. In 1735 beach seines were imported from France, and Cajun fishermen in Louisiana started catching white shrimp and drying them in the sun, as they still do today. In the mid-nineteenth century, Chinese immigrants arrived for the California Gold Rush, many from the Pearl River Delta where netting small shrimp had been a tradition for centuries. Some immigrants started catching shrimp local to San Francisco Bay, particularly the small, inch-long Crangon franciscorum. These shrimp burrow into the sand to hide, and can be present in high numbers without appearing to be so. The catch was dried in the sun and was exported to China or sold to the Chinese community in the United States. This was the beginning of the American shrimping industry. Overfishing and pollution from gold mine tailings resulted in the decline of the fishery. It was replaced by a penaeid white shrimp fishery on the South Atlantic and Gulf coasts. These shrimp were so abundant that beaches were piled with windrows from their moults. Modern industrial shrimping methods originated in this area.
"For shrimp to develop into one of the world's most popular foods, it took the simultaneous development of the otter trawl... and the internal combustion engine." Shrimp trawling can capture shrimp in huge volumes by dragging a net along the seafloor. Trawling was first recorded in England in 1376, when King Edward III received a request that he ban this new and destructive way of fishing. In 1583, the Dutch banned shrimp trawling in estuaries.
In the 1920s, diesel engines were adapted for use in shrimp boats. Power winches were connected to the engines, and only small crews were needed to rapidly lift heavy nets on board and empty them. Shrimp boats became larger, faster, and more capable. New fishing grounds could be explored, trawls could be deployed in deeper offshore waters, and shrimp could be tracked and caught year round, instead of seasonally as in earlier times. Larger boats trawled offshore and smaller boats worked bays and estuaries. By the 1960s, steel and fibreglass hulls further strengthened shrimp boats, so they could trawl heavier nets, and steady advances in electronics, radar, sonar, and GPS resulted in more sophisticated and capable shrimp fleets.
As shrimp fishing methods industrialised, parallel changes were happening in the way shrimp were processed. "In the 19th century, sun dried shrimp were largely replaced by canneries. In the 20th century, the canneries were replaced with freezers."
In the 1970s, significant shrimp farming was initiated, particularly in China. The farming accelerated during the 1980s as demand for shrimp exceeded supply, and as excessive bycatch and threats to endangered sea turtle became associated with trawling for wild shrimp. In 2007, the production of farmed shrimp exceeded the capture of wild shrimp.
Shrimp are swimming crustaceans with long narrow muscular abdomens and long antennae. Unlike crabs and lobsters, shrimp have well developed pleopods (swimmerets) and slender walking legs; they are more adapted for swimming than walking. Historically, it was the distinction between walking and swimming that formed the primary taxonomic division into the former suborders Natantia and Reptantia. Members of the Natantia (shrimp in the broader sense) were adapted for swimming while the Reptantia (crabs, lobsters, etc.) were adapted for crawling or walking. Some other groups also have common names that include the word "shrimp"; any small swimming crustacean resembling a shrimp tends to be called one.
Differences between crabs, lobsters and shrimp
Crabs: Crabs do not look like shrimp. Unlike shrimp, their abdomen is small, and they have short antennae and a short carapace that is wide and flat. They have prominent grasping claws as their front pair of limbs. Crabs are adapted for walking on the seafloor. They have robust legs and usually move about the seafloor by walking sideways. They have pleopods, but they use them for intromission or to hold egg broods, and not for swimming. Whereas shrimp and lobsters escape predators by lobstering, crabs cling to the seafloor and burrow into sediment. Compared to shrimp and lobsters, the carapaces of crabs are particularly heavy, hard and mineralised.
Lobsters: Lobsters and spiny lobsters look somewhat like large versions of shrimp. Spiny lobsters lack the large claws, but have long, spiny antennae and a spiny carapace. Some of the biggest decapods are lobsters. Like crabs, lobsters have robust legs and are highly adapted for walking on the seafloor, though they do not walk sideways. Some species have rudimentary pleopods, which give them some ability to swim, and like shrimp they can lobster with their tail to escape predators, but their primary mode of locomotion is walking, not swimming. Lobsters are an intermediate development between shrimp and crabs.
Shrimp: Shrimp are slender with long muscular abdomens. They look somewhat like small lobsters, but not like crabs. The abdomens of crabs are small and short, whereas the abdomens of lobsters and shrimp are large and long. The lower abdomens of shrimp support pleopods which are well adapted for swimming. The carapaces of crabs are wide and flat, whereas the carapaces of lobsters and shrimp are more cylindrical. The antennae of crabs are short, whereas the antennae of lobsters and shrimp are usually long, reaching more than twice the body length in some shrimp species.
The following description refers mainly to the external anatomy of the common European shrimp, Crangon crangon, as a typical example of a decapod shrimp. The body of the shrimp is divided into two main parts: the head and thorax, which are fused together to form the cephalothorax, and a long narrow abdomen. The shell which protects the cephalothorax is harder and thicker than the shell elsewhere on the shrimp, and is called the carapace. The carapace typically surrounds the gills, through which water is pumped by the action of the mouthparts. The rostrum, eyes, whiskers and legs also issue from the carapace. The rostrum, from the Latin rōstrum meaning beak, looks like a beak or pointed nose at the front of the shrimp's head. It is a rigid forward extension of the carapace, and can be used for attack or defence. It may also stabilize the shrimp when it swims backwards. Two bulbous eyes on stalks sit on either side of the rostrum. These are compound eyes which have panoramic vision and are very good at detecting movement. Two pairs of whiskers (antennae) also issue from the head. One of these pairs is very long, and can be twice the length of the shrimp, while the other pair is quite short. The antennae have sensors on them which allow the shrimp to feel where they touch, and also allow them to "smell" or "taste" things by sampling the chemicals in the water. The long antennae help the shrimp orientate itself with regard to its immediate surroundings, while the short antennae help assess the suitability of prey.
Eight pairs of appendages issue from the cephalothorax. The first three pairs, the maxillipeds, Latin for "jaw feet", are used as mouthparts. In Crangon crangon, the first pair, the maxillula, pumps water into the gill cavity. After the maxillipeds come five more pairs of appendages, the pereiopods. These form the ten decapod legs. In Crangon crangon, the first two pairs of pereiopods have claws, or chelae. The chelae can grasp food items and bring them to the mouth. They can also be used for fighting and grooming. The remaining six legs are long and slender, and are used for walking or perching.
The muscular abdomen has six segments and has a thinner shell than the carapace. Each segment has a separate overlapping shell, which can be transparent. The first five segments each have a pair of appendages on the underside, which are shaped like paddles and are used for swimming forward. These appendages are called pleopods or swimmerets, and can be used for purposes other than swimming. Some shrimp species use them for brooding eggs, others have gills on them for breathing, and the males in some species use the first pair or two for insemination. The sixth segment terminates in the telson, flanked by two pairs of appendages called the uropods. The uropods allow the shrimp to swim backwards, and function like rudders, steering the shrimp when it swims forward. Together, the telson and uropods form a splayed tail fan. If a shrimp is alarmed, it can flex its tail fan in a rapid movement. This results in a backward dart called the caridoid escape reaction (lobstering).
Shrimp are widespread, and can be found near the seafloor of most coasts and estuaries, as well as in rivers and lakes. There are numerous species, and usually there is a species adapted to any particular habitat. Most shrimp species are marine, although about a quarter of the described species are found in fresh water. Marine species are found at depths of up to 5,000 metres (16,000 ft), and from the tropics to the polar regions.
There are many variations in the ways different types of shrimp look and behave. Even within the core group of caridean shrimp, the small delicate Pederson's shrimp looks and behaves quite unlike the large commercial pink shrimp or the snapping pistol shrimp. The caridean family of pistol shrimp is characterized by big asymmetrical claws, the larger of which can produce a loud snapping sound. The family is diverse and worldwide in distribution, consisting of about 600 species. Colonies of snapping shrimp are a major source of noise in the ocean and can interfere with sonar and underwater communication. The small emperor shrimp has a symbiotic relationship with sea slugs and sea cucumbers, and may help keep them clear of ectoparasites.
Most shrimp are omnivorous, but some are specialised for particular modes of feeding. Some are filter feeders, using their setose (bristly) legs as a sieve; some scrape algae from rocks. Cleaner shrimp feed on the parasites and necrotic tissue of the reef fish they groom. In turn, shrimp are eaten by various animals, particularly fish and seabirds, and frequently host bopyrid parasites.
There is little agreement among taxonomists concerning the phylogeny of crustaceans. Within the decapods "every study gives totally different results. Nor do even one of these studies match any of the rival morphology studies". Some taxonomists identify shrimp with the infraorder Caridea and prawns with the suborder Dendrobranchiata. While different experts give different answers, there is no disagreement that the caridean species are shrimp. There are over 3000 caridean species. Occasionally they are referred to as "true shrimp".
Traditionally decapods were divided into two suborders: the Natantia or swimmers, and the Reptantia or walkers. The Natantia or swimmers included the shrimp. They were defined by their abdomen which, together with its appendages, was well adapted for swimming. The Reptantia or walkers included the crabs and lobsters. These species have small abdominal appendages, but robust legs well adapted for walking. The Natantia was thought to be paraphyletic, that is, it was thought that originally all decapods were like shrimp.
However, classifications are now based on clades, and the paraphyletic suborder Natantia has been discontinued. "On this basis, taxonomic classifications now divide the order Decapoda into the two suborders: Dendrobranchiata for the largest shrimp clade, and Pleocyemata for all other decapods. The Pleocyemata are in turn divided into half a dozen infra-orders."
Major shrimp groups of the Natantia (all within the order Decapoda)

Dendrobranchiata (suborder; 533 species): Dendrobranchiata, such as the giant tiger prawn (pictured), typically have three pairs of claws, though their claws are less conspicuous than those of other shrimp. They do not brood eggs like the carideans, but shed them directly into the water. Their gills are branching, whereas the gills of caridean shrimp are lamellar. The segments on their abdomens are even-sized, and there is no pronounced bend in the abdomen. The species in this suborder tend to be larger than the caridean shrimp species below, and many are commercially important. They are sometimes referred to as prawns.

Caridea (infraorder of the suborder Pleocyemata; 3,438 species): The numerous species in this infraorder are known as caridean shrimp, though only a few are commercially important. They are usually small, nocturnal, difficult to find (they burrow in the sediment), and of interest mainly to marine biologists. Caridean shrimp, such as the pink shrimp (pictured), typically have two pairs of claws. Female carideans attach eggs to their pleopods and brood them there. The second abdominal segment overlaps both the first and the third segment, and the abdomen shows a pronounced caridean bend.

Procarididea (infraorder of the suborder Pleocyemata; 6 species): A minor sister group to the Caridea (immediately above).

Stenopodidea (infraorder of the suborder Pleocyemata; 71 species): Known as boxer shrimp, the members of this infraorder are often cleaner shrimp. Their third pair of walking legs (pereiopods) is greatly enlarged. The banded coral shrimp (pictured) is popular in aquariums. The Stenopodidea are a much smaller group than the Dendrobranchiata and Caridea, and have no commercial importance.
Other decapod crustaceans also called shrimp are the ghost or mud shrimp belonging to the infraorder Thalassinidea. In Australia they are called yabbies. The monophyly of the group is not certain; recent studies have suggested dividing the group into two infraorders, Gebiidea and Axiidea.
Although there are thousands of species of shrimp worldwide, only about 20 of these species are commercially significant. The following table contains the principal commercial shrimp, the seven most harvested species. All of them are decapods; most of them belong to the Dendrobranchiata and four of them are penaeid shrimp.
Principal commercial shrimp species

Whiteleg shrimp, Litopenaeus vannamei (Boone, 1931), Dendrobranchiata: The most extensively farmed species of shrimp. Listed on the Greenpeace seafood red list. Maximum length 230 mm; depth 0–72 m; habitat marine, estuarine. 2010 production (thousand tonnes): 1 wild capture, 2,721 aquaculture, 2,722 total.

Giant tiger prawn, Penaeus monodon Fabricius, 1798, Dendrobranchiata: Listed on the Greenpeace seafood red list. Maximum length 336 mm; depth 0–110 m; habitat marine, estuarine. 2010 production (thousand tonnes): 210 wild capture, 782 aquaculture, 992 total.

Akiami paste shrimp, Acetes japonicus Kishinouye, 1905, Dendrobranchiata: The most intensively fished species. They are small, with black eyes and red spots on the uropods. Only a small amount is sold fresh; most is dried, salted or fermented. Maximum length 30 mm; shallow water; habitat marine. 2010 production (thousand tonnes): 574 wild capture, 574 total.

Southern rough shrimp, Trachysalambria curvirostris (Stimpson, 1860), Dendrobranchiata: Easier to catch at night, and fished only in waters less than 60 m (200 ft) deep. Most of the harvest is landed in China. Maximum length 98 mm; depth 13–150 m; habitat marine. 2010 production (thousand tonnes): 294 wild capture, 294 total.

Fleshy prawn, Fenneropenaeus chinensis (Osbeck, 1765), Dendrobranchiata: Trawled in Asia, where it is sold frozen, and exported to Western Europe. Cultured by Japan and South Korea in ponds. Maximum length 183 mm; depth 90–180 m; habitat marine. 2010 production (thousand tonnes): 108 wild capture, 45 aquaculture, 153 total.

Banana prawn, Fenneropenaeus merguiensis (De Man, 1888), Dendrobranchiata: Typically trawled in the wild and frozen, with most catches made by Indonesia. Commercially important in Australia, Pakistan and the Persian Gulf. Cultured in Indonesia and Thailand. In India it tends to be confused with Fenneropenaeus indicus, so its economic status is unclear. Maximum length 240 mm; depth 10–45 m; habitat marine, estuarine. 2010 production (thousand tonnes): 93 wild capture, 20 aquaculture, 113 total.

Northern prawn, Pandalus borealis (Krøyer, 1838), Caridea: Widely fished since the early 1900s in Norway, and later in other countries following Johan Hjort's practical discoveries of how to locate them. They have a short life, which contributes to a variable stock on a yearly basis. They are not considered overfished. Maximum length 165 mm; depth 20–1,380 m; habitat marine. 2010 production (thousand tonnes): 361 wild capture, 361 total.

All other species: 2010 production (thousand tonnes): 1,490 wild capture, 220 aquaculture, 1,710 total.
Shrimp trawling can result in very high incidental catch rates of non-target species. In 1997, the FAO found discard rates up to 20 pounds for every pound of shrimp. The world average was 5.7 pounds for every pound of shrimp. Trawl nets in general, and shrimp trawls in particular, have been identified as sources of mortality for species of finfish and cetaceans. Bycatch is often discarded dead or dying by the time it is returned to the sea, and may alter the ecological balance in the regions where it is discarded. Worldwide, shrimp trawl fisheries generate about 2% of the world's catch of fish in weight, but result in more than one third of the global bycatch total.
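To make these ratios concrete, the short Python sketch below is an illustrative calculation only, not an FAO figure; it simply converts the discard ratios quoted above into the share of a trawl haul that is discarded rather than landed.

```python
# Illustrative calculation only, using the discard ratios quoted above.
def discard_share(bycatch_lb_per_lb_shrimp: float) -> float:
    """Fraction of the total haul discarded, given pounds of bycatch per pound of shrimp landed."""
    return bycatch_lb_per_lb_shrimp / (1.0 + bycatch_lb_per_lb_shrimp)

print(f"World average (5.7:1): {discard_share(5.7):.0%} of the haul discarded")  # roughly 85%
print(f"Upper end (20:1): {discard_share(20.0):.0%} of the haul discarded")      # roughly 95%
```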
The most extensively fished species are the akiami paste shrimp, the northern prawn, the southern rough shrimp, and the giant tiger prawn. Together these four species account for nearly half of the total wild capture. In recent years, the global capture of wild shrimp has been overtaken by the harvest from farmed shrimp.
A shrimp farm is an aquaculture business for the cultivation of marine shrimp or prawns for human consumption. Commercial shrimp farming began in the 1970s, and production grew steeply, particularly to match the market demands of the United States, Japan and Western Europe. The total global production of farmed shrimp reached more than 1.6 million tonnes in 2003, representing a value of nearly 9 billion U.S. dollars. About 75% of farmed shrimp are produced in Asia, in particular in China, Thailand and in the Philippines. The other 25% are produced mainly in Latin America, where Brazil is the largest producer. The largest exporting nation is Thailand.
Global production figures show that significant aquaculture production started slowly in the 1970s and then expanded rapidly during the 1980s. After a lull in growth during the 1990s, due to pathogens, production took off again and by 2007 exceeded the capture from wild fisheries. By 2010, the aquaculture harvest was 3.9 million tonnes, compared to 3.1 million tonnes for the capture of wild shrimp.
In the earlier years of marine shrimp farming the preferred species was the large giant tiger prawn. This species is reared in circular holding tanks, where the shrimp behave as if they were in the open ocean, swimming in a "never-ending migration" around the circumference of the tank. In 2000, global production was 630,984 tonnes, compared to only 146,362 tonnes for whiteleg shrimp. Subsequently these positions reversed, and by 2010 the production of giant tiger prawn had increased modestly to 781,581 tonnes while whiteleg shrimp had rocketed nearly twenty-fold to 2,720,929 tonnes. The whiteleg shrimp is currently the dominant species in shrimp farming. It is a moderately large shrimp reaching a total length of 230 mm, and is particularly suited to farming because it "breeds well in captivity, can be stocked at small sizes, grows fast and at uniform rates, has comparatively low protein requirements... and adapts well to variable environmental conditions." In China, prawns are cultured along with sea cucumbers and some fish species, in integrated multi-trophic systems.
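As a rough back-of-the-envelope check (illustrative only, using the farmed-production figures quoted above), the following Python snippet shows how the "nearly twenty-fold" increase for whiteleg shrimp is obtained:

```python
# Illustrative arithmetic only, using the 2000 and 2010 farmed-production figures quoted above (tonnes).
whiteleg_2000, whiteleg_2010 = 146_362, 2_720_929
tiger_2000, tiger_2010 = 630_984, 781_581

print(f"Whiteleg shrimp: {whiteleg_2010 / whiteleg_2000:.1f}-fold increase")  # about 18.6-fold, i.e. "nearly twenty-fold"
print(f"Giant tiger prawn: {tiger_2010 / tiger_2000:.2f}-fold increase")      # about 1.24-fold, a modest increase
```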
The major producer of farmed shrimp is China. Other significant producers are Thailand, Indonesia, India, Vietnam, Brazil, Ecuador and Bangladesh. Most farmed shrimp is exported to the United States, the European Union and Japan. Greenpeace has challenged the sustainability of tropical shrimp farming practices on the grounds that farming these species "has led to the destruction of vast areas of mangroves in several countries, over-fishing of juvenile shrimp from the wild to supply farms, and significant human rights abuses." A number of the prominent tropical shrimp species that are farmed commercially have been placed on their seafood red list, including the whiteleg shrimp, Indian prawn and giant tiger shrimp.
Shrimp are marketed and commercialized with several issues in mind. Most shrimp are sold frozen and marketed based on their categorization of presentation, grading, colour, and uniformity. Shrimp have high levels of omega-3 fatty acids and low levels of mercury. Usually shrimp is sold whole, though sometimes only the meat of shrimp is marketed.
As with other seafood, shrimp is high in calcium, iodine and protein but low in food energy. A shrimp-based meal is also a significant source of cholesterol, from 122 mg to 251 mg per 100 g of shrimp, depending on the method of preparation. Shrimp consumption, however, is considered healthy for the circulatory system because the lack of significant levels of saturated fat in shrimp means that the high cholesterol content in shrimp actually improves the ratio of LDL to HDL cholesterol and lowers triglycerides.
Shrimp and other shellfish are among the most common food allergens. They are not kosher and thus are forbidden in Jewish cuisine. Shrimp are halal according to some madhāhib, and therefore permissible to most, but not all, Muslims.
A wide variety of non-decapod crustaceans are also commonly referred to as shrimp. This includes the brine shrimp, clam shrimp, fairy shrimp and tadpole shrimp belonging to the Branchiopoda; the lophogastridan shrimp, opossum shrimp and skeleton shrimp belonging to the Malacostraca; and seed shrimp, which are ostracods. Many of these species look quite unlike the commercial decapod shrimp that are eaten as seafood. For example, skeleton shrimp have short legs and a slender tail like a scorpion tail, fairy shrimp swim upside down with swimming appendages that look like leaves, and the tiny seed shrimp have bivalved carapaces which they can open or close. Krill resemble miniature shrimp, and are sometimes called "krill shrimp".
Other species groups commonly known as shrimp

Branchiopoda (from the Greek branchia, meaning gills, and pous, meaning feet; they have gills on their feet or mouthparts):

Brine shrimp (8 species): Brine shrimp belong to the genus Artemia. They live in inland saltwater lakes in unusually high salinities, which protects them from most predators. They produce eggs, called cysts, which can be stored in a dormant state for long periods and then hatched on demand. This has led to the extensive use of brine shrimp as fish feed in aquaculture. Brine shrimp are sold as novelty gifts under the marketing name Sea-Monkeys.

Clam shrimp (150 species): Clam shrimp belong to the group Conchostraca. These freshwater shrimp have a hinged bivalved carapace which can open and close.

Fairy shrimp (300 species): Fairy shrimp belong to the order Anostraca. These 1–10 cm long freshwater or brackish shrimp have no carapace. They swim upside down with their belly uppermost, with swimming appendages that look like leaves. Most fairy shrimp are herbivores, and eat only the algae in the plankton. Their eggs can survive drought and temperature extremes for years, reviving and hatching after the rain returns.

Tadpole shrimp (20 species): Tadpole shrimp belong to the order Notostraca. These living fossils have changed little since the Triassic. They are drought-resistant and can be found preying on fairy shrimp and small fish at the bottom of shallow lakes and temporary pools. The longtail tadpole shrimp (pictured) has three eyes and up to 120 legs with gills on them. It lives for 20–90 days. Different populations can be bisexual, unisexual or hermaphroditic.

Malacostraca (from the Greek malakós, meaning soft, and óstrakon, meaning shell; the name is misleading, since normally the shell is hard, and is soft only briefly after moulting):

Lophogastrida (56 species): These marine pelagic shrimp make up the order Lophogastrida. They mostly inhabit relatively deep pelagic waters throughout the world. Like the related opossum shrimp, female lophogastrids carry a brood pouch.

Mantis shrimp (400 species): Mantis shrimp, so called because they resemble a praying mantis, make up the order Stomatopoda. They grow up to 38 cm (15 in) long, and can be vividly coloured. Some have powerful spiked claws which they punch into their prey, stunning, spearing and dismembering them. They have been called "thumb splitters" because of the severe gashes they can inflict if handled carelessly.

Opossum shrimp (1,000 species): Opossum shrimp belong to the order Mysida. They are called opossum shrimp because the females carry a brood pouch. Usually less than 3 cm long, they are not closely related to caridean or penaeid shrimp. They are widespread in marine waters, and are also found in some brackish and freshwater habitats in the Northern Hemisphere. Marine mysids can form large swarms and are an important source of food for many fish. Some freshwater mysids are found in groundwater and anchialine caves.

Skeleton shrimp: Skeleton shrimp, sometimes known as ghost shrimp, are amphipods. Their threadlike slender bodies allow them to virtually disappear among fine filaments in seaweed. Males are usually much larger than females. For a good account of a specific species, see Caprella mutica.

Ostracoda (from the Greek óstrakon, meaning shell; in this case, the shells are in two parts, like those of bivalves or clams):

Seed shrimp (13,000 species): Seed shrimp make up the class Ostracoda. This is a class of numerous small crustacean species which look like seeds, typically about one millimetre (0.04 in) in size. Their carapace looks like a clam shell, with two parts held together by a hinge to allow the shell to open and close. Some marine seed shrimp drift as pelagic plankton, but most live on the sea floor and burrow in the upper sediment layer. There are also freshwater and terrestrial species. The class includes carnivores, herbivores, filter feeders and scavengers.
Some mantis shrimp are a foot long, and have bulging eyes, a flattened tail and formidable claws equipped with clubs or sharp spikes, which they can use to knock out their opponents.
The terms shrimp and prawn are common names, not scientific names. They are vernacular or colloquial terms which lack the formal definition of scientific terms. They are not taxa, but are terms of convenience with little circumscriptional significance. There is no reason to avoid using the terms shrimp or prawn when convenient, but it is important not to confuse them with the names or relationships of actual taxa.
According to the crustacean taxonomist Tin-Yam Chan, "The terms shrimp and prawn have no definite reference to any known taxonomic groups. Although the term shrimp is sometimes applied to smaller species, while prawn is more often used for larger forms, there is no clear distinction between both terms and their usage is often confused or even reverse in different countries or regions."
A lot of confusion surrounds the scope of the term shrimp. Part of the confusion originates with the association of smallness. That creates problems with shrimp-like species that are not small. The expression "jumbo shrimp" can be viewed as an oxymoron, a problem that doesn't exist with the commercial designation "jumbo prawns".
The term shrimp originated around the 14th century with the Middle English shrimpe, akin to the Middle Low German schrempen, and meaning to contract or wrinkle; and the Old Norse skorpna, meaning to shrivel up. It is not clear where the term prawn originated, but early forms of the word surfaced in England in the early 15th century as prayne, praine and prane. According to the linguist Anatoly Liberman it is unclear how shrimp, in English, came to be associated with small. "No Germanic language associates the shrimp with its size... The same holds for Romance... it remains unclear in what circumstances the name was applied to the crustacean."
Taxonomic studies in Europe on shrimp and prawns were shaped by the common shrimp and the common prawn, both found in huge numbers along the European coastlines. The common shrimp, Crangon crangon, was categorised in 1758 by Carl Linnaeus, and the common prawn was categorised in 1777 by Thomas Pennant. The common shrimp is a small burrowing species aligned with the notion of a shrimp as being something small, whereas the common prawn is much larger. The terms true shrimp or true prawn are sometimes used to mean what a particular person thinks is a shrimp or prawn. This varies with the person using the terms. But such terms are not normally used in the scientific literature, because the terms shrimp and prawn themselves lack scientific standing. Over the years the way shrimp and prawn are used has changed, and nowadays the terms are almost interchangeable. Although from time to time some biologists declare certain common names should be confined to specific taxa, the popular use of these names seems to continue unchanged.
Several types of shrimp are kept in home aquaria. Some are purely ornamental, while others are useful in controlling algae and removing debris. Freshwater shrimp commonly available for aquaria include the Bamboo shrimp, Japanese marsh shrimp (Caridina multidentata, also called "Amano shrimp," as their use in aquaria was pioneered by Takashi Amano), cherry shrimp (Neocaridina heteropoda), and ghost or glass shrimp (Palaemonetes spp.). Popular saltwater shrimp include the cleaner shrimp Lysmata amboinensis, the fire shrimp (Lysmata debelius) and the harlequin shrimp (Hymenocera picta).
Freshwater aquarium shrimp varieties come in many colours.
Various coastal settlements in the United States have claimed the title "Shrimp Capital of the World". For example, the claim was made earlier in the nineteenth century for the Port of Brunswick in Georgia, and Fernandina and Saint Augustine in Florida. More recent claims have been made for Aransas Pass and Brownsville in Texas, as well as Morgan City in Louisiana. The claim has also been made for Mazatlán in Mexico.
Nelson Mandela
OM AC CC OJ OSJ QC GCH BR RSO NPK
President of South Africa
In office: 10 May 1994 – 14 June 1999
Deputy: F. W. de Klerk
Preceded by: F. W. de Klerk
Succeeded by: Thabo Mbeki
Born: 18 July 1918, Mvezo, South Africa
Political party: African National Congress
Spouses: Evelyn Ntoko Mase (1944–1957); Winnie Madikizela (1958–1996); Graça Machel (1998–present)
Residence: Houghton Estate, Johannesburg, Gauteng, South Africa
Alma mater: University of Fort Hare; University of London External System; University of South Africa; University of the Witwatersrand
Nelson Rolihlahla Mandela (Xhosa pronunciation: [xoˈliːɬaɬa manˈdeːla]; born 18 July 1918) is a South African anti-apartheid revolutionary and politician who served as President of South Africa from 1994 to 1999. He was the first black South African to hold the office, and the first elected in a fully representative, multiracial election. His government focused on dismantling the legacy of apartheid through tackling institutionalised racism, poverty and inequality, and fostering racial reconciliation. Politically an African nationalist and democratic socialist, he served as the President of the African National Congress (ANC) from 1991 to 1997. Internationally, Mandela was the Secretary General of the Non-Aligned Movement from 1998 to 1999.
A Xhosa born to the Thembu royal family, Mandela attended Fort Hare University and the University of Witwatersrand, where he studied law. Living in Johannesburg, he became involved in anti-colonial politics, joining the ANC and becoming a founding member of its Youth League. After the Afrikaner nationalists of the National Party came to power in 1948 and began implementing the policy of apartheid, he rose to prominence in the ANC's 1952 Defiance Campaign, was elected President of the Transvaal ANC Branch and oversaw the 1955 Congress of the People. Working as a lawyer, he was repeatedly arrested for seditious activities and, with the ANC leadership, was prosecuted in the Treason Trial from 1956 to 1961 but was found not guilty. Although initially committed to non-violent protest, in association with the South African Communist Party he co-founded the militant Umkhonto we Sizwe (MK) in 1961, leading a bombing campaign against government targets. In 1962 he was arrested, convicted of sabotage and conspiracy to overthrow the government, and sentenced to life imprisonment in the Rivonia Trial.
Mandela served 27 years in prison, first on Robben Island, and later in Pollsmoor Prison and Victor Verster Prison. An international campaign lobbied for his release, which was granted in 1990. Becoming ANC President, Mandela published his autobiography and led negotiations with President F.W. de Klerk to abolish apartheid and establish multi-racial elections in 1994, in which he led the ANC to victory. He was elected President and formed a Government of National Unity. As President, he established a new constitution and initiated the Truth and Reconciliation Commission to investigate past human rights abuses, while introducing policies to encourage land reform, combat poverty and expand healthcare services. Internationally, he acted as mediator between Libya and the United Kingdom in the Pan Am Flight 103 bombing trial, and oversaw military intervention in Lesotho. He declined to run for a second term, and was succeeded by his deputy Thabo Mbeki, subsequently becoming an elder statesman, focusing on charitable work in combating poverty and HIV/AIDS through the Nelson Mandela Foundation.
Controversial for much of his life, right-wing critics denounced Mandela as a terrorist and communist sympathiser. He has nevertheless received international acclaim for his anti-colonial and anti-apartheid stance, having received over 250 awards, including the 1993 Nobel Peace Prize, the US Presidential Medal of Freedom and the Soviet Order of Lenin. He is held in deep respect within South Africa, and has been described as "the father of the nation". He is often referred to by his Xhosa clan name of Madiba.
Mandela was born on 18 July 1918 in the village of Mvezo in Umtata, then a part of South Africa's Cape Province. Given the forename Rolihlahla, a Xhosa term colloquially meaning "troublemaker", in later years he became known by his clan name, Madiba. His patrilineal great-grandfather, Ngubengcuka, was ruler of the Thembu people in the Transkeian Territories of South Africa's modern Eastern Cape province. One of this king's sons, named Mandela, became Nelson's grandfather and the source of his surname. Because Mandela was only the king's child by a wife of the Ixhiba clan, a so-called "Left-Hand House", the descendants of his cadet branch of the royal family were morganatic, ineligible to inherit the throne but recognized as hereditary royal councillors. Nonetheless, his father, Gadla Henry Mphakanyiswa, was a local chief and councillor to the monarch; he had been appointed to the position in 1915, after his predecessor was accused of corruption by a governing white magistrate. In 1926, Gadla, too, was sacked for corruption, but Nelson would be told that he had lost his job for standing up to the magistrate's unreasonable demands. A devotee of the god Qamata, Gadla was a polygamist, having four wives, four sons and nine daughters, who lived in different villages. Nelson's mother was Gadla's third wife, Nosekeni Fanny, who was daughter of Nkedama of the Right Hand House and a member of the amaMpemvu clan of Xhosa.
Later stating that his early life was dominated by "custom, ritual and taboo", Mandela grew up with two sisters in his mother's kraal in the village of Qunu, where he tended herds as a cattle-boy, spending much time outside with other boys. Both his parents were illiterate, but being a devout Christian, his mother sent him to a local Methodist school when he was about seven. Baptised a Methodist, Mandela was given the English forename of "Nelson" by his teacher. When Mandela was about nine, his father came to stay at Qunu, where he died of an undiagnosed ailment which Mandela believed to be lung disease. Feeling "cut adrift", he later said that he inherited his father's "proud rebelliousness" and "stubborn sense of fairness".
His mother took Mandela to the "Great Place" palace at Mqhekezweni, where he was entrusted to the guardianship of the Thembu regent, Chief Jongintaba Dalindyebo. Raised by Jongintaba and his wife Noengland alongside their son Justice and daughter Nomafu, Mandela felt that they treated him as their son, but would not see his mother for many years. As Mandela attended church services every Sunday with his guardians, Christianity became a significant part of his life. He attended a Methodist mission school located next to the palace, studying English, Xhosa, history and geography. He developed a love of African history, listening to the tales told by elderly visitors to the palace, and becoming influenced by the anti-imperialist rhetoric of Chief Joyi; he nevertheless considered the European colonialists as benefactors, not oppressors. Aged 16, he, Justice and several other boys travelled to Tyhalarha to undergo the circumcision ritual that symbolically marked their transition from boys to men; the rite over, he was given the name "Dalibunga".
Intending to gain skills needed to become a privy councillor for the Thembu royal house, Mandela began his secondary education at Clarkebury Boarding Institute in Engcobo, a Western-style institution that was the largest school for black Africans in Thembuland. Made to socialise with other students on an equal basis, he claimed that he lost his "stuck up" attitude, becoming best friends with a girl for the first time; he began playing sports and developed his lifelong love of gardening. Completing his Junior Certificate in two years, in 1937 he moved to Healdtown, the Methodist college in Fort Beaufort attended by most Thembu royalty, including Justice. The headmaster emphasised the superiority of English culture and government, but Mandela became increasingly interested in native African culture, making his first non-Xhosa friend, a Sotho language-speaker, and coming under the influence of one of his favourite teachers, a Xhosa who broke taboo by marrying a Sotho. Spending much of his spare time long-distance running and boxing, in his second year Mandela became a prefect.
With Jongintaba's backing, Mandela began work on a Bachelor of Arts (BA) degree at the University of Fort Hare, an elite black institution in Alice, Eastern Cape with around 150 students. There he studied English, anthropology, politics, native administration and Roman Dutch law in his first year, desiring to become an interpreter or clerk in the Native Affairs Department. Mandela stayed in the Wesley House dormitory, befriending Oliver Tambo and his own kinsman, K.D. Matanzima. Continuing his interest in sport, Mandela took up ballroom dancing, and performed in a drama society play about Abraham Lincoln. A member of the Students Christian Association, he gave Bible classes in the local community, and became a vocal supporter of the British war effort when the Second World War broke out. Although having friends connected to the African National Congress (ANC) and the anti-imperialist movement, Mandela avoided any involvement. Helping found a first-year students' House Committee which challenged the dominance of the second-years, at the end of his first year he became involved in a Students' Representative Council (SRC) boycott against the quality of food, for which he was temporarily suspended from the university; he left without receiving a degree.
Returning to Mqhekezweni in December 1940, Mandela found that Jongintaba had arranged marriages for him and Justice; dismayed, they fled to Johannesburg via Queenstown, arriving in April 1941. Mandela found work as a night watchman at Crown Mines, his "first sight of South African capitalism in action", but was fired when the induna (headman) discovered he was a runaway. Staying with a cousin in George Goch Township, Mandela was introduced to the realtor and ANC activist Walter Sisulu, who secured him a job as an articled clerk at law firm Witkin, Sidelsky and Edelman, run by a liberal Jew, Lazar Sidelsky, who took a keen interest in the education of indigenous Africans. At night Mandela worked on his BA through a University of South Africa correspondence course. Living off a small wage, he rented a room in the house of the Xhoma family in the Alexandra township; rife with poverty, crime and pollution, Alexandra "occupie[d] a treasured place in [his] heart". Although embarrassed by his poverty, he briefly courted a Swazi woman, Ellen Nkabinde, before unsuccessfully pursuing Didi Xhoma, his landlord's daughter.
At the firm, he befriended Gaur Redebe, a Xhosa member of the ANC and Communist Party, as well as Nat Bregman, a Jewish Communist who became Mandela's first white friend. Attending Communist talks and parties, he was impressed that Europeans, Africans, Indians and Coloureds were mixing as equals. However, he stated later that he did not join the Party because its atheism conflicted with his Christian faith, and because he saw the South African struggle as being racially based rather than class warfare. Redebe encouraged Mandela to join the ANC, and in August 1943 Mandela marched in support of a successful bus boycott to reverse fare rises. Finding the rent cheaper, Mandela moved into the compound of the Witwatersrand Native Labour Association; living among miners of various tribes, he met the Queen Regent of Basutoland. In late 1941, Jongintaba visited, forgiving Mandela for running away. On returning to Thembuland, the regent died in winter 1942; Mandela and Justice arrived a day late for the funeral. After passing his BA exams in early 1943, Mandela returned to Johannesburg to follow a political path as a lawyer rather than become a privy councillor in Thembuland. He later stated that he experienced no epiphany, but that he "simply found myself doing so, and could not do otherwise."
Beginning law studies at the University of Witwatersrand, Mandela was the only native African in the faculty, and though facing racism, he befriended a number of liberal and communist European, Jewish, and Indian students, among them Joe Slovo, Harry Schwarz and Ruth First. Joining the ANC, Mandela was increasingly influenced by Sisulu, spending much time with other activists at Sisulu's Orlando house, including old friend Oliver Tambo. In 1943, Mandela met Anton Lembede, an African nationalist virulently opposed to a racially united front against imperialism and to an alliance with the communists. Mandela initially shared these beliefs, despite his friendships with non-blacks and communists. Deciding on the need for a youth wing to mass mobilise Africans in opposition to their subjugation, Mandela was among a delegation that approached ANC President Alfred Bitini Xuma on the subject at his home in Sophiatown; the African National Congress Youth League (ANCYL) was founded on Easter Sunday 1944 in the Bantu Men's Social Centre in Eloff Street, with Lembede as President and Mandela as a member of the executive committee.
At Sisulu's house, Mandela met Evelyn Mase, an ANC activist and nurse from Engcobo, Transkei. Married on 5 October 1944, after initially living with her relatives, they rented House no. 8115 in Orlando from early 1946. Their first child, Madiba "Thembi" Thembekile, was born in February 1946, while a daughter named Makaziwe was born in 1947, dying nine months later of meningitis. Mandela enjoyed home life, welcoming his mother and sister Leabie to stay with him. In early 1947, his three years of articles ended at Witkin, Sidelsky and Edelman, and he decided to become a full-time student, subsisting on loans from the Bantu Welfare Trust.
In July 1947, Mandela rushed Lembede to hospital, where he died; he was succeeded as ANCYL president by the more moderate Peter Mda, who agreed to co-operate with communists and non-blacks, appointing Mandela ANCYL secretary. Mandela disagreed with Mda's approach, in December 1947 supporting an unsuccessful measure to expel communists from the ANCYL, considering their ideology un-African. In 1947, Mandela was elected to the executive committee of the Transvaal ANC, serving under regional president C.S. Ramohanoe. When Ramohanoe acted against the wishes of the Transvaal Executive Committee by co-operating with Indians and communists, Mandela was one of those who forced his resignation.
In the South African general election, 1948, in which only whites were permitted to vote, the Afrikaner-dominated Herenigde Nasionale Party under Daniel François Malan took power, soon uniting with the Afrikaner Party to form the National Party. Openly racialist, the party codified and expanded racial segregation with the new apartheid legislation. Gaining increasing influence in the ANC, Mandela and his cadres began advocating direct action, such as boycotts and strikes, influenced by the tactics of the Indian South African community. Xuma did not support these measures and was removed from the presidency in a vote of no confidence, replaced by James Moroka and a more militant cabinet containing Sisulu, Mda, Tambo and Godfrey Pitje; Mandela later related that "We had now guided the ANC to a more radical and revolutionary path." Having devoted his time to politics, Mandela failed his final year at Witwatersrand three times; he was ultimately denied his degree in December 1949.
Mandela took Xuma's place on the ANC National Executive in March 1950. That month, the Defend Free Speech Convention was held in Johannesburg, bringing together African, Indian and communist activists to call an anti-apartheid general strike. Mandela opposed the strike because it was not ANC-led, but a majority of black workers took part, resulting in increased police repression and the introduction of the Suppression of Communism Act, 1950, affecting the actions of all protest groups. In 1950, Mandela was elected national president of the ANCYL; at the ANC national conference of December 1951, he continued arguing against a racially united front, but was outvoted. Thenceforth, he altered his entire perspective, embracing such an approach; influenced by friends like Moses Kotane and by the Soviet Union's support for wars of national liberation, Mandela's mistrust of communism also broke down. He became influenced by the texts of Karl Marx, Friedrich Engels, Vladimir Lenin, Joseph Stalin and Mao Zedong, and embraced dialectical materialism. In April 1952, Mandela began work at the H.M. Basner law firm, though his increasing commitment to work and activism meant he spent less time with his family.
In 1952, the ANC began preparation for a joint Defiance Campaign against apartheid with Indian and communist groups, founding a National Voluntary Board to recruit volunteers. The campaign followed a path of nonviolent resistance influenced by Mohandas Gandhi; some considered this the ethical option, but Mandela instead considered it pragmatic. At a Durban rally on 22 June, Mandela addressed an assembled crowd of 10,000, initiating the campaign protests, for which he was arrested and briefly interned in Marshall Square prison. With further protests, the ANC's membership grew from 20,000 to 100,000; the government responded with mass arrests, introducing the Public Safety Act, 1953 to permit martial law. In May, authorities banned Transvaal ANC President J. B. Marks from making public appearances; unable to maintain his position, he recommended Mandela as his successor. Although the ultra-Africanist Bafabegiya group opposed his candidacy, Mandela was elected regional president in October.
On 30 July 1952, Mandela was arrested under the Suppression of Communism Act and stood trial as one of the 21 accused – among them Moroka, Sisulu and Dadoo – in Johannesburg. Found guilty of "statutory communism", their sentence of nine months' hard labour was suspended for two years. In December, Mandela was given a six-month ban from attending meetings or talking to more than one individual at a time, making his Transvaal ANC presidency impractical. The Defiance Campaign meanwhile petered out. In September 1953, Andrew Kunene read out Mandela's "No Easy Walk to Freedom" speech at a Transvaal ANC meeting; the title was taken from a quote by Indian independence leader Jawaharlal Nehru, a seminal influence on Mandela's thought. The speech laid out a contingency plan for a scenario in which the ANC was banned. This Mandela Plan, or M-Plan, involved dividing the organisation into a cell structure with a more centralised leadership.
Mandela obtained work as an attorney for the firm Terblanche and Briggish, before moving to the liberal-run Helman and Michel, passing qualification exams to become a full-fledged attorney. In August 1953, Mandela and Oliver Tambo opened their own law firm, Mandela and Tambo, operating in downtown Johannesburg. The only African law firm in the country, it was popular with aggrieved Africans, often dealing with cases of police brutality. The authorities removed the firm's office permit under the Group Areas Act, forcing them to relocate, and their customers dwindled. Though a second daughter, Makaziwe Phumia, was born in May 1954, Mandela's relationship with Evelyn became strained, and she accused him of adultery. Evidence has emerged indicating that he was having affairs with ANC member Lillian Ngoyi and secretary Ruth Mompati; persistent but unproven claims assert that the latter bore Mandela a child. Disgusted by her son's behaviour, Nosekeni returned to Transkei, while Evelyn embraced the Jehovah's Witnesses and rejected Mandela's obsession with politics.
Mandela came to the opinion that the ANC "had no alternative to armed and violent resistance" after taking part in the unsuccessful protest to prevent the demolition of the all-black Sophiatown suburb of Johannesburg in February 1955. He advised Sisulu to request weaponry from the People's Republic of China, but while supporting the anti-apartheid struggle, China's government believed the movement insufficiently prepared for guerilla warfare. With the involvement of the South African Indian Congress, the Coloured People's Congress, the South African Congress of Trade Unions and the Congress of Democrats, the ANC planned a Congress of the People, calling on all South Africans to send in proposals for a post-apartheid era. Based on the responses, a Freedom Charter was drafted by Rusty Bernstein, calling for the creation of a democratic, non-racialist state with the nationalisation of major industry. When the charter was adopted at a June 1955 conference in Kliptown attended by 3000 delegates, police cracked down on the event, but it remained a key part of Mandela's ideology.
Following the end of a second ban in September 1955, Mandela went on a working holiday to Transkei to discuss the implications of the Bantu Authorities Act, 1951 with local tribal leaders, also visiting his mother and Noengland before proceeding to Cape Town. In March 1956 he received his third ban on public appearances, restricting him to Johannesburg for five years, but he often broke it. His marriage broke down as Evelyn left Mandela, taking their children to live with her brother. Initiating divorce proceedings in May 1956, she claimed that Mandela had physically abused her; he denied the allegations, and fought for custody of their children. She withdrew her petition of separation in November, but Mandela filed for divorce in January 1958; the divorce was finalised in March, with the children placed in Evelyn's care. During the divorce proceedings, he began courting and politicising a social worker, Winnie Madikizela, whom he married in Bizana on 14 June 1958. She later became involved in ANC activities, spending several weeks imprisoned.
On 5 December 1956, Mandela was arrested alongside most of the ANC Executive for "high treason" against the state. Held in Johannesburg Prison amid mass protests, they underwent a preparatory examination in Drill Hall on 19 December, before being granted bail. The defence's refutation began on 9 January 1957, overseen by defence lawyer Vernon Berrangé, and continued until adjourning in September. In January 1958, judge Oswald Pirow was appointed to the case, and in February he ruled that there was "sufficient reason" for the defendants to go on trial in the Transvaal Supreme Court. The formal Treason Trial began in Pretoria in August 1958, with the defendants successfully applying to have the three judges – all linked to the Nationalist Party – replaced. In August, one charge was dropped, and in October the prosecution withdrew its indictment, submitting a reformulated version in November which argued that the ANC leadership committed high treason by advocating violent revolution, a charge the defendants denied.
In April 1959, militant Africanists dissatisfied with the ANC's united front approach founded the Pan-African Congress (PAC); Mandela's friend Robert Sobukwe was elected president, though Mandela thought the group "immature". Both parties campaigned for an anti-pass campaign in May 1960, in which Africans burned the passes that they were legally obliged to carry. One of the PAC-organized demonstrations was fired upon by police, resulting in the deaths of 69 protesters in the Sharpeville massacre. In solidarity, Mandela publicly burned his pass as rioting broke out across South Africa, leading the government to proclaim martial law. Under the State of Emergency measures, Mandela and other activists were arrested on 30 March, imprisoned without charge in the unsanitary conditions of the Pretoria Local prison, while the ANC and PAC were banned in April. This made it difficult for their lawyers to reach them, and it was agreed that the defence team for the Treason Trial should withdraw in protest. Representing themselves in court, the accused were freed from prison when the state of emergency was lifted in late August. Mandela used his free time to organise an All-In African Conference near Pietermaritzburg, Natal, in March, at which 1,400 anti-apartheid delegates met, agreeing on a stay-at home protest to mark 31 May, the day South Africa became a republic. On 29 March 1961, after a six-year trial, the judges produced a verdict of not guilty, embarrassing the government.
Disguising himself as a chauffeur, Mandela travelled the country incognito, organising the ANC's new cell structure and a mass stay-at-home strike for 29 May. He was referred to as the "Black Pimpernel" in the press, a reference to Emma Orczy's 1905 novel The Scarlet Pimpernel, and the police put out a warrant for his arrest. Mandela held secret meetings with reporters, and after the government failed to prevent the strike, he warned them that many anti-apartheid activists would soon resort to violence through groups like the PAC's Poqo. He himself believed that the ANC should form an armed group to channel some of this violence, convincing both ANC leader Albert Luthuli – who was morally opposed to violence – and allied activist groups of its necessity.
Inspired by Fidel Castro's 26th of July Movement in the Cuban Revolution, in 1961 Mandela co-founded Umkhonto we Sizwe ("Spear of the Nation", abbreviated MK) with Sisulu and the communist Joe Slovo. Becoming chairman of the militant group, he gained ideas from illegal literature on guerilla warfare by Mao and Che Guevara. Officially separate from the ANC, in later years MK became the group's armed wing. Most early MK members were white communists; after hiding in communist Wolfie Kodesh's flat in Berea, Mandela moved to the communist-owned Liliesleaf Farm in Rivonia, there joined by Raymond Mhlaba, Slovo and Bernstein, who put together the MK constitution. Operating through a cell structure, the MK agreed to acts of sabotage to exert maximum pressure on the government with minimum casualties, bombing military installations, power plants, telephone lines and transport links at night, when civilians were not present. Mandela noted that should these tactics fail, MK would resort to "guerilla warfare and terrorism." Soon after ANC leader Luthuli was awarded the Nobel Peace Prize, the MK publicly announced its existence with 57 bombings on Dingane's Day (16 December) 1961, followed by further attacks on New Year's Eve.
The ANC agreed to send Mandela as a delegate to the February 1962 Pan-African Freedom Movement for East, Central and Southern Africa (PAFMECSA) meeting in Addis Ababa, Ethiopia. Travelling there in secret, Mandela met with Emperor Haile Selassie I, and gave his speech after Selassie's at the conference. After the conference, he travelled to Cairo, Egypt, admiring the political reforms of President Gamal Abdel Nasser, and then went to Tunis, Tunisia, where President Habib Bourguiba gave him £5000 for weaponry. He proceeded to Morocco, Mali, Guinea, Sierra Leone, Liberia and Senegal, receiving funds from Liberian President William Tubman and Guinean President Ahmed Sékou Touré. Leaving Africa for London, England, he met anti-apartheid activists, reporters and prominent leftist politicians. Returning to Ethiopia, he began a six-month course in guerrilla warfare, but completed only two months before being recalled to South Africa.
On 5 August 1962, police captured Mandela along with Cecil Williams near Howick. Jailed in Johannesburg's Marshall Square prison, he was charged with inciting workers' strikes and leaving the country without permission. Representing himself with Slovo as legal advisor, Mandela intended to use the trial to showcase "the ANC's moral opposition to racism" while supporters demonstrated outside the court. Moved to Pretoria, where Winnie could visit him, in his cell he began correspondence studies for a Bachelor of Laws (LLB) degree from the University of London. His hearing began on 15 October, but he disrupted proceedings by wearing a traditional kaross, refusing to call any witnesses, and turning his plea of mitigation into a political speech. Found guilty, he was sentenced to five years' imprisonment; as he left the courtroom, supporters sang Nkosi Sikelel iAfrika.
On 11 July 1963, police raided Liliesleaf Farm, arresting those they found there and uncovering paperwork documenting MK's activities, some of which mentioned Mandela. The subsequent Rivonia Trial began at Pretoria Supreme Court on 9 October, with Mandela and his comrades charged with four counts of sabotage and conspiracy to violently overthrow the government. Their chief prosecutor was Percy Yutar, who called for them to receive the death penalty. Judge Quartus de Wet soon threw out the prosecution's case for insufficient evidence, but Yutar reformulated the charges, presenting his new case from December until February 1964, calling 173 witnesses and bringing thousands of documents and photographs to the trial.
With the exception of James Kantor, who was innocent of all charges, Mandela and the accused admitted sabotage but denied that they had ever agreed to initiate guerilla war against the government. They used the trial to highlight their political cause; one of Mandela's speeches – inspired by Castro's "History Will Absolve Me" speech – was widely reported in the press despite official censorship. The trial gained international attention, with global calls for the release of the accused from such institutions as the United Nations and World Peace Council. The University of London Union voted Mandela to its presidency, and nightly vigils for him were held in St. Paul's Cathedral, London. However, deeming them to be violent communist agitators, South Africa's government ignored all calls for clemency, and on 12 June 1964 de Wet found Mandela and two of his co-accused guilty on all four charges, sentencing them to life imprisonment rather than death.
Mandela and his co-accused were transferred from Pretoria to the prison on Robben Island, remaining there for the next 18 years. Isolated from non-political prisoners in Section B, Mandela was imprisoned in a damp concrete cell measuring 8 feet (2.4 m) by 7 feet (2.1 m), with a straw mat on which to sleep. Verbally and physically harassed by several white prison warders, the Rivonia Trial prisoners spent their days breaking rocks into gravel, until being reassigned in January 1965 to work in a lime quarry. Mandela was initially forbidden to wear sunglasses, and the glare from the lime permanently damaged his eyesight. At night, he worked on his LLB degree, but newspapers were forbidden, and he was locked in solitary confinement on several occasions for possessing smuggled news clippings. Classified as the lowest grade of prisoner, Class D, he was permitted one visit and one letter every six months, although all mail was heavily censored.
The political prisoners took part in work and hunger strikes – the latter considered largely ineffective by Mandela – to improve prison conditions, viewing this as a microcosm of the anti-apartheid struggle. ANC prisoners elected him to their four-man "High Organ" along with Sisulu, Govan Mbeki and Raymond Mhlaba, while he also involved himself in a group representing all political prisoners on the island, Ulundi, through which he forged links with PAC and Yu Chi Chan Club members. Initiating the "University of Robben Island," whereby prisoners lectured on their own areas of expertise, he debated topics such as homosexuality and politics with his comrades, getting into fierce arguments on the latter with Marxists like Mbeki and Harry Gwala. Though attending Christian Sunday services, Mandela studied Islam. He also studied Afrikaans, hoping to build a mutual respect with the warders and convert them to his cause. Various official visitors met with Mandela; most significant was the liberal parliamentary representative Helen Suzman of the Progressive Party, who championed Mandela's cause outside prison. In September 1970 he met British Labour Party MP Dennis Healey. South African Minister of Justice Jimmy Kruger visited in December 1974, but he and Mandela did not get on. His mother visited in 1968, dying shortly after, and his firstborn son Thembi died in a car accident the following year; Mandela was forbidden from attending either funeral. His wife was rarely able to visit, being regularly imprisoned for political activity, while his daughters first visited in December 1975; Winnie got out of prison in 1977 but was forcibly settled in Brandfort, still unable to visit him.
From 1967, prison conditions improved, with black prisoners given trousers rather than shorts, games being permitted, and food quality increasing. In 1969, an escape plan for Mandela was developed by Gordon Bruce, but it was abandoned after being infiltrated by an agent of the South African Bureau of State Security (BOSS), who hoped to see Mandela shot during the escape. In 1970, Commander Piet Badenhorst became commanding officer. Mandela, seeing an increase in the physical and mental abuse of prisoners, complained to visiting judges, who had Badenhorst reassigned. He was replaced by Commander Willie Willemse, who developed a co-operative relationship with Mandela and was keen to improve prison standards. By 1975, Mandela had become a Class A prisoner, allowing greater numbers of visits and letters; he corresponded with anti-apartheid activists like Mangosuthu Buthelezi and Desmond Tutu. That year, he began his autobiography, which was smuggled to London, but remained unpublished at the time; prison authorities discovered several pages, and his study privileges were stopped for four years. Instead he devoted his spare time to gardening and reading until he resumed his LLB degree studies in 1980.
By the late 1960s, Mandela's fame had been eclipsed by Steve Biko and the Black Consciousness Movement (BCM). Seeing the ANC as ineffectual, the BCM called for militant action, but following the Soweto uprising of 1976, many BCM activists were imprisoned on Robben Island. Mandela tried to build a relationship with these young radicals, although he was critical of their racialism and contempt for white anti-apartheid activists. Renewed international interest in his plight came in July 1978, when he celebrated his 60th birthday. He was awarded an honorary doctorate in Lesotho, the Nehru Prize for International Understanding in India in 1979, and the Freedom of the City of Glasgow, Scotland in 1980. In March 1980 the slogan "Free Mandela!" was developed by journalist Percy Qoboza, sparking an international campaign that led the UN Security Council to call for his release. Despite increasing foreign pressure, the government refused, relying on powerful foreign Cold War allies in US President Ronald Reagan and UK Prime Minister Margaret Thatcher; Thatcher considered Mandela a communist terrorist and supported the suppression of the ANC.
In April 1982 Mandela was transferred to Pollsmoor Prison in Tokai, Cape Town along with senior ANC leaders Walter Sisulu, Andrew Mlangeni, Ahmed Kathrada and Raymond Mhlaba; they believed that they were being isolated to remove their influence on younger activists. Conditions at Pollsmoor were better than at Robben Island, although Mandela missed the camaraderie and scenery of the island. Getting on well with Pollsmoor's commanding officer, Brigadier Munro, Mandela was permitted to create a roof garden, also reading voraciously and corresponding widely, now permitted 52 letters a year. He was appointed patron of the multi-racial United Democratic Front (UDF), founded to combat reforms implemented by South African President P.W. Botha. Botha's National Party government had permitted Coloured and Indian citizens to vote for their own parliaments which would have control over education, health, and housing, but black Africans were excluded from the system; like Mandela, the UDF saw this as an attempt to divide the anti-apartheid movement on racial lines.
Violence across the country escalated, with many fearing civil war. Under pressure from an international lobby, multinational banks stopped investing in South Africa, resulting in economic stagnation. Numerous banks and Thatcher asked Botha to release Mandela – then at the height of his international fame – to defuse the volatile situation. Although considering Mandela a dangerous "arch-Marxist", in February 1985 Botha offered him a release from prison on condition that he "unconditionally rejected violence as a political weapon". Mandela spurned the offer, releasing a statement through his daughter Zindzi stating "What freedom am I being offered while the organisation of the people [ANC] remains banned? Only free men can negotiate. A prisoner cannot enter into contracts."
In 1985 Mandela underwent surgery on an enlarged prostate gland, before being given new solitary quarters on the ground floor. He was met by "seven eminent persons", an international delegation sent to negotiate a settlement, but Botha's government refused to co-operate, in June calling a state of emergency and initiating a police crackdown on unrest. The anti-apartheid resistance fought back, with the ANC committing 231 attacks in 1986 and 235 in 1987. Utilising the army and right-wing paramilitaries to combat the resistance, the government secretly funded Zulu nationalist movement Inkatha to attack ANC members, furthering the violence. Mandela requested talks with Botha but was denied, instead secretly meeting with Minister of Justice Kobie Coetsee in 1987, having a further 11 meetings over 3 years. Coetsee organised negotiations between Mandela and a team of four government figures starting in May 1988; the team agreed to the release of political prisoners and the legalisation of the ANC on the condition that they permanently renounce violence, break links with the Communist Party and not insist on majority rule. Mandela rejected these conditions, insisting that the ANC would only end the armed struggle when the government renounced violence.
Mandela's 70th birthday in July 1988 attracted international attention, with the BBC organising the Nelson Mandela 70th Birthday Tribute music gig at London's Wembley Stadium. Although presented globally as a heroic figure, he faced personal problems when ANC leaders informed him that Winnie had set herself up as head of a criminal gang, the "Mandela United Football Club", who had been responsible for torturing and killing opponents – including children – in Soweto. Though some encouraged him to divorce her, he decided to remain loyal until she was found guilty by trial.
Recovering from tuberculosis caused by dank conditions in his cell, in December 1988 Mandela was moved to Victor Verster Prison near Paarl. Here, he was housed in the relative comfort of a warder's house with a personal cook, using the time to complete his LLB degree. Allowed many visitors, Mandela organised secret communications with exiled ANC leader Oliver Tambo. In 1989, Botha suffered a stroke, retaining the state presidency but stepping down as leader of the National Party, to be replaced by the conservative F. W. de Klerk. In a surprise move, Botha invited Mandela to a meeting over tea in July 1989, an invitation Mandela considered genial. Botha was replaced as state president by de Klerk six weeks later; the new president believed that apartheid was unsustainable and unconditionally released all ANC prisoners except Mandela. Following the fall of the Berlin Wall in November 1989, de Klerk called his cabinet together to debate legalising the ANC and freeing Mandela. Although some were deeply opposed to his plans, de Klerk met with Mandela in December to discuss the situation, a meeting both men considered friendly, before releasing Mandela unconditionally and legalising all formerly banned political parties on 2 February 1990.
Leaving Victor Verster on 11 February, Mandela held Winnie's hand in front of amassed crowds and press; the event was broadcast live across the world. Driven to Cape Town's City Hall through crowds, he gave a speech declaring his commitment to peace and reconciliation with the white minority, but made it clear that the ANC's armed struggle was not over, and would continue as "a purely defensive action against the violence of apartheid." He expressed hope that the government would agree to negotiations, so that "there may no longer be the need for the armed struggle", and insisted that his main focus was to bring peace to the black majority and give them the right to vote in national and local elections. Staying at the home of Desmond Tutu, in the following days, Mandela met with friends, activists, and press, giving a speech to 100,000 people at Johannesburg's Soccer City.
Mandela proceeded on an African tour, meeting supporters and politicians in Zambia, Zimbabwe, Namibia, Libya and Algeria, continuing to Sweden where he was reunited with Tambo, and then London, where he appeared at the Nelson Mandela: An International Tribute for a Free South Africa concert in Wembley Stadium. Encouraging foreign countries to support sanctions against the apartheid government, in France he was welcomed by President François Mitterrand, in the Vatican City by Pope John Paul II, and in England he met Thatcher. In the United States, he met President George H.W. Bush, addressed both Houses of Congress and visited eight cities, being particularly popular among the African-American community. In Cuba he met President Fidel Castro, whom he had long emulated, with the two becoming friends. In Asia he met President R. Venkataraman in India, President Suharto in Indonesia and Prime Minister Mahathir Mohamad in Malaysia, before visiting Australia and Japan; he notably did not visit the Soviet Union, a longtime ANC supporter.
In May 1990, Mandela led a multiracial ANC delegation into preliminary negotiations with a government delegation of 11 Afrikaner males. Mandela impressed them with his discussions of Afrikaner history, and the negotiations led to the Groot Schuur Minute, in which the government lifted the state of emergency. In August Mandela – recognising the ANC's severe military disadvantage – offered a ceasefire, the Pretoria Minute, for which he was widely criticised by MK activists. He spent much time trying to unify and build the ANC, appearing at a Johannesburg conference in December attended by 1600 delegates, many of whom found him more moderate than expected. At the ANC's July 1991 national conference in Durban, Mandela admitted the party's faults and announced his aim in building a "strong and well-oiled task force" for securing majority rule. At the conference, he was elected ANC President, replacing the ailing Tambo, while a 50-strong multiracial, multi-gendered national executive was elected.
Mandela was given an office in the newly purchased ANC headquarters at Shell House, central Johannesburg, while moving with Winnie to her large Soweto home. Their marriage was increasingly strained as he learned of her affair with Dali Mpofu, but he supported her during her trial for kidnap and assault. He gained funding for her defence from International Defence and Aid and from Libyan leader Muammar Gaddafi, but in June 1991 she was found guilty and sentenced to six years, reduced to two on appeal. On 13 April 1992, Mandela publicly announced his separation from Winnie, while the ANC forced her to step down from the national executive for misappropriating ANC funds; Mandela moved into the mostly-white Johannesburg suburb of Houghton. Mandela's reputation was further damaged by the increase in "black-on-black" violence, particularly between ANC and Inkatha supporters in KwaZulu-Natal, in which thousands died. Mandela met with Inkatha leader Buthelezi, but the ANC prevented further negotiations on the issue. Mandela recognised that there was a "third force" within the state intelligence services fuelling the "slaughter of the people" and openly blamed de Klerk – whom he increasingly distrusted – for the Sebokeng massacre. In September 1991 a national peace conference was held in Johannesburg in which Mandela, Buthelezi and de Klerk signed a peace accord, though the violence continued.
The Convention for a Democratic South Africa (CODESA) began in December 1991 at the Johannesburg World Trade Center, attended by 228 delegates from 19 political parties. Although Cyril Ramaphosa led the ANC's delegation, Mandela remained a key figure, and after de Klerk used the closing speech to condemn the ANC's violence, he took to the stage to denounce him as "head of an illegitimate, discredited minority regime". Dominated by the National Party and ANC, little negotiation was achieved. CODESA 2 was held in May 1992, in which de Klerk insisted that post-apartheid South Africa must use a federal system with a rotating presidency to ensure the protection of ethnic minorities; Mandela opposed this, demanding a unitary system governed by majority rule. Following the Boipatong massacre of ANC activists by government-aided Inkatha militants, Mandela called off the negotiations, before attending a meeting of the Organisation of African Unity in Senegal, at which he called for a special session of the UN Security Council and proposed that a UN peacekeeping force be stationed in South Africa to prevent "state terrorism". The UN subsequently sent special envoy Cyrus Vance to the country to aid negotiations. Calling for domestic mass action, in August the ANC organised the largest-ever strike in South African history, while supporters marched on Pretoria.
Following the Bisho massacre, in which 28 ANC supporters and one soldier were shot dead by the Ciskei Defence Force during a protest march, Mandela realised that mass action was leading to further violence and resumed negotiations in September. He agreed to do so on the conditions that all political prisoners be released, that Zulu traditional weapons be banned, and that Zulu hostels would be fenced off, the latter two measures to prevent further Inkatha attacks; under increasing pressure, de Klerk reluctantly agreed. The negotiations agreed that a multiracial general election would be held, resulting in a five-year coalition government of national unity and a constitutional assembly that gave the National Party continuing influence. The ANC also conceded to safeguarding the jobs of white civil servants; such concessions brought fierce internal criticism. The duo agreed on an interim constitution, guaranteeing separation of powers, creating a constitutional court, and including a US-style bill of rights; it also divided the country into nine provinces, each with its own premier and civil service, a concession between de Klerk's desire for federalism and Mandela's for unitary government.
The democratic process was threatened by the Concerned South Africans Group (COSAG), an alliance of far-right Afrikaner parties and black ethnic-secessionist groups like Inkatha; in June 1993 the white supremacist Afrikaner Weerstandsbeweging (AWB) attacked the Kempton Park World Trade Centre. Following the murder of ANC leader Chris Hani, Mandela made a publicised speech to calm rioting, soon after appearing at a mass funeral in Soweto for Tambo, who had died from a stroke. In July 1993, both Mandela and de Klerk visited the US, independently meeting President Bill Clinton and each receiving the Liberty Medal. Soon after, they were jointly awarded the Nobel Peace Prize in Norway. Influenced by young ANC leader Thabo Mbeki, Mandela began meeting with big business figures, and played down his support for nationalisation, fearing that he would scare away much-needed foreign investment. Although criticised by socialist ANC members, he was encouraged to embrace private enterprise by members of the Chinese and Vietnamese Communist parties at the January 1992 World Economic Forum in Switzerland. Mandela also made a cameo appearance as a schoolteacher reciting one of Malcolm X's speeches in the final scene of the 1992 film Malcolm X.
With the election set for 27 April 1994, the ANC began campaigning, opening 100 election offices and hiring advisor Stanley Greenberg. Greenberg orchestrated the foundation of People's Forums across the country, at which Mandela could appear; though a poor public speaker, he was a popular figure with great status among black South Africans. The ANC campaigned on a Reconstruction and Development Programme (RDP) to build a million houses in five years, introduce universal free education and extend access to water and electricity. The party's slogan was "a better life for all", although it was not explained how this development would be funded. With the exception of the Weekly Mail and the New Nation, South Africa's press opposed Mandela's election, fearing continued ethnic strife, instead supporting the National or Democratic Party. Mandela devoted much time to fundraising for the ANC, touring North America, Europe and Asia to meet wealthy donors, including former supporters of the apartheid regime. He also urged a reduction in the voting age from 18 to 14; rejected by the ANC, this policy became the subject of ridicule.
Concerned that COSAG would undermine the election, particularly in the wake of the Battle of Bop and Shell House Massacre – incidents of violence involving the AWB and Inkatha, respectively – Mandela met with Afrikaner politicians and generals, including P.W. Botha, Pik Botha and Constand Viljoen, persuading many to work within the democratic system, and with de Klerk convinced Inkatha's Buthelezi to enter the elections rather than launch a war of secession. As leaders of the two major parties, de Klerk and Mandela appeared on a televised debate; although de Klerk was widely considered the better speaker at the event, Mandela's offer to shake his hand surprised him, leading some commentators to consider it a victory for Mandela. The election went ahead with little violence, although an AWB cell killed 20 with car bombs. Mandela voted at the Ohlange High School in Durban, but publicly accepted that the election had been marred by instances of fraud and sabotage. Having taken 62% of the national vote, the ANC was just short of the two-thirds majority needed to change the constitution. The ANC was also victorious in 7 provinces, with Inkatha and the National Party each taking another.
Mandela's inauguration took place in Pretoria on 10 May 1994, televised to a billion viewers globally. The event was attended by 4000 guests, including world leaders from disparate backgrounds. South Africa's first black President, Mandela became head of a Government of National Unity dominated by the ANC – which alone had no experience of governance – but containing representatives from the National Party and Inkatha. In keeping with earlier agreements, de Klerk became first Deputy President, while Thabo Mbeki was selected as second. Although Mbeki had not been his first choice for the job, Mandela would grow to rely heavily on him throughout his presidency, allowing him to organise policy details. Moving into the presidential office at Tuynhuys in Cape Town, Mandela allowed de Klerk to retain the presidential residence in the Groote Schuur estate, instead settling into the nearby Westbrooke manor, which he renamed "Genadendal", meaning "Valley of Mercy" in Afrikaans. Retaining his Houghton home, he also had a house built in his home village of Qunu, which he visited regularly, walking around the area, meeting with locals, and judging tribal disputes.
Aged 76, he faced various ailments, and although exhibiting continued energy, he felt isolated and lonely. He often entertained celebrities, such as Michael Jackson, Whoopi Goldberg, and the Spice Girls, and befriended a number of ultra-rich businessmen, like Harry Oppenheimer of Anglo-American, as well as British monarch Elizabeth II on her March 1995 state visit to South Africa, resulting in strong criticism from ANC anti-capitalists. Despite his opulent surroundings, Mandela lived simply, donating a third of his 552,000 rand annual income to the Nelson Mandela Children's Fund, which he had founded in 1995. Although speaking out in favour of freedom of the press and befriending many journalists, Mandela was critical of much of the country's media, noting that it was overwhelmingly owned and run by middle-class whites and believing that it focused too much on scaremongering around crime. He changed clothes several times a day, and after assuming the presidency one of his trademarks was his use of Batik shirts, known as "Madiba shirts", even on formal occasions.
In December 1994, Mandela's autobiography, Long Walk to Freedom, was finally published. In late 1994 he attended the 49th conference of the ANC in Bloemfontein, at which a more militant National Executive was elected, among them Winnie Mandela; although she expressed an interest in reconciling, Nelson initiated divorce proceedings in August 1995. By 1995 he had entered into a relationship with Graça Machel, a Mozambican political activist 27 years his junior who was the widow of former president Samora Machel. They had first met in July 1990, when she was still in mourning, but their friendship grew into a partnership, with Machel accompanying him on many of his foreign visits. She turned down Mandela's first marriage proposal, wanting to retain some independence and dividing her time between Mozambique and Johannesburg.
Presiding over the transition from apartheid minority rule to a multicultural democracy, Mandela saw national reconciliation as the primary task of his presidency. Having seen other post-colonial African economies damaged by the departure of white elites, Mandela worked to reassure South Africa's white population that they were protected and represented in "the Rainbow Nation". Mandela attempted to create the broadest possible coalition in his cabinet, with de Klerk as first Deputy President while other National Party officials became ministers for Agriculture, Energy, Environment, and Minerals and Energy, and Buthelezi was named Minister for Home Affairs. The other cabinet positions were taken by ANC members, many of whom – like Joe Modise, Alfred Nzo, Joe Slovo, Mac Maharaj and Dullah Omar – had long been comrades, although others, such as Tito Mboweni and Jeff Radebe, were much younger. Mandela's relationship with de Klerk was strained; Mandela thought that de Klerk was intentionally provocative, while de Klerk felt that he was being intentionally humiliated by the president. In January 1995, Mandela heavily chastised him for awarding amnesty to 3,500 police just before the election, and later criticised him for defending former Minister of Defence Magnus Malan when the latter was charged with murder.
Mandela personally met with senior figures of the apartheid regime, including Hendrik Verwoerd's widow Betsie Schoombie and the lawyer Percy Yutar; emphasising personal forgiveness and reconciliation, he announced that "courageous people do not fear forgiving, for the sake of peace." He encouraged black South Africans to get behind the previously hated national rugby team, the Springboks, as South Africa hosted the 1995 Rugby World Cup. After the Springboks won an epic final over New Zealand, Mandela presented the trophy to captain Francois Pienaar, an Afrikaner, wearing a Springbok shirt with Pienaar's own number 6 on the back. This was widely seen as a major step in the reconciliation of white and black South Africans; as de Klerk later put it, "Mandela won the hearts of millions of white rugby fans." Mandela's efforts at reconciliation assuaged the fears of whites, but also drew criticism from more militant blacks. His estranged wife, Winnie, accused the ANC of being more interested in appeasing whites than in helping blacks.
More controversially, Mandela oversaw the formation of a Truth and Reconciliation Commission to investigate crimes committed under apartheid by both the government and the ANC, appointing Desmond Tutu as its chair. To prevent the creation of martyrs, the Commission granted individual amnesties in exchange for testimony of crimes committed during the apartheid era. Dedicated in February 1996, it held two years of hearings detailing rapes, torture, bombings, and assassinations, before issuing its final report in October 1998. Both de Klerk and Mbeki appealed to have parts of the report suppressed, though only de Klerk's appeal was successful. Mandela praised the Commission's work, stating that it "had helped us move away from the past to concentrate on the present and the future".
Mandela's administration inherited a country with a huge disparity in wealth and services between white and black communities. Of a population of 40 million, around 23 million lacked electricity or adequate sanitation, 12 million lacked clean water supplies, with 2 million children not in school and a third of the population illiterate. There was 33% unemployment, and just under half of the population lived below the poverty line. Government financial reserves were nearly depleted, with a fifth of the national budget being spent on debt repayment, meaning that the extent of the promised Reconstruction and Development Programme (RDP) was scaled back, with none of the proposed nationalisation or job creation. Instead, the government adopted liberal economic policies designed to promote foreign investment, adhering to the "Washington consensus" advocated by the World Bank and International Monetary Fund.
Under Mandela's presidency, welfare spending increased by 13% in 1996/97, 13% in 1997/98, and 7% in 1998/99. The government introduced parity in grants for communities, including disability grants, child maintenance grants, and old-age pensions, which had previously been set at different levels for South Africa's different racial groups. In 1994, free healthcare was introduced for children under six and pregnant women, a provision extended to all those using primary level public sector health care services in 1996. By the 1999 election, the ANC could boast that due to their policies, 3 million people were connected to telephone lines, 1.5 million children were brought into the education system, 500 clinics were upgraded or constructed, 2 million people were connected to the electricity grid, water access was extended to 3 million people, and 750,000 houses were constructed, housing nearly 3 million people.
The Land Restitution Act of 1994 enabled people who had lost their property as a result of the Natives Land Act, 1913 to claim back their land, leading to the settlement of tens of thousands of land claims. The Land Reform Act 3 of 1996 safeguarded the rights of labour tenants who live and grow crops or graze livestock on farms. This legislation ensured that such tenants could not be evicted without a court order or if they were over the age of sixty-five. The Skills Development Act of 1998 provided for the establishment of mechanisms to finance and promote skills development at the workplace. The Labour Relations Act of 1995 promoted workplace democracy, orderly collective bargaining, and the effective resolution of labour disputes. The Basic Conditions of Employment Act of 1997 improved enforcement mechanisms while extending a "floor" of rights to all workers, while the Employment Equity Act of 1998 was passed to put an end to unfair discrimination and ensure the implementation of affirmative action in the workplace.
Many domestic problems, however, remained. Critics like Edwin Cameron accused Mandela's government of doing little to stem the HIV/AIDS pandemic in the country; by 1999, 10% of South Africa's population were HIV positive. Mandela later admitted that he had personally neglected the issue, leaving it for Mbeki to deal with. Mandela also received criticism for failing to sufficiently combat crime, with South Africa having one of the world's highest crime rates; this was a key reason cited by the 750,000 whites who emigrated in the late 1990s. Mandela's administration was mired in corruption scandals, with Mandela being perceived as "soft" on corruption and greed.
Following the South African example, Mandela encouraged other nations to resolve conflicts through diplomacy and reconciliation. He echoed Mbeki's calls for an "African Renaissance", and was greatly concerned with issues on the continent; he took a soft diplomatic approach to removing Sani Abacha's military junta in Nigeria but later became a leading figure in calling for sanctions when Abacha's regime increased human rights violations. In 1996 he was appointed Chairman of the Southern African Development Community (SADC) and initiated unsuccessful negotiations to end the First Congo War in Zaire. In South Africa's first post-apartheid military operation, Mandela ordered troops into Lesotho in September 1998 to protect the government of Prime Minister Pakalitha Mosisili after a disputed election prompted opposition uprisings.
In September 1998, Mandela was appointed Secretary-General of the Non-Aligned Movement, which held its annual conference in Durban. He used the event to criticise the "narrow, chauvinistic interests" of the Israeli government in stalling negotiations to end the Israeli-Palestinian conflict and urged India and Pakistan to negotiate to end the Kashmir conflict, for which he was criticised by both Israel and India. Inspired by the region's economic boom, Mandela sought greater economic relations with East Asia, in particular with Malaysia, although this was scuppered by the 1997 Asian financial crisis. He attracted controversy for his close relationship with Indonesian President Suharto, whose regime was responsible for mass human rights abuses, although he privately urged Suharto to withdraw from the occupation of East Timor.
Mandela faced similar criticism from the west for his personal friendships with Fidel Castro and Muammar Gaddafi. Castro visited in 1998, to widespread popular acclaim, while Mandela met Gaddafi in Libya to award him the Order of Good Hope. When western governments and media criticised these visits, Mandela lambasted the criticisms as having racist undertones. Mandela hoped to resolve the long-running dispute between Libya and the US and Britain over bringing to trial the two Libyans, Abdelbaset al-Megrahi and Lamin Khalifah Fhimah, who were indicted in November 1991 and accused of sabotaging Pan Am Flight 103. Mandela proposed that they be tried in a third country, which was agreed to by all parties; governed by Scots law, the trial was held at Camp Zeist in the Netherlands in April 1999, and found one of the two men guilty.
The new Constitution of South Africa was agreed upon by parliament in May 1996, enshrining a series of institutions to check political and administrative authority within a constitutional democracy. De Klerk, however, opposed the implementation of this constitution, withdrawing from the coalition government in protest. The ANC took over the cabinet positions formerly held by the National Party, with Mbeki becoming sole Deputy President. When both Mandela and Mbeki were out of the country on one occasion, Buthelezi was appointed "Acting President", marking an improvement in his relationship with Mandela.
Mandela stepped down as ANC President at the December 1997 conference, and although he hoped that Ramaphosa would replace him, the ANC elected Mbeki to the position; Mandela admitted that by then, Mbeki had become "de facto President of the country". To replace Mbeki as Deputy President, Mandela and the Executive supported the candidacy of Jacob Zuma, a Zulu who had been imprisoned on Robben Island, but he was challenged by Winnie, whose populist rhetoric had gained her a strong following within the party; Zuma defeated her in a landslide vote at the election.
Mandela's relationship with Machel had intensified; in February 1998 he publicly stated that "I'm in love with a remarkable lady", and under pressure from his friend Desmond Tutu, who urged him to set an example for young people, he set a wedding for his 80th birthday, in July. The following day he held a grand party with many foreign dignitaries. Mandela had never planned on standing for a second term in office, and gave his farewell speech on 29 March 1999, after which he retired.
Retiring in June 1999, Mandela sought a quiet family life, to be divided between Johannesburg and Qunu. He set about authoring a sequel to his first autobiography, to be titled The Presidential Years, but it was abandoned before publication. Finding such seclusion difficult, he reverted to a busy public life with a daily programme of tasks, meeting with world leaders and celebrities, and when in Johannesburg worked with the Nelson Mandela Foundation, founded in 1999 to focus on combating HIV/AIDS, rural development and school construction. Although he had been heavily criticised for failing to do enough to fight the pandemic during his presidency, he devoted much of his time to the issue following his retirement, describing it as "a war" that had killed more than "all previous wars", and urged Mbeki's government to ensure that HIV+ South Africans had access to retrovirals. In 2000, the Nelson Mandela Invitational charity golf tournament was founded, hosted by Gary Player. Mandela was successfully treated for prostate cancer in July 2001.
In 2002, Mandela inaugurated the Nelson Mandela Annual Lecture, and in 2003 the Mandela Rhodes Foundation was created at Rhodes House, University of Oxford, to provide postgraduate scholarships to African students. These projects were followed by the Nelson Mandela Centre of Memory and the 46664 campaign against HIV/AIDS. He gave the closing address at the XIII International AIDS Conference in Durban in 2000, and in 2004, spoke at the XV International AIDS Conference in Bangkok, Thailand.
Publicly, Mandela became more vocal in criticising Western powers. He strongly opposed the 1999 NATO intervention in Kosovo and called it an attempt by the world's powerful nations to police the entire world. In 2003 he spoke out against the plans for the US and UK to launch the War in Iraq, describing it as "a tragedy" and lambasting US President George W. Bush and UK Prime Minister Tony Blair for undermining the UN. He attacked the US more generally, asserting that it had committed more "unspeakable atrocities" across the world than any other nation, citing the atomic bombing of Japan; this attracted international controversy, although he would subsequently reconcile his relationship with Blair. Retaining an interest in Libyan-UK relations, he visited Megrahi in Barlinnie prison, and spoke out against the conditions of his treatment, referring to them as "psychological persecution."
In June 2004, aged 85 and amid failing health, Mandela announced that he was "retiring from retirement" and retreating from public life, remarking "Don't call me, I will call you." Although continuing to meet with close friends and family, the Foundation discouraged invitations for him to appear at public events and denied most interview requests. He retained some involvement in international affairs and encouraged Zimbabwean President Robert Mugabe to resign over growing human rights abuses in the country. When this proved ineffective, he spoke out publicly against Mugabe in 2007, asking him to step down "with residual respect and a modicum of dignity." That year, Mandela, Machel, and Desmond Tutu convened a group of world leaders in Johannesburg to contribute their wisdom and independent leadership to some of the world's toughest problems. Mandela announced the formation of this new group, The Elders, in a speech delivered on his 89th birthday.
Mandela's 90th birthday was marked across the country on 18 July 2008, with the main celebrations held at Qunu, and a concert in his honour in Hyde Park, London. In a speech marking the event, Mandela called for the rich to help the poor across the world. Throughout Mbeki's presidency, Mandela continued to support the ANC, although he usually overshadowed Mbeki at any public events that the two attended. Mandela was more at ease with Mbeki's successor Jacob Zuma, although the Nelson Mandela Foundation were upset when his grandson, Chief Mandla Mandela, flew him out to the Eastern Cape to attend a pro-Zuma rally in the midst of a storm in 2009.
Since 2004, Mandela had successfully campaigned for South Africa to host the 2010 FIFA World Cup, declaring that there would be "few better gifts for us in the year" marking a decade since the fall of apartheid. Despite maintaining a low profile during the event, Mandela made a rare public appearance during the closing ceremony, where he received a "rapturous reception". In February 2011, he was briefly hospitalised with a respiratory infection, attracting international attention, before being re-hospitalised for a lung infection and gallstone removal in December 2012. After a successful medical procedure in early March 2013, his lung infection reoccurred and he was briefly hospitalised in Pretoria. On 8 June 2013, his lung infection worsened, and he was rehospitalised in Pretoria, reportedly in a serious condition. After four days, it was reported that he had stabilised and remained in a "serious, but stable condition".
Across the world, Mandela came to be seen as "a moral authority" with a great "concern for truth". Considered friendly and welcoming, Mandela exhibited a "relaxed charm" when talking to others, including his opponents. Although often befriending millionaires and dignitaries, he enjoyed talking with their staff when at official functions. In later life, he was known for looking for the best in everyone, even defending political opponents to his allies, though some thought him too trusting of others. He was renowned for his stubbornness and loyalty, and exhibited a "hot temper" which could flare up in anger in certain situations, also being "moody and dejected" away from the public eye. He also had a mischievous sense of humour.
Very conscious of his image, throughout his life he sought fine quality clothes, carrying himself in a "regal style" stemming from his childhood in the Thembu royal house, and during his presidency was often compared to a constitutional monarch. Considered a "master of imagery and performance", he excelled at presenting himself well in press photographs and producing soundbites.
Mandela was an African nationalist, an ideological position he held since joining the ANC, also being "a democrat, and a socialist". Although he presented himself in an autocratic manner in several speeches, Mandela was a devout believer in democracy and would abide by majority decisions even when deeply disagreeing with them. He held a conviction that "inclusivity, accountability and freedom of speech" were the fundamentals of democracy, and was driven by a belief in natural and human rights.
A democratic socialist, Mandela was "openly opposed to capitalism, private land-ownership and the power of big money". Influenced by Marxism, during the revolution Mandela advocated scientific socialism, although he denied being a communist during the Treason Trial. Biographer David James Smith thought this untrue, stating that Mandela "embraced communism and communists" in the late 1950s and early 1960s, though he was a "fellow traveller" rather than a party member. The 1955 Freedom Charter, which Mandela had helped create, called for the nationalisation of banks, gold mines, and land, which he believed necessary to ensure the equal distribution of wealth. Despite these beliefs, Mandela nationalised nothing during his presidency, fearing that this would scare away foreign investors. This decision was in part influenced by the fall of the socialist states in the Soviet Union and Eastern Bloc during the early 1990s.
Mandela has been married three times, has fathered six children, has 17 grandchildren, and a growing number of great-grandchildren. Considered physically undemonstrative with his children, he could be stern and demanding of them, although was more affectionate with his grandchildren.
Mandela's first marriage was to Evelyn Ntoko Mase, who was also from the Transkei, although they met in Johannesburg before being married in October 1944. The couple broke up in 1957 after 13 years, divorcing under the multiple strains of his adultery and constant absences, devotion to revolutionary agitation, and the fact she was one of Jehovah's Witnesses, a religion which requires political neutrality. The couple had two sons, Madiba "Thembi" Thembekile (1946–1969) and Makgatho Mandela (1950–2005), and two daughters, both named Makaziwe Mandela (known as Maki; born 1947 and 1953). Their first daughter died aged nine months, and they named their second daughter in her honour. Mase died in 2004, and Mandela attended her funeral. Makgatho's son, Mandla Mandela, became chief of the Mvezo tribal council in 2007.
Mandela's second wife, Winnie Madikizela-Mandela, also came from the Transkei area, although they, too, met in Johannesburg, where she was the city's first black social worker. They had two daughters, Zenani (Zeni), born 4 February 1958, and Zindziswa (Zindzi) Mandela-Hlongwane, born 1960. Zindzi was only 18 months old when her father was sent to Robben island. Later, Winnie would be deeply torn by family discord which mirrored the country's political strife; while her husband was serving a life sentence in the Robben Island prison, her father became the agriculture minister in the Transkei. The marriage ended in separation (April 1992) and divorce (March 1996), fueled by political estrangement. Mandela was still in prison when his daughter Zenani was married to Prince Thumbumuzi Dlamini, elder brother of King Mswati III of Swaziland, in 1973. Although she had vivid memories of her father, from the age of four up until sixteen, South African authorities did not permit her to visit him. In July 2012, Zenani was appointed ambassador to Argentina, becoming the first of Mandela's three remaining children to enter public life.
Mandela remarried on his 80th birthday in 1998, to Graça Machel née Simbine, widow of Samora Machel, the former Mozambican president and ANC ally who was killed in an air crash 12 years earlier.
Within South Africa, Mandela is widely considered to be "the father of the nation", and "the founding father of democracy", being seen as "the national liberator, the saviour, its Washington and Lincoln rolled into one". In 2004, Johannesburg granted Mandela the freedom of the city, with Sandton Square being renamed Nelson Mandela Square, after a Mandela statue was installed there. In 2008, another Mandela statue was unveiled at Groot Drakenstein Correctional Centre, formerly Victor Verster Prison, near Cape Town, standing on the spot where Mandela was released from the prison.
He has also received international acclaim. In 1993, he received the joint Nobel Peace Prize with de Klerk. In November 2009, the United Nations General Assembly proclaimed Mandela's birthday, 18 July, as "Mandela Day", marking his contribution to the anti-apartheid struggle. It called on individuals to donate 67 minutes to doing something for others, commemorating the 67 years that Mandela had been a part of the movement.
Awarded the US Presidential Medal of Freedom and the Order of Canada, he was the first living person to be made an honorary Canadian citizen. The last recipient of the Soviet Union's Lenin Peace Prize, in 1990 he received the Bharat Ratna Award from the government of India, and in 1992 received Pakistan's Nishan-e-Pakistan. In 1992 he was awarded the Atatürk Peace Award by Turkey. He refused the award, citing human rights violations committed by Turkey at the time, but later accepted the award in 1999. Elizabeth II awarded him the Bailiff Grand Cross of the Order of St. John and the Order of Merit.
Many artists have dedicated songs to Mandela. One of the most popular was from The Special AKA who recorded the song "Free Nelson Mandela" in 1983, which Elvis Costello also recorded and had a hit with. Stevie Wonder dedicated his 1985 Oscar for the song "I Just Called to Say I Love You" to Mandela, resulting in his music being banned by the South African Broadcasting Corporation. In 1985, Youssou N'Dour's album Nelson Mandela was the Senegalese artist's first US release. Other artists who released songs or videos honouring Mandela include Johnny Clegg, Hugh Masekela, Brenda Fassie, Beyond, Nickelback, Raffi, and Ampie du Preez and AB de Villiers.
In the United Kingdom, various monuments have been erected to Mandela. In 2001, Nelson Mandela Gardens in Millennium Square, Leeds was officially opened, with Mandela being awarded the freedom of the city. In Leicester, Nelson Mandela Park was created. In 2007, a Mandela statue was unveiled at Parliament Square in London by actor Richard Attenborough, Mayor Ken Livingstone, Donald Woods' widow Wendy Woods, and Prime Minister Gordon Brown. Mandela stated that it represented not just him, but all those who have resisted oppression, especially those in South Africa.
Mandela has been depicted in cinema and television on multiple occasions. The 1997 film Mandela and de Klerk starred Sidney Poitier as Mandela, while Dennis Haysbert played him in Goodbye Bafana (2007). In the 2009 BBC television film Mrs Mandela, Nelson Mandela was portrayed by David Harewood, and Morgan Freeman portrayed him in Invictus (2009).
“Cisco Ethernet switches to play broader roles” – says an article under Trend Analysis, on page 11 of the March 22, 2010 issue of Network World…
* But did you know that Ethernet Switches aren't affected by the looming change of IP standards (see my IPv6 article below)? Nope, IPv6 will just hop across them the same as IPv4 does.
Ethernet switches operate at the Data Link Layer, the second rung up the ladder on the way to your software (Application) on your computer. There are 7 layers altogether in this model (the OSI Reference Model), which is used as a reference for how things actually work. Layer 1, the Physical Layer, is where wires connect Ethernet Switches and Computers together. This is where "signaling" occurs and things are pretty much encoded and decoded in binary. That's a pretty low level, eh?
Layer 2 deals with "Physical Addressing", but that doesn't mean IP Addressing, it means "MAC" addressing. That's those long hexadecimal addresses that every network card, from wired to wireless, has. At this level the Ethernet Switch doesn't know and doesn't care about IP Addressing. You could be talking about frogs or military aircraft and Layer 2 wouldn't be any wiser about it. An Ethernet Switch, as opposed to an old-style Network Hub (which basically just blasted every message to every computer whether it wanted it or not), keeps track of which MAC addresses are present on each of its ports (those jacks that you plug CAT5 or CAT6 RJ45 connector-type cables into) and builds a table for "Fast Switching" of Ethernet Frames (Ethernet's smallest unit of messaging) to the correct port. That's how traffic gets to a port on an Ethernet Switch.
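To make that concrete, here's a toy sketch in C++ of the kind of MAC-address-to-port table a switch keeps. It's purely illustrative (the class name and port numbers are made up for this example), not code from any real switch:

#include <iostream>
#include <string>
#include <unordered_map>

// Toy model of a switch's forwarding table: MAC address -> port number.
// A real switch also ages entries out and floods frames for unknown MACs.
class MacTable {
public:
    // Called whenever a frame arrives: remember which port this source MAC is on.
    void learn(const std::string& srcMac, int port) { table_[srcMac] = port; }

    // Returns the port to forward a frame to, or -1 meaning "flood to all ports".
    int lookup(const std::string& dstMac) const {
        auto it = table_.find(dstMac);
        return it == table_.end() ? -1 : it->second;
    }
private:
    std::unordered_map<std::string, int> table_;
};

int main() {
    MacTable t;
    t.learn("aa:bb:cc:dd:ee:01", 3);   // a frame from this MAC arrived on port 3
    t.learn("aa:bb:cc:dd:ee:02", 7);   // another one arrived on port 7
    std::cout << t.lookup("aa:bb:cc:dd:ee:02") << "\n";  // 7: fast-switch it to port 7
    std::cout << t.lookup("aa:bb:cc:dd:ee:99") << "\n";  // -1: unknown, flood it
}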
So if an Ethernet Switch is dealing with Ethernet Frames and Mac Addresses – how in the heck do you get IP Traffic (Internet Traffic) to a computer?
Enter "ARP" – Address Resolution Protocol. All computers, in their TCP/IP implementations, know how to use a broadcast protocol called ARP. ARP messages are basically broadcasts sent out by your computer's TCP/IP stack over your Ethernet card, saying "ARP: who has 192.168.1.1?". The computer that actually has the IP address 192.168.1.1 answers something like this: "ARP: 192.168.1.1 is *me* at MAC address aa:bb:cc:dd:ee:ff". And from then on, for a little while, all traffic for that IP address is sent to that MAC address … which our friendly Ethernet Switch knows is on one particular port.
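Here is a similarly simplified sketch of the host side of that exchange: an ARP cache mapping IP addresses to MAC addresses, with a stand-in function playing the part of the broadcast/reply round trip. The function names and hard-coded addresses are assumptions for illustration only:

#include <iostream>
#include <map>
#include <optional>
#include <string>

// Toy ARP cache: IP address -> MAC address. On a cache miss, a real TCP/IP
// stack broadcasts "who has <IP>?" and waits for the owner's reply.
std::map<std::string, std::string> arpCache;

// Stand-in for the broadcast/reply round trip described above.
std::optional<std::string> arpRequest(const std::string& ip) {
    if (ip == "192.168.1.1")                       // pretend this host answers
        return std::string("aa:bb:cc:dd:ee:ff");
    return std::nullopt;                           // nobody answered
}

std::optional<std::string> resolve(const std::string& ip) {
    auto hit = arpCache.find(ip);
    if (hit != arpCache.end()) return hit->second; // cached from an earlier reply
    auto mac = arpRequest(ip);                     // broadcast and wait
    if (mac) arpCache[ip] = *mac;                  // remember it "for a little while"
    return mac;
}

int main() {
    std::cout << resolve("192.168.1.1").value_or("unresolved") << "\n";  // aa:bb:cc:dd:ee:ff
    std::cout << resolve("192.168.1.99").value_or("unresolved") << "\n"; // unresolved
}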
Wireless (setting aside the 802.11a/b/g/n protocols themselves) works pretty much the same way. A wireless access point acts as if it were a port on an Ethernet Switch. Aside from any router functionality that might be in an Access/Router combo unit, it's just a fancy "wireless Ethernet Switch".
How about that????
DBA Alan Spicer Telcom / Alan Spicer Marine Telecom
Computer Services, Wired/Wireless Networking,
Cell/Sat/Landline Communications, General Consulting…
Marine, Business, Small Office and Home Office (SOHO)
* Cost Savings and Integration of Multiple Internet Technologies
on board Sail and Motor Yachts * Documentation, Operating
Instructions, and Support after the Sale *
Mobile Internet! Step up to the HSPA 3G Fast Internet!
Ericsson W35 released in the USA. This you’ve gotta SEE!!
Better looking presentation than W25 (you might not want to
hide this one in the Doghouse!) + High Speed Upload which
the W25 did not have.
Livewire: Access Controller (Service Selector): | 1 | 3 |
Since the current Arduino tools do not support in-circuit debugging, you will have to rely heavily on the serial print outs when tracking down those hard-to-find bugs, unless you are one of those few elites whose code just works 100% every time. It is all good when you are doing your development with a computer readily available. But what if you need to capture the outputs when you do not have access to a computer? I found myself running into this situation quite often.
One way to solve this problem is to use a serial monitor (like this one I built before) to output the values onto an LCD display. But if your application generates a lot of messages, it would still be hard to spot the relevant information as you can only see the last couple of lines of the data.
So my solution is to add a non-volatile off-screen buffer to the serial display so that multiple rows of data can be captured during run time and retained for later debugging.
While the ATmega328 has 1 Kbyte of built-in EEPROM, that is not nearly enough space to store much information for debugging purposes, so clearly some kind of external memory is needed.
For the external memory, I prefer using FRAM over EEPROM as FRAM can be written to almost instantaneously whereas for EEPROM some delay is required for the write operation. Also, FRAM can handle many more read/write cycles which makes it ideal for this kind of applications in which data needs to be refreshed all the time. In one of my previous blog postings, I wrote about how to interface FRAM with Arduino. You can refer to that posting for more details.
In my implementation, I used two Ramtron FM25C160‘s which provide 4 Kbytes of storage space. And I used a standard 16×2 character LCD for display. If we allow each line to contain up to 32 bytes of data (16 bytes will be off screen, but can be viewed via scroll function) we will have 128 lines available to us in the buffer. If this is not sufficient, you can always add more memory chips or choose one that offers bigger storage space.
To store the information sent to the serial monitor in the off-screen buffer, a circular buffer construct is needed. The serial data stream is stored into the circular buffer as it comes in, and when the buffer becomes full, the pointer that points to the buffer's bottom address is incremented, purging the oldest record in the buffer and making room for a new row. So the circular buffer always contains the latest serial outputs (up to the latest 128 lines in this setup). You can find the implementation details in the full code listing at the end.
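To give a feel for how such a circular line buffer works, here is a simplified C++ sketch (not the actual firmware for this project). It uses the numbers described here (4 Kbytes of FRAM, 32 bytes per line, 128 lines) and simulates the FRAM with a plain array; on the real board, the framWrite()/framRead() helpers would be the SPI routines covered in the earlier FRAM post:

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <iostream>

// Simulated FRAM: 2 x FM25C160 = 4096 bytes. On the real hardware these two
// helpers would be the SPI read/write routines for the FRAM chips.
const uint16_t LINE_SIZE = 32;
const uint16_t NUM_LINES = 128;
uint8_t fram[LINE_SIZE * NUM_LINES];

void framWrite(uint16_t addr, const uint8_t* d, uint16_t n) { memcpy(fram + addr, d, n); }
void framRead(uint16_t addr, uint8_t* d, uint16_t n)        { memcpy(d, fram + addr, n); }

uint16_t headLine  = 0;   // next line slot to write
uint16_t lineCount = 0;   // how many valid lines the buffer currently holds

// Store one line of serial output. Once the buffer is full, the oldest line
// is overwritten, so the buffer always holds the latest 128 lines.
void bufferLine(const char* text) {
  uint8_t line[LINE_SIZE] = {0};
  strncpy(reinterpret_cast<char*>(line), text, LINE_SIZE - 1);
  framWrite(headLine * LINE_SIZE, line, LINE_SIZE);
  headLine = (headLine + 1) % NUM_LINES;           // wrap around: circular buffer
  if (lineCount < NUM_LINES) lineCount++;          // otherwise the oldest row was purged
}

// Fetch the n-th oldest line still held in the buffer (0 = oldest).
void readLine(uint16_t n, char* out) {
  uint16_t oldest = (headLine + NUM_LINES - lineCount) % NUM_LINES;
  framRead(((oldest + n) % NUM_LINES) * LINE_SIZE,
           reinterpret_cast<uint8_t*>(out), LINE_SIZE);
}

int main() {
  char msg[LINE_SIZE], out[LINE_SIZE];
  for (int i = 0; i < 130; i++) {                  // two more lines than the buffer holds
    snprintf(msg, sizeof(msg), "debug line %d", i);
    bufferLine(msg);
  }
  readLine(0, out);
  std::cout << out << "\n";                        // "debug line 2": lines 0 and 1 were purged
}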
To simplify the coding a little bit, the values of the pointers for tracking the buffer top, bottom and the current scroll position are kept in volatile memory (RAM); they are flushed to the ATmega's EEPROM on demand by pressing the "save" button. So if you want to save the data recorded so far (note that the actual data is always stored in the buffer, but without saving the buffer pointers, there is no way to retrieve it), you can press the "save" button before powering off the MCU. When the MCU is first powered up, it checks to see whether the buffer location pointers were previously stored, and if so the previously saved values are loaded and the displayed content is scrolled to that position so that the user can continue to work on the data in exactly the same state as before. The save button also functions as a "clear" button in my implementation. If you hold down the "save" button for more than a second, all buffer pointer locations are reset to zero and all information previously stored in the FRAM is cleared. You can examine the source code for more details.
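A rough sketch of that save/clear button logic might look like the following (again, illustrative only and not the project's actual source code; the pin number, EEPROM addresses and "magic" marker value are assumptions, while EEPROM.put()/EEPROM.get() are standard Arduino EEPROM library calls):

#include <Arduino.h>
#include <EEPROM.h>

const uint8_t  SAVE_PIN    = 7;       // assumed button pin, wired active-low
const int      EE_MAGIC    = 0;       // assumed EEPROM layout: marker first...
const int      EE_POINTERS = 2;       // ...then the three buffer pointers
const uint16_t MAGIC       = 0xB00F;  // arbitrary "pointers were saved" marker

uint16_t headLine = 0, lineCount = 0, scrollPos = 0;  // RAM copies of the pointers

void setup() {
  pinMode(SAVE_PIN, INPUT_PULLUP);
  uint16_t magic = 0;
  EEPROM.get(EE_MAGIC, magic);
  if (magic == MAGIC) {                       // pointers were saved on a previous run:
    EEPROM.get(EE_POINTERS,     headLine);    // restore them so the display comes back
    EEPROM.get(EE_POINTERS + 2, lineCount);   // in exactly the same state as before
    EEPROM.get(EE_POINTERS + 4, scrollPos);
  }
}

void loop() {
  if (digitalRead(SAVE_PIN) == LOW) {         // save/clear button pressed
    unsigned long t0 = millis();
    while (digitalRead(SAVE_PIN) == LOW) {}   // wait for release to measure hold time
    if (millis() - t0 > 1000) {               // held > 1 s: reset all pointers to zero
      headLine = lineCount = scrollPos = 0;   // (the real unit also wipes the FRAM here)
    }
    EEPROM.put(EE_MAGIC, MAGIC);              // flush the pointers to EEPROM
    EEPROM.put(EE_POINTERS,     headLine);
    EEPROM.put(EE_POINTERS + 2, lineCount);
    EEPROM.put(EE_POINTERS + 4, scrollPos);
  }
}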
Of course, we could have used a few dedicated locations in the FRAM to store the buffer pointer information on every single buffer write instead of saving it in the MCU's EEPROM. This alternative approach would probably be a little more convenient, since the buffer pointers would be updated and saved automatically each time a data row is written. There is no particular reason why I couldn't have done that; it is totally up to you to decide which method best meets your particular needs.
In the picture below you can see the finished serial display. Besides the save/clear button mentioned earlier, there are four additional buttons for scrolling.
Note that I omitted the UART-to-RS232 converter chip since most of my debugging is done at UART levels. But you can easily add one if you intend to use the display with RS232 voltage-level compatible devices.
Here is a typical usage scenario: connect the serial display to the circuit you are debugging via serial cables (Rx, Tx, and Gnd). Make sure that both sides operate at the same baud rate (in this case 9600 bps), then start your testing. The serial outputs are stored into the non-volatile memory on the serial display board. When you are done capturing the data, press the save button; you can then power off and disconnect the serial display from your test circuit and analyze the stored information later.
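On the circuit being debugged, nothing special is needed: any ordinary serial print at the matching baud rate will be captured. As an example, a sketch on the device under test could simply do the following (the analog pin and message are of course arbitrary):

    void setup() {
      Serial.begin(9600);             // must match the serial display's baud rate
    }

    void loop() {
      int reading = analogRead(A0);   // whatever value you want to log
      Serial.print("A0 = ");
      Serial.println(reading);
      delay(1000);
    }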
<urn:uuid:49ffbe6d-f59e-49f0-8b84-5f3fd2377f26> | Protecting Life on Earth: An Introduction to the Science of Conservation - , Peter B. Moyle
Instant download (digital version); works on PC, Mac, and modern smartphones and tablets. To read it, create an Adobe account, install or update Adobe Digital Editions, and buy the book on TRADEBIT.COM (see the How-To). Standard iOS and Android reader apps work, too.
Author: Marchetti, Michael P.
Author: Moyle, Peter B.
Publisher: University of California Press
Title: Protecting Life on Earth: An Introduction to the Science of Conservation
Pages: 00240 (Encrypted EPUB) / 00240 (Encrypted PDF)
On Sale: 2010-07-02
Category: Nature : Environmental Conservation & Protection - General
More Files From This User
- Big Ecology: The Emergence of Ecosystem Science - Prof. Coleman, David C.
- Leopolds Shack and Rickettss Lab: The Emergence of Environmentalism - Michael J. Lannoo
- Absolute Music, Mechanical Reproduction - Arved Ashby
- Reading between the Wines - Terry Theise
- The Maternal Factor: Two Paths to Morality - Nel Noddings
Saving Forests, Protecting People?: Environmental Conservation In Central America - , Max J. Pfeffer
By examining the connections among local values, material needs, and environmental management regimes, Saving Forests, Protecting People? explores that diffi......
Zoo Conservation Biology - , Stephan M. Funk
In the face of ever-declining biodiversity, zoos have a major role to play in species conservation. Written by professionals involved in in-situ conservation......
Responsible Tourism: Critical Issues For Conservation And Development
Responsible Tourism presents a wide variety of valuable lessons learned in responsible tourism initiatives in Southern Africa that many tourism practitioner......
Too Many People?: Population, Immigration, And The Environmental Crisis - , Simon Butler
A clear, evocative, and well-documented refutation of the idea that overpopulation is at the root of many environmental problems. Author: Angus, Ian Author:...... | 1 | 2 |
<urn:uuid:4d11a740-c18c-45f1-9f13-4d3b665b5bee> | - Collection definition, the act of collecting. See more. something that is collected; a group of objects or an amount of material accumulated in one location, esp. for. — “Collection | Define Collection at ”,
- collection n. The act or process of collecting. A group of objects or works to be seen, studied, or kept together. — “collection: Definition, Synonyms from ”,
- The Collection. The Phillips Collection invites visitors to experience an extraordinary collection ranging from masterpieces of French impressionism and American modernism to art of the present day. Today, the museum's collection includes nearly 3,000 works by American and European. — “The Collection”,
- Promotion Item Windsor Collection, Promote Your Business or Organization, , An Inc 500 Award Winner Can Help!. — “Promotion Item Windsor Collection PG6 - ”,
- a : something collected; especially : an accumulation of objects gathered for study, comparison, or exhibition or as a hobby b : group, aggregate c : a set of apparel designed for sale usually in a particular season. Examples of COLLECTION. a system of tax collection. — “Collection - Definition and More from the Free Merriam”, merriam-
- Definition of collection in the Legal Dictionary - by Free online English dictionary and encyclopedia. What is collection? Meaning of collection as a legal term. What does collection mean in law?. — “collection legal definition of collection. collection”, legal-
- Begun at the Museum's inception in 1981, today the collection numbers approximately 125,000 artifacts. The collection is constantly growing as collectors, studios, manufacturers, and industry workers donate new artifacts. The Museum loans objects from its collection to other cultural. — “Museum of the Moving Image”, movingimage.us
- Buy collection, Collectibles items on eBay. Find great deals on Books, DVDs Movies items and get what you want now!. — “collection items - Get great deals on Collectibles, Books”,
- With a history dating back to 1888, the Milwaukee Art Museum's Collection includes nearly 25,000 works from antiquity to the present, encompassing painting, drawing, sculpture, decorative arts, prints, video art and installations, and textiles. — “Milwaukee Art Museum | collection”,
- Collection laws and collection agency resources. Research State and Federal collection laws and locate collection agencies. — “Collection Laws, Collection Agency, Collection Agencies”, debt-collection-
- Indulge in the most popular beauty care, fragrance, makeup, bath and body products at Victoria's Secret. Choose from the world's top beauty brands and anti-aging Introducing Midnight Glamour, a limited-edition holiday collection with the season’s hottest shades, touched by the allure of night. — “VS Makeup - Victoria's Secret”, www2
- One of the world's largest video sites, serving the best videos, funniest movies and clips. Film Trailer | Topics: THE SIX MILLION DOLLAR MAN: THE COMPLETE COLLECTION, Lee Majors, Richard Anderson, Martin E. Brooks. — “Videos tagged with Collection - Metacafe”,
- Since our inception in 2003, Shaila's by N Collection has enjoyed a phenomenal response from its customers attributed to its unique designs. Today, N Collection designs are labeled as "exclusive and wearable work of art" and are sold at prices ranging from AED 300 to AED 499 only. — “NCollection - Unique collection of designer shaila and abaya”, ncollection.ae
- LAMBORGHINI GALLARDO LP560-4 - SUPER TROFEO - BLACK / BLANCPAIN #1 MCLAREN F1 - MAGNESIUM SILVER/METALLIC SILVER. More Info. KOENIGSEGG CCX - BLACK. New. More. — “AUTOart -”, aa-
- This is the official web site of The Frick Collection If you are planning a visit to The Frick Collection, please note that not all artworks are on view at all times. — “Frick Collection and Frick Art Reference Library, NY”,
- collection (plural collections) A set of items or objects procured or gathered together by a person, group, or other agent. The attic contains a remarkable collection of antiques, oddities, and random junk. The asteroid belt consists of a collection of dust, rubble, and minor planets. — “collection - Wiktionary”,
- Definition of collection in the Online Dictionary. Meaning of collection. Pronunciation of collection. Translations of collection. collection synonyms, collection antonyms. Information about collection in the free online English dictionary and. — “collection - definition of collection by the Free Online”,
- Ohhhhh, they dug their own grave it sounds like. Violations galore. Go to the county court clerk and request a "complete" copy of the case file. Do that ASAP From what you stated the letter said, I'm betting that there is no judgment filed. If. — “How do I handle collection agency bogus court papers?”,
- 12 talks at the West Collection. 12 talks is a series of monthly discussions and artist On November 16th at 12:30 pm, Tristin Lowe, creator of "Big Chair", will be visiting the West Collection, to discuss his monumental sculpture, with SEI employees and other visitors. West Collection tour to follow. — “Home”,
- Collection (museum), objects in a particular field forms the core basis for the museum Collection (Oxford Colleges), a beginning-of-term exam or Principal's Collections. — “Collection - Wikipedia, the free encyclopedia”,
- Buy Home decorators collection from top rated stores. Comparison shopping for the best price. — “Home decorators collection Living Room Furniture at Bizrate”,
- collection - definition of collection from : General: Process of recovering amounts owed to a firm by its customers. — “collection definition”,
related videos for collection
- My Dvd Collection Update 6/25/09 Join My new Message board donmurphboard1 Dvd Collection update for June 25th 2009 showing the dvds Ive gotten over the last 2 weeks. In the update I talk about and give my reviews of the dvds and the movies themselves. The Dvds I got and Review in this update are: * Backwoods : Directed by Marty Weiss - Starring Ryan Merriman, Danny Nucci, Haylie Duff, Mark Rolston, Troy Winbush, Deborah Van Valkenburgh, Craig Zimmerman, Mimi Michaels, Jamison Yang, Jonathan Slavin, Eric Larkin, Willow Geer, Robert Allen Mukes, John Hemphill, Blake Lindsley * Friday the 13th Remake : Directed by Marcus Nispel - Jared Padalecki, Danielle Panabaker, Amanda Righetti, Travis Van Winkle, Aaron Yoo, Derek Mears, Jonathan Sadowski, Julianna Guill, Ben Feldman, Arlen Escarpeta, Ryan Hansen, Willa Ford, Nick Mennell, America Olivo and Kyle Davis * Ghost Busters : Directed by Ivan Reitman - Starring Bill Murray, Dan Aykroyd, Sigourney Weaver, Harold Ramis, Rick Moranis, Annie Potts, William Atherton and Ernie Hudson * Star Trek the original Series season 1 - Starring Leonard Nimoy, William Shatner, DeForest Kelley, Nichelle Nichols, James Doohan, Eddie Paskey, Bill Blackburn and George Takei . * The Three Stooges Volume 6 : 1949 - 1951 Starring Moe Howard, Larry Fine and Shemp Howard * The Money Pit : Directed by Richard Benjamin - Starring Tom Hanks, Shelley Long, Alexander Godunov, Maureen Stapleton, Joe Mantegna and Brian Backer *Invasion Iowa : Starring William Shatner, Desi Lydic ...
- MEGA MICHAEL JACKSON ROBOT & MOONWALK COLLECTION (NEW CLIPS) MY MEGA MJ ROBOT MOONWALK DANCE COLLECTION woo.. MJ MJ MJ MJ Michael jacksons the best dancer ever and i wanted to show u this video which i had fun making over the weekend, No Fancy Crap! just Pure Entertainment from MICHAEL JACKSON :D enjoy :)
- Makeup Collection/Organization /My Setup **This is my personal makeup area and my own personal makeup. NOT my makeup I use professionally or any part of my kit-- I'll do a video on that eventually. My most requested video ever! (I think...) Hope this is helpful! Click here for my closet organization video: Follow me on twitter for more updates!: Ring: House of Harlow
- Funny Videos/ Creative Ads Collection Part 4
- Expand Your Shoe Collection! The title says it all! Chriselle is going to show you how to expand your shoe collection without buying new shoes! Enjoy :D Music by Pink Martini Buy the song here bit.ly Please follow Chriselle and I at ♥ ♥ ♥ If you have a Facebook, please add us! ♥ ♥
- Meganheartsmakeup Makeup Collection! Hi! This is my makeup collection video! I hope you enjoy! No hate comments please! You must be subscribed to me to enter, and you must leave a video response of you makeup collection, or comment your favorite makeup item! PLEASE ONLY ENTER 1 TIME! Do not spam the comments! Thank you!...
- My Xbox 360 Collection Showing off my 360 games. Just note that I only buy multi-platform games for the Xbox 360. Join me on Facebook: Fan page: Profile page: Follow me on Twitter: Listen to my Podcast All Gen Gamers with co-hosts HappyConsoleGamer, Gamester81, and TheEMUreview: AllGenGamers YouTube Channel Visit my website: Join the Forum Community Watch me live on my Justin.TV channel www.Justin.tv
- the BEST colt .45 1911 animation collection ( with labeled parts ) here are some of the best 1911 pistol animations on the tube, i've compiled them and put on a background music. i've also labeled the parts on the assembly animation using cyberlink power director. videos are copy righted from sterling roth: Mexicoxican@ and M1911.ORG URL :
- Makeup Collection and Storage I am NOT trying to brag in any way, shape, or form. please do not watch this if you feel you might get offended at all.
- Updated (Again) Makeup Collection, Organization, and Storage subscribe to my vlog channel: follow me on twitter: SHOP! www.glitzy- brush holders glitzy- coupon codes for glitzy-glam HOLIDAY10 for 10% off orders up to $100 HOLIDAY15 for 10% off orders over $100 ** the back to mac program is where you take any 6 empty mac makeup containers back to mac you get a free lipstick (at counters) or a free lipstick, gloss, or shadow (at stores.)
- My Perfume Collection Thanks so much for watching everyone! WHERE ELSE TO FIND ME: My Vlog Channel! Read my blog! Follow me on Twitter? http Be my friend?! LINKS TO PRODUCTS MENTIONED: Britney Spears Midnight Fantasy: Matthew Williamson Eau de Parfum: Marc Jacobs Lola: Miller Harris Geranium Bourbon: Beyonce Heat: Chloe Love, Chloe: Escada Ocean Lounge: Paco Robanne Lady Million: Vivenne Westwood Let it Rock: Jo Malone Dark Amber & Ginger Lily: WHAT I'M WEARING: Nails: MAC Boom (Limited Edition) Top: Zara & Abercombie Cardigan: TKMaxx Makeup: Chanel Vitalumiere Aqua Virgin Vie Concealer Palette Stage Line Proffesional Liquid Eyeliner MAC False Lash Mascara Benefit Bella Bamba Blush New CID Lipgloss in Honey Pot Disclaimer: I am not being paid by any of the companies mentioned to make this video, or to give favorable reviews. All opinions as always are 100% honest and my own. Some of these products were given to me as gifts, some were purchased with my own money. The links above are affiliate links.
- The World's Largest Record Collection Rocketboom Spotlight on The Archive, a film by Sean Dunne. Paul Mawhinney was born and raised in Pittsburgh, PA. Over the years he has amassed what has become the world's largest record collection. Due to health issues and a struggling record industry Paul is being forced to sell his collection. This is the story of a man and his records. View the entire documentary at http
- Castlevania Music: BLOODY TEARS COLLECTION All the versions of Bloody Tears 0:00 Castlevania II: Simon's Quest (1988) [Famicom/NES/Wii] 0:29 Haunted Castle (1988) [Arcade/PS2] 1:00 Super Castlevania IV (1991) [Super Famicom/SNES/Wii] 1:59 Akumajo Dracula X68000 (1993) [The Sharp X68000] 3:01 Akumajo Dracula X: Chi no Rondo (1993) [PC-Engine: Turbo Duo/PSP] 4:16 Castlevania Bloodlines (1994) [Sega Genesis] 4:48 Castlevania Dracula X / Vampire's Kiss (1995) [Super Famicom/SNES] 6:00 Akumajo Dracula X: Gekka no Yasoukyoku - Nocturne in the Moonlight - Skeleton Leader Battle (1998) [Sega Saturn] 7:03 Castlevania Legends (1998) [GB] 7:41 Castlevania Chronicles (2001) [Playstation] 8:15 Castlevania: Dawn of Sorrow (2005) [NDS] 8:59 Castlevania: Order of Shadows (2007) [Mobile Phones] This video was supposed to include The Dracula X Chronicles' version as well as Castlevania Jugdment's version of Bloody Tears as the last two songs, but it exceeded the 10 minute time limit. Those are the rules. For the following missing tracks, click this link: Castlevania: The Dracula X Chronicles (2007) [PSP] Castlevania Judgment (2008) [Wii] For the following missing track, click this link: Akumajou Dracula X: Gekka no Yasoukyoku - Nocturne in the Moonlight - Boss Fight (1998) [Sega Saturn] PLUS A BONUS TRACK!!!!!!!!!!!! Just in case you are wondering, the game for Sega Saturn called Akumajo Dracula X: Gekka no Yasoukyoku - Nocturne in the Moonlight is actually the original Japanese name for Castlevania ...
- Jewelry Collection Dear FTC Man, As you know, I am not affiliated nor did I get free jewelry from Macy's, Target, or any Other Company and was not paid one dime to show my Favorite Jewelry Collection. Your offer for me to Stop calling You" FTC Man" and to call You "Tiger" is Unacceptable. No, I do not own Golf Clubs. Yes, I do know CPR and NO I will not join The Mile High Club with you! NO! I will NOT tell you which YouTube Guru's can be Bribed! (Although I think Many COULD be for the RIGHT price.) Again, Go home to your wife...I told You, It's OVER between us! ~Lana PS. Yes, I can meet you in Barbados next Tuesday.
- THE LAZER COLLECTION - 1, 2 AND 3. Now with captions for the deaf! (english only) Shoop da collection. This isn't mine, this was created by Dom Fera. I've cut out most of the interruptions like the one at the end of episode two. Please, Dom Fera, make more! Also, I like that fact that alot of people are copying my idea of making a collection of 1, 2 and 3. Great work guys, now think up your own ideas.
- My Puzzle Collection Some I solve, some I don't. My Puzzle collection hasn't changed (much) since I made the video, so no I won't make another one.
- Calvin Klein Collection 2011 Featuring models Lara Stone and Tyson Ballou. Directed by Fabien Baron.
- Super Mario Galaxy 2 Tricks and Shortcuts Collection This is a collection of most of the known tricks and shortcuts in SMG2. A lot of these I found on my own, but there are a good few as well that were posted on the SDA forum first. Thanks to: MAST3RLINKX (the Yoshi Infinite Flutter technique), TTom (Fluffy Bluff shortcut), Manocheese (Space Storm tower tricks, first and third tricks in Clockwork Ruins), ComboKing (for posting about the second Rightside Down trick, although it's intentional), neskamikaze (jumping on the wall in Bowser Jr.'s Fiery Flotilla), LinksDarkArrows (the second Hightail Falls jump and the first Bowser's Galaxy Generator trick), and logitechSDAZ (the Cosmic Cove trick, although he says someone else found it first). I found everything else on my own, although I was beaten to the punch on a few of them. That said - there's a large variety of tricks in here, from minor 5-second timesavers to tricks that skip more than half the level. 0:00 - Sky Station Galaxy - Route for the second planet. 0:24 - Spin-Dig Galaxy - Skip drilling. 0:30 - Spin-Dig Galaxy - Skip drilling in a different spot. 0:36 - Fluffy Bluff Galaxy - Major shortcut that skips half the level. 1:05 - Rightside Up Galaxy - Skip a gravity switch. 1:15 - Rightside Up Galaxy - Skip the ending 2D section. [yes, I already know there are 1ups in the corners] 1:37 - Bowser Jr.'s Fiery Flotilla - Skip almost everything up until the boss. 1:56 - Puzzle Plank Galaxy - Skip ground pounding a few pegs, then skip wall jumping up a wall. 2:15 - Hightail ...
- Updated perfume collection! Let it be said that lollipop26 likes them long - her videos that is! :) Updated perfume collection as requested featuring newly dyed barnet! I didnt mention a couple of other fragrances - they are not with me at the moment - an Arabic perfume called Shams and a perfume oil by Ebba called Miss Marissa. Dress worn is by American Apparel (I can happily live in AA dresses during warm weather) Necklace is from Etsy (talked about on my blog and cost the grand sum of $41!) Ring is from Forever 21 Cuff from Topshop Hairdo is Jessica Simpson and I think came in the darkest brown shade. Nail polish is OPI's Jade is the New Black...mmm it looks alright when you wear black I think! Follow me on twitter:
- Smoky eyes using CHANEL Contraste de Chanel collection. Chanel visual and collection can be seen here PRODUCTS USED: CHANEL Vitalumiere in 20 Yaby Firewood e/s CHANEL Les 4 Ombres Enigma e/s palette LY38 Socket brush http;// MAC 214 brush CHANEL Inimitable Mascara in Purple Bobbi Brown creamy concealer in Sand MAC Prep & Prime powder CHANEL Joues Contraste Blush in Plum Attraction CHANEL Loose Powder in 20 CHANEL lipgloss in 148 Petit Peche
- The Lazer Collection 3 Shoop da Trilogy. --------- Written/Directed/Animated by DOM FERA Music by DOM FERA CAST -------- DOM FERA - Randal/ Dr. Octogonapus/ "Hello" Guy/ Ron Weasly/ Jelly Bean Guy/ Aladdin/ Genie's Friend/ Genie With A Dirty Mind/ Flute Guy/ Jimmy/Soda Shaker/ Soda Shaker's Friend SETH KING - Senior Officer/ Harry Potter/ Hero/ Edward Cullen/ Mediocre Employee TIM KISH - Spider Monkey/ Bobby (Helicopter Guy) ANT CARDONA - Cool Jelly Bean Guy ROBERT BENFER (KNOX) - Dirty Boy ------- DFEAR STUDIOS 2009
- A collection of misheard lyrics A bunch of songs that can be misheard, enjoy! I have filtered all the Danish songs out, leaving only the English songs in the video :) ***ies or groupies: Tracks: Celine Dion - My heart will go on Robbie Williams - She's Madonna Vengaboys - We're going to Ibiza Bryan Adams - Summer of 69 Anastacia - Paid my dues Eiffel 65 - Blue (da ba dee) Pink Floyd - Another brick in the wall Bon Jovi - Living on a prayer Will Smith - Fresh prince of Bel Air Tina Dickow - On the run Manfred Mann - Blinded by the light Ray Jr. Parker - Ghostbusters Pat Benatar - Hit me with your best shot Nickelback - How you remind me Duffy - Mercy Led Zeppelin - Stairway to heaven N-Trance - Stayin' alive Shakira - Underneath your clothes Mariah Carey - Without you Lady Gaga - Just dance Phil Collins - Easy lover ***cat Dolls - When I grow up Prodigy - Smack my *** up The Beatles - 8 days a week AC/DC - Hard as a rock Men without hats - Safety dance Jimi Hendrix - Purple haze Foo Fighters - My hero Queen - Another one bites the dust Green day - Jaded Toto - Africa Moody Blues - Question Agnes - Release me Bette Midler - Beast of burden Joan Jett - I love rock and roll Pearl jam - Even flow The lonely island feat. T-pain - I'm on a boat Britney Spears - You want a piece of me White stripes - Blue orchid Black eyed peas - Boom boom pow
- What is Love Gif Collection 1 SUBSCRIBETODAY ! A Collection Of What is Love Animated Gifs From ytmnd. PLS Rate And Comment
- Mike Tyson Knockout Collection Mike Tyson insome of his classic battles and knockouts. Enjoy! Song Info: (RIP) Rest in Peace artist: Vell Rob album: Variety Pakkk
- My VCR collection VHS recorder If you likes my collection show me you intresting hobby, own videos and show in the hole world when factory models is stupid drop off car*** when maschine still running life.
- My NES Collection - Its show & tell time! There hasnt been an AVGN video in so long, I figured this would help fill the void. Ive mentioned this several times, but might as well say it again. Im waiting for the Gametrailers contract to be renewed so I can continue releasing episodes. In the meantime, I made this special video to say thanks for being patient, thanks to all the fans who donated these games, and thanks in general for the overall support. Go to the site: If you want to donate games, contact Mike@ By the way, new AVGN episode soon!
- Tank Tour: World's Largest Private Collection of Historic Military Vehicles (BB Video) In today's edition of Boing Boing Video, guest-host Todd Lappin explores a massive collection of historical military vehicles tanks collected by an eccentric Silicon Valley multimillionaire. The recently-departed Jacques Littlefield amassed one of the world's largest and most significant collections of this type, and his collection is now overseen by the nonprofit Military Vehicle Technology Foundation. Snip from their description: "Our goal is to acquire, restore, and interpret the historical significance of 20th and 21st century military vehicles. Domestic and foreign combat vehicles such as tanks, armored cars, self-propelled artillery, and other technically interesting mobile platforms are the focus of the collection. We also maintain an extensive technical library that describes many vehicles down to the part level. Aside from the vehicles, there are towed artillery, antitank, and antiaircraft guns. Military support equipment, inert ordnance, and accessories round out the collection." The foundation is supported by public donations, and you can make one at their website if you dig what they do. To make arrangements for tours, you can email tours.mvtf at . To arrange access to the collection for commercial purposes: permissions.mvtf at . The "tank tour" BBV shot for this episode was organized by BB pal Karen Marcelo and Dorkbot SF. BOING BOING POST:
- My Puzzle Collection II Everyone should have some sort of collection. I think puzzles is more fun than coins or stamps though, don't you? **** **** **** **** My puzzle collection includes (sorted by Shape first, and function second, for your convenience): Cube Shaped Puzzles: ************************ (~15x) 3x3s: It's hard to keep track of the exact number. Most of these are various DIYs, some are RNAs (Rubik's Brand), some are knockoff brands. I also have one Studio Cube, which is very nice. (3x) 4x4s: Two Rubik's Revenge and one Eastsheen 4x4. (3x) 5x5s: Rubik's, Eastsheen, and V-Cubes. I like all three. A 7x7: Very impressive mechanism. I love this puzzle, it's one that I think all collectors should have. (4x) Square-1s: Three black DiYs, and a white DiY. A Skewb: Mine is a 3D Skewb. Strange mechanism. (5x) 2x2s: Two normal Rubik's, one Ice Cube, an (modded) Eastsheen, and a giant 57mm Rubik's. A Keychain 3x3: This will likely be modding-fodder later. A 1x1x1 and a 1x1x2: For ***s and giggles. :) A Siamese Cube: This is a mod you have to do yourself, it is very easy to make. A 3x3x4 Extended: This is also something you have to make, and it is even easier than the Siamese. An Icon: This is just a greyscale 3x3. An Oilers Hockey Cube: Ummm, yeah. A So-Du-Cube: This is lame. Dodecahedron Shaped Puzzles (12 sides): **************** ********* ********* *********** (5x) Megaminxes: Three PVC Megaminxes, an MF8 Megaminx, and a Chinese Megaminx. I recommend either the PVC or MF8. Message me for more ...
- My Dvd Collection Update 6/10/2009 Follow me on Twitter Dvd Collection update for June 10th 2009 showing the dvds Ive gotten over the last 2 weeks. In the update I talk about and give my reviews of the dvds and the movies themselves. The Dvds I got and Review in this update are: * Rugrats the complete 1st season : Featuring the voice talents of Melanie Chartoff, Michael Bell, Kath Soucie, Christine Cavanaugh, Elizabeth Daily, Cheryl Chase and Jack Riley * Spring Breakdown : Directed by Ryan Shiraki - Starring Amy Poehler, Parker Posey, Rachel Dratch, Amber Tamblyn, Seth Meyers, Sophie Monk, Jonathan Sadowski, Missi Pyle and Jane Lynch * Seems like old times : Directed by Jay Sandrich - Starring Goldie Hawn, Chevy Chase and Charles Grodin and Judd Omen * Canadian Bacon : Directed by Michael Moore - Starring John Candy, Alan Alda, Rhea Perlman, Kevin Pollak, Rip Torn, Kevin J. O'Connor, Bill Nunn, GD Spradlin, Steven Wright, James Belushi, Brad Sullivan and Wallace Shawn * Mum and Dad ( Mun & Dad ) : Directed by Steven Sheil - Starring Perry Benson, Dido Miles, Olga Fedori, Ainsley Howard, Toby Alexander and Micaiah Dring * The X files season 1 and 2 dvd set - Starring Gillian Anderson and David Duchovny * Star Trek the Orginal Series best of set : Starring Leonard Nimoy, William Shatner, DeForest Kelley, Nichelle Nichols, James Doohan, Eddie Paskey, Bill Blackburn and George Takei * Star Trek The next Generation best of set - Starring Patrick Stewart, Jonathan Frakes, LeVar Burton, Marina Sirtis ...
- Private View of Petter Solberg's Car Collection Dec 2010 In a secret Swedish location, 2003 World Rally Champion Petter Solberg gives Neil Cole an exclusive tour of his some of his amazing cars, with history attached. Also, an insight into the real situation he finds himself in with regard to a deal for driving in WRC 2011. See more here bit.ly
- WSITN: Makeup Collection / Storage My mac and otherwise makeup collection and how i store it. there was going to be a second part to this video where i layed everything out and told you all the names but i couldnt fit it in . let me know if you still want to see that.
- MNT's Rainwater Collection System with Manifold This is my first YouTube submission, so be kind ;-) As explained in the video there are 4 55-gallon barrels connected to a manifold. The piping is 3/4" Schedule 40 PVC. Please leave comments and questions (if you have any). Thanks for watching!
- Makeup collection This is how it stands at the moment - I hope that I can make some improvements in how I store things or I hope to buy less and use up more :)
- The Lazer Collection 5 Trailer THAT'S RIGHT. THE FOURTH ONE COUNTS.
- Toy Story Collection: Buzz Lightyear Review In my opinion, this is the definitive Buzz Lightyear figure. ... Unless they make one that does all this AND karate chop action... Update: I was finally able to get a picture taken of Buzz glowing in the dark. Check it out here:
- Special Effects Collection (Adobe After Effects) [--NOTES--] Collection of some basic special effects made by me, Kevin Lin "KL1054." All effects composited in Adobe After Effects CS3. Yes I am the kid in the video. I was 11 when this video was made, but I'm 13 now. [email protected] www.theaeblog.tk [--DOWNLOADS--] Video Music: [--CREDITING--] Editing KEVIN LIN Software ADOBE AFTER EFFECTS & WINDOWS MOVIE MAKER Music NAL1200 - CRYING SOUL (SCRATCH RMX)... as heard from NewGrounds: Assistance TUTORIALS BY YOUTUBERS AND ANDREW KRAMER
- The Ultimate "Boy Meets World" Funny Moments Collection This is a collection of the funniest moments from the show "Boy Meets World". I grew up watching this show and I felt like composing all of its funniest moments together in a video for myself and other fans of the show. The majority of these scenes were taken from seasons six and seven, because there were so many great Eric moments in those seasons. Eric is definitely the funniest character of the show and you'll see him and his antics the most throughout this video. So here it is, The Ultimate "Boy Meets World" Funny Moments Collection. Enjoy! Check out my Channel for some quality video game tributes. Here's a summary of the series: One of the most durable offerings of ABC's Friday night TGIF sitcom lineup, Boy Meets World premiered September 24, 1993. Set in Philadelphia, the series starred Ben Savage as Cornelius A. "Cory" Matthews, who at the outset of the program was 11 years old. Hoping to make sense of the world around him and to hack his way through the t*** thicket of "tween-age" (and later ***age) life, Cory found a kindred spirit in fellow 11-year-old Shawn Hunter (Rider Strong), who lived in a trailer camp with his combative parents. Cory himself resided in a comfortable suburban home with dad Alan (William Russ), mom Amy (Betsy Randle), footloose older brother Eric (Will Friedle) and precocious kid sister Morgan (played first by Lily Nicksay, then by Lindsay Ridgeway); near the end of the series' run, Amy gave birth to a fourth child, a boy named Joshua ...
- ES COLLECTION - SWIMWEAR 2010 - BOAT TRIP
- The Collection 1/2 The Twilight Zone (2002-2003) S1 Ep 38 Miranda's (Jessica Simpson) night of babysitting turns to cold terror when she realizes that the child's eerily lifelike doll collection may explain the mysterious disappearance of the previous sitters.. - Actors: Ashley Ender, Ian Robinson and Tammy Pentecost - Writer(s): Erin Maher & Kay Reindl - Director(s): John Kretchmer
- Lipstick Collection and Swatches (Part 1) www.cl2425.com This was requested a long time ago, and this is part 1 of my lipstick collection video. I am basically storming through since I have so much ground to cover. The swatches go by color gradation, not brand. Part I consists of Beige, Mauve and pink colors. [[ Lipsticks in order ]] NYX - Summerlove Missha creamy matte rouge - Pale heart Mac - Creme D'*** Mac - Myth Mac - Quiet Please YSL Rouge Volupte #1 - *** Beige Dior Addict Ultra Shine - 256 Shiniest softness Bobbi Brown creamy Lip color - Pale Mauve Dior Addict Ultra Shine - 142 Shiniest Meringue Revlon Super Lustrous - 405 Silver City Pink Mac - Angel NYX - Strawberry Milk Missha Creamy matte rouge - Baby Milk YSL Rouge Volupte #7 - Lingerie Pink Missha Creamy Matte Rouge - Chic Lavender Skinfood Honey Glossy Rouge - Rose Jelly 303 Revlon Super Lustrous - 410 Soft Shell pink Givenchy Lip Lip Lip! - 142 Weekend Lilac Light Guerlain Kiss Kiss LIpstick - 566 Rose Desire YSL Rouge Pure Shine - 11 Pink Diamond YSL Rouge Personnel - 9 Frivolous PInk Laoncome Color Design - It girl MAC - Lovelorn YSL Rouge Volupte #8 - *** Pink NYX - Narcissus Shiseido Perfect Rouge - RS320 Fuchsia Tony Moly - Pink Girl MAC Slimshine - Bare MAC Slimshine - Gentle Simmer MAC - Modesty Bobbi Brown Lip color - 49 Pink Mauve Missha the style Lucid shine tint - PK01 MAC - Flowerplay YSL Rouge Volupte #13 - Peach Passion Shiseido Perfect Rouge - 419 Ariel MAC - Fast Thrill Lancome Color Fever - Enticing Rose MAC Slimshine - Scant ...
- Accessories Collection & Storage my accessory mirror/storage is from ebay :) Follow me on Twitter: Visit my Blog: fafinettex3 Become a Facebook Fan
- My homemade weapons collection Here are all of the homemade weapons i've made so far. And the tests are sped up, just if your wondering. ENJOY!!
Blogs & Forum
blogs and forums about collection
“The Hess Collection -- a producer of outstanding Napa Valley wines and a unique wine country exhibitor of vivid, powerful and thought-provoking modern art”
— Hess Collection,
“All things Vegan Posted by Kevin | Events | Friday 26 November 2010 9:29 am. As part of the international day of protest against consumerism, The Vegan Collection will be shutting down our shop on November”
— The Vegan Collection >> Blog,
“New publication: Collection Forum, Fall 2009, Volume 23, Numbers 1-2 Indexes, abstracts, and for some issues, full papers, for Collection Forum are available on-line as hypertext documents”
— Collection Forum | SPNHC,
“After searching the Internet,I am so glad to show youThe 15 most ***y woman in nfl football jerseys Sport JERSEY Collection BLOG. nfl- sell NFL Jerseys,MLB Jerseys,NBA Jerseys,NCAA Jerseys and NHL”
— The 15 most ***y woman in nfl football jerseys | Sport JERSEY, nfl-
“Tiendas – shops. Contacto. No hay categorías. Nothing found. Sorry, but you are looking for something that isn't here. Get the Flash Player to see the wordTube Media Player. Tiendas online. ES Collection. ES Collection Outlet. Join our newsletter. Loading (C) 2010 ES Collection”
— ES COLLECTION UNDERWEAR & SWIMWEAR, escollection.es
“blue sky collection ecofriendly. blue sky collection blog. Nov. 28 " Cyber Monday Savings! SKY COLLECTION. HOME | WEDDINGS | PETS | GIFTS | KIDS | ECO-FRIENDLY | BLOG | SPECIALS”
— BlueSkyCollection Blog,
“We're giving away this gorgeous layered necklace from the MG collection! the hottest pieces from our collection!! Click HERE for your chance”
— Material Girl Collection | Geek Chic!!!,
“The Tea Collection blog is your online destination for news, tips, trends and creative ideas for raising your little citizen of the world. Tea offers baby and children's clothing that celebrates the beauty found in cultures around the world”
— tea collection blog | for the parents of little citizens of,
“Your daily source for inspiration and insider information on luxury travel, design, fashion, food, wine, arts and good living”
— Luxury Travel Blog | Passport on Kiwi Collection,
related keywords for collection
- collection agency
- collection letter
- collection etc
- collection agencies
- collection java
- collection b
- collection dx
- collection 2000
- collection laws
- collection c
- collection agency laws
- collection agency fees
- collection agency list
- collection agency scams
- collection agency tactics
- collection agency software
- collection agency rules
- collection agency letter
- collection agency credit score
- collection agency jobs | 1 | 26 |
<urn:uuid:de32a203-60e0-487f-a18f-6bc1e01d23e8> | Review of Short Phrases and Links
This Review contains major "Arm"- related terms, short phrases and links grouped together in the form of Encyclopedia article.
- ARM is an acronym for Advanced RISC Machines.
- ARM is the industry's leading provider of 16/32-bit embedded RISC microprocessor solutions.
- The ARM was a simple RISC architecture with 16 registers and no floating point support.
- To call an ARM-native subroutine from your 68K application, you use the new function PceNativeCall.
- When implementing the ARM-native subroutine, you should be aware of how the 68K processor and the ARM processor are different.
- Bernd Helfert is a german designer who got hooked by the kheper website, following to orions arm.
- The empiric arm standardized programming regimen is based on the following key strategies to reduce shocks.
- This book is a complete introduction to ARM7-TDMI and the Philips LPC21xx devices.
- I'm writing an embedded application for an ARM7TDMI processor.
- ARM, ARM9TDMI and RealView are registered trademarks of ARM Limited.
- If you are writing much ARM code you should obtain a copy of the latest ARM Architecture Reference Manual - see the list of recommended books.
- The Debian GNU/Linux distribution for the ARM architecture. Runs on Acorns, Netwinders and ARM-based PDAs.
- Labs provide an introduction into ARM assembly language programming.
- Useful for: Source code; Development tools; Articles and tutorials; Cpu programming for 680x0, DSP, 8051, 8080, 6502; ARM and more.
- Programming experience with ARM processors, Windows CE, and other Real Time Operating Systems.
- ARM-ELF-GCC is used to target ARM processors. HP Specific libraries.
- ARM's repository of technical documentation on their processors, the heart of all RISC OS boxes.
- Targetted towards the ARM (2 thru 7, and StrongARM) from RISC OS, you may begin programming simply by using the BASIC assembler built into your computer.
- Oki's ARM MCUs have a 16-bit external data bus.
- Including topics such as Articles, Resources, 68k, ARM, DOS and Windows, Game Machines , Java, PDP11 and VAX, PIC, SPARC .
- Q. I cant arm my system to Level 3AWAY.
- Artisan Components has been acquired by ARM Holdings plc.
- Each arm may include equations that must be satisfied for the arm to match.
- ARM Computers inc.
- The #MTR-250 monitor stand - monitor arm computer accessory will ergonomize your workspace.
- I will use some of this time to introduce you to Verilog and ARM assembler.
- Introduction to assembler programming on the ARM, a series of information and tutorials.
- World's first implementation of a commercial microprocessor architecture (ARM) in asynchronous logic. Micropipeline design style.
- This is the authoritative reference guide to the ARM RISC architecture.
- The ARM architecture is the industry's leading 16/32-bit embedded RISC processor solution.
- The OCDemon family of debug tools is immediately available for the ARM, MIPS and PowerPC processors.
- Free Pascal: free 32-bit Pascal compiler (x86,m68k,powerpc,sparc,arm) for DOS, Linux, Darwin, NetBSD, FreeBSD, Solaris, MacOS, BeOS, Win32 and OS/2.
- ARM Disassembler: For EPOC32 by R.Panton. Diss: Commercial desktop ARM code disassembler, by Ben Dooks.
- Diss Commercial desktop ARM code disassembler, by Ben Dooks. Demo download.
- The task of converting directly from IA code to ARM code is considered too difficult.
- Make a call on a mobile phone and a chip based on ARM's (nasdaq: ARMMHY) technology is probably inside the phone.
- The cost of this course includes a copy of Steve Furber's recently published book, ARM System-on-Chip Architecture.
- ARM Instruction Formats/Timings From Robin Watts, Steven Singer, Mark Smith, David Seal, some others; in multiple formats.
- Desk-mounted arm, hardware, instruction manual.
- D300 € 193,50 Order Newstar LCD ARM NEW GAS SPRING 5 movem bl.
- The pronator teres is a short, round, deep arm muscle.
- The initial ARM implementations concentrated on low cost by using a short 3-stage pipeline.
- More info: ARM COMPUTER BP989 ArmNote TS758CD Li-Ion Battery, 10.8 Volt, 4800 mAh, 258.00 SFr.
- More info: ARM COMPUTER 442670000005 ArmNote 7521P Series Li-Ion Battery, 14.8 Volt, 3600 mAh, 232.00 SFr.
- Laptop Computer Batteries for arm Laptop Computers and more from Advanced Battery Systems.
- More info: ARM COMPUTER P280C-TS38 Patriot TS38 NiCad Battery, 12.0 Volt, 2800 mAh, 153.00 SFr.
- More info: ARM COMPUTER H350AE-6NEC ArmNote TS280 NiMH Battery, 7.2 Volt, 3800 mAh, 140.00 SFr.
- The ARM-native subroutine will run on Palm-OS Garnet on ARM hardware.
- Most Palm-OS-5 applications do not need native ARM code and will not benefit from using native ARM code.
- Software for ARM programmingVersion/ReleaseAuthorDescription BL-jump computing programNiMarSoftware for computing code of relative jump BL-THUMB instruction.
- The processor has two states: ARM and Thumb (see below).
- Typical applications, including those on the GBA (and our experimental code), use a mixture of ARM and Thumb.
- Atmel manufactures three families of microcontrollers: the popular 8051, the AT91 which is an ARM Thumb, and the Atmel AVR 8-bit RISC devices.
- Introduced in 1987, the Archimedes was based on Acorn's ARM architecture (ARM stood for Advanced RISC Machine).
- The ARM architecture (originally the Acorn RISC Machine) is a RISC processor architecture that is widely used in a number of applications.
- Additional Orion's Arm material Home Extras Extras An assortment of material not covered in the above categories.
- The Eburacum Gazetteer - An illustrated catalogue of some of the various worlds found in the civilised galaxy; part of the Orion's Arm Worldbuilding Project.
- ARM Software Development Toolkit - Includes disassembler by ARM Ltd.
- The ARM design was started in 1983 as a project at Acorn Computers Ltd.
- The most widely used industrial robot is the robotic arm.
- Endeavour's robot arm is now in motion, lifting Raffaello out of the payload bay bound for the international space station.
- A Hitachi SH3, Hitachi SH4, MIPS-compatible, ARM or StrongARM CPU.
- Red Hat Embedded Linux currently supports certain ARM, StrongARM, and MIPS families of processors.
- Demo download. ARM Disassembler - http://www.iota.demon.co.uk/psion/disassembler/disassembler.html For EPOC32 by R.Panton.
- Computers Programming Disassemblers ARM Brought to you by 123promotion.
- Expertise spans the Z80, x86, 8051, ARM, TI, Motorola chips. Wireless: PocketPC/PalmOS/Embedded Linux,J2ME.
- If you wish to sign up, send a message with the word "subscribe" as the subject to [email protected].
<urn:uuid:5bb503fd-e57b-47a9-9f95-8dabb22e9312> | Debian vs Ubuntu
Debian and Ubuntu are free Linux distributions using the apt package management system. Ubuntu builds on the foundations of Debian architecture and infrastructure, with a different community and release process.
Ubuntu is specifically designed to be easy for inexperienced users to use. Initial configuration of Debian may be more difficult. Ubuntu's early motto was "Linux for human beings", while Debian describes itself as "the universal operating system." The decision to use one or the other may also hinge on the relative importance of new, possibly unstable software versus old reliable software.
Community is probably the biggest distinguishing feature besides distribution "flavor". The Ubuntu forums are more accessible to newcomers, while the Debian forums are more technical. Both distributions depend heavily on a large community of volunteer open-source software developers and users who provide free support for each other while using the software. Because Ubuntu is based on Debian, it relies on the Debian community as well as its own. Ubuntu is also sponsored by (and was created by) the company Canonical, which offers fee-based support services for Ubuntu, whereas Debian is developed entirely by the community.
Target Architecture
Ubuntu is aimed at desktop and server users on the Intel x86, x86-64, and (in some versions) PowerPC architectures only, whereas Debian runs not only on these but also on a huge range of hardware, from embedded or handheld devices using ARM or MIPS processors to platforms such as SPARC, Intel's IA-64, and Alpha, to name a few. There are advantages and disadvantages to supporting such a wide range of hardware; one disadvantage Debian faces is that it takes more time and resources to ensure a piece of software works on all of its architectures.
Standard Releases
Both Ubuntu and Debian have standard releases that allow users and contributors to use the system. Most of the release separation is managed through the apt sources.
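In practice this means the suite named in the apt sources determines which branch a system tracks. For example, a Debian /etc/apt/sources.list might contain lines like the following (the mirror is only an example, and the security/update lines are omitted for brevity):

    # follow the Stable branch (or name a specific release codename instead)
    deb http://ftp.debian.org/debian/ stable main
    # or follow the Testing branch as a moving target
    deb http://ftp.debian.org/debian/ testing main
    # or follow Unstable (Sid)
    deb http://ftp.debian.org/debian/ unstable main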
Debian Unstable (a.k.a. Sid)
The Debian Unstable branch is an opportunity for developers and experienced computer users to use, test, and develop the very latest open source software. It is not recommended for beginners or for anyone for whom reliability is a priority. While the quality of software in Debian Unstable is usually very high, the branch is highly unstable in the sense that things change rapidly and constantly; developers can't rely on libraries that are available (and working) one day to be available the next, and users can't rely on their correctly-configured and smooth-running system not to need additional configuration and workarounds every time something changes.
Ubuntu Releases (non-LTS)
These releases are based on the Debian Unstable branch, and are made every six months (although every fourth release becomes an LTS release, see below).
Ubuntu releases (whether LTS or non-LTS) include software that is more up-to-date than the software in the Debian Unstable branch when developers believe it will benefit their users. For example, the version of Ubuntu released in April 2010 contained Firefox 3.6, while Debian Unstable still included Firefox 3.5 (with Iceweasel branding).
Ubuntu's goals are to start with the most up-to-date software that Debian has to offer, release new stable versions regularly on a predictable 6-month cycle, and make an operating system that is easy to use and set up for beginners. To this end, Ubuntu adds a number of features of its own aimed at making configuration easier for the uninitiated, offering graphical configuration tools in place of command-line or configuration files in places.
Ubuntu puts a lot of its own effort into making the Unstable branch of Debian as reliable as it can given the newer software that it uses and its more rapid release cycle. After being pulled from Debian Unstable, it undergoes some Ubuntu-specific changes and a series of testing and bug-fixing before it is declared as a release. Some of the work that Ubuntu does in testing newer software results in patches that make it back to the software maintainers, which in turn helps improve Debian (and other distributions) for its future releases.
The latest non-LTS release version of Ubuntu is 13.04. In April 2012 Ubuntu 12.04 LTS was released.
Debian Testing
The Debian Testing branch represents the state of the upcoming Debian release, before it is Stable. Most packages from Debian Unstable (a.k.a. Sid) will transfer across to Testing, usually after 10 days, as soon as they meet a stringent set of guidelines, such as having no new known "release-critical" bugs and having all dependencies installable and satisfied. Thus, some of the more immediately obvious bugs and problems in Debian Unstable will be prevented from entering Testing. Nevertheless, Debian Testing is still inherently unstable because it is constantly changing, and it can be affected by interoperability problems between packages, especially when a major change to a library has recently entered.
Prior to a Debian Stable release, Debian Testing will become 'frozen', accepting only fixes to existing serious bugs. This occurs for a few months, with the aim of resolving all "release-critical" bugs. After this time, a new Debian Stable branch is created from Debian Testing, and Testing is unfrozen again and given a new codename, in preparation for testing the next release.
"Wheezy" is the codename for current Debian Testing.
Ubuntu LTS (Long Term Support) Releases
Each Ubuntu LTS (Long Term Support) release is initially based on the Debian Testing branch. These releases take Ubuntu's approach of making Linux accessible to inexperienced users, and constitute a release that is kept stable for five years (three years for releases before 12.04), receiving security updates and occasional bug fixes during that time but remaining otherwise unchanged.
Essentially, an Ubuntu LTS release is made in the same way as a non-LTS release, except that it will be kept stable and supported for a lot longer, including by Canonical's own paid support services. In addition, it was decided that Ubuntu LTS releases should initially be based on Debian Testing rather than the similarly unstable but sometimes buggier Debian Unstable.
It is intended that every fourth Ubuntu release becomes an LTS release; thus, Ubuntu LTS releases are made every two years. The current Ubuntu LTS release is Ubuntu 12.04.
Debian Stable
These releases are very reliable. A Debian Testing branch becomes a new Debian Stable release after it has been frozen for some months and tested extensively, and only a very few problems remain.
Debian Stable is supported with security updates and fixes for extremely major bugs until the next Debian Stable release (aimed at roughly two years later), and then for an additional year (similar to Ubuntu LTS). During this time it does not, however, receive any new versions of software, nor any bug fixes except for a small number of bugs deemed major enough and for which a fix is unlikely to change any relied-upon behaviour.
Because of this, users wishing to test or use the latest versions of software with Debian Stable will need either to compile the software themselves or to use a 'backports' repository providing newer versions that have been compiled to be compatible with Debian Stable. Developers, or technically adept people who know how to fix problems and want the latest software, may prefer to use 'Testing' or 'Unstable' instead.
"Squeeze" (Debian 6.0) is the current Debian Stable release.
Repositories for updating specific packages
Ubuntu PPAs
Ubuntu supports Personal Package Archives, which are mini repositories that can be installed to get a specific package or set of packages that aren't in the main repositories, or to update a package or set of packages to a newer version than the version in the repositories. Ubuntu users may choose an LTS release, but then install a PPA for a specific program they would like to keep more up to date, e.g. Firefox 5 on Ubuntu 10.04 LTS.
PPA repositories co-exist with the official Ubuntu package repositories. Debian systems can still use PPA repositories, but there they are considered third-party repositories.
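As a minimal sketch of the usual PPA workflow on Ubuntu (the PPA name below is purely an illustrative placeholder, not a specific archive recommended by this document):

    # Register the PPA and import its signing key
    sudo add-apt-repository ppa:example-team/newer-firefox
    # Refresh the package index so the new archive is visible
    sudo apt-get update
    # Install or upgrade the package from the PPA
    sudo apt-get install firefox

Because a PPA is just another APT source, the normal upgrade tools keep pulling newer builds from it until the PPA is removed again.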
Debian Backports
Debian Backports are similar to Ubuntu PPAs in that both provide a way to get an updated package onto a stable / LTS operating system.
"You are running Debian stable, because you prefer the Debian stable tree. It runs great, there is just one problem: the software is a little bit outdated compared to other distributions. This is where backports come in."
"Backports are recompiled packages from testing (mostly) and unstable (in a few cases only, e.g. security updates) in a stable environment so that they will run without new libraries (whenever it is possible) on a Debian stable distribution."
Backports can be used in a similar way to PPAs, e.g. to install IceWeasel 5 on Debian 6.0.
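As a rough sketch of the mechanics behind the Debian 6.0 example above (the archive address reflects the squeeze-backports repository of that era; the exact mirror may differ on other releases):

    # Line added to /etc/apt/sources.list to enable the backports archive
    deb http://backports.debian.org/debian-backports squeeze-backports main

    # Backported packages are never installed by default; request them explicitly
    sudo apt-get update
    sudo apt-get -t squeeze-backports install iceweasel

The -t (target release) flag opts a single install into the backports archive, so the rest of the system keeps tracking plain Debian Stable.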
Other Releases
The Debian community continues to support a release of Debian Stable even for one year after a newer Stable release is made. When this happens, the old Stable release is dubbed "Oldstable". The current "Oldstable" release is called Lenny.
There are other variations of Debian and Ubuntu, suitable for users with different computer processor types, older computers, or other specific needs.
WATERS OF THE GREAT LAKES
Final Report to the Governments of Canada and the United
2. The Great Lakes Basin
3. Water Uses in the Great Lakes Basin
4. Cumulative Effects
5. Climate Change
8. Legal and Policy Considerations
9. Next Steps
This is the Final Report of the International Joint Commission to the governments of
the United States and Canada concerning protection of the waters of the Great Lakes. It is
submitted in response to a February 10, 1999 Reference from the governments to undertake a
study of such protection.
This Final Report incorporates and where appropriate updates the Commission's Interim
Report of August 10, 1999. It also extends and, in some cases, modifies the conclusions
reached and recommendations made in the Interim Report.
Section 1 - Introduction:
Water is an important and often emotional issue throughout North America. Along the
U.S.Canadian border there have been many controversial issues involving boundary and
transboundary water resources, and there also have been many opportunities for cooperative
ventures, projects, and other efforts to make life considerably better for the citizens of
both countries. The history of U.S.Canadian relations is filled with examples of
cooperative efforts in navigation, hydropower, agriculture, and fisheries and of
significant improvements in water quality.
Diverting water from the Great Lakes has been an issue of interest and at times
controversy between the United States and Canada. This issue, dating back to the 1800s,
has been investigated by the International Joint Commission most recently in the
mid-1980s. In 1996, the Commission advised both national governments that the subject of
diversion and consumptive use of Great Lakes waters needed to be addressed more
comprehensively than it had been to date.
In the light of recent proposals to export water from the Great Lakes and other areas of
the United States and Canada, the governments decided to refer the issue of water use
along the border to the Commission. In a letter of February 10, 1999 (the
"Reference"; see Appendix 1), the governments, after noting that the number of proposals
to use, divert, and remove greater amounts of water that flow along or across the boundary
is increasing, stated that they were concerned that current management
principles and conservation measures may be inadequate to ensure the future sustainable
use of shared waters. Within this context, the governments requested the Commission to
examine, report upon, and provide recommendations on the following matters that may affect
levels and flows of waters within the boundary or transboundary basins and shared
- existing and potential consumptive uses of water,
- existing and potential diversions of water in and out of the transboundary basins,
including withdrawals of water for export,
- the cumulative effects of existing and potential diversions and removals of water,
including removals in bulk for export, and
- the current laws and policies as may affect the sustainability of the water resources in
boundary and transboundary basins.
The Reference instructed the Commission, in preparing its recommendations, to consider
in general terms such matters as potential effects on the environment and other interests
of diversions and consumptive uses and, where appropriate, the implications of
climatological trends and conditions.
The governments requested the Commission to give first priority to an examination of the
Great Lakes Basin, focusing on the potential effects of bulk water removal, including
removals for export, and to provide interim recommendations for the protection of the
waters of the Great Lakes. The governments asked that the interim recommendations covering
the Great Lakes be submitted within six months and that a final report be submitted six
months later. The Commission was asked to include in its final report advice on additional
work that may be required to better understand the implications of consumption, diversion,
and removal of water from boundary and transboundary basins and from shared aquifers
elsewhere along the boundary.
In this report, "Great Lakes Basin" refers to the Great Lakes, their connecting
channels, and the international section of the St. Lawrence River, together with their
tributaries, and it also includes the reach of the St. Lawrence River immediately
downstream from the international section of the river to the end of Lake St. Peter,
excluding the tributaries of this downstream reach (Figure 1). This is the
same area the Commission addressed in its 1985 report, Great Lakes Diversions and Consumptive Uses.
Immediately after receiving the Reference, the Commission established a binational,
interdisciplinary study team to carry out the required investigations. An equal number of
members from each country were appointed to the team. They were directed to work in the
spirit of consensus in their personal and professional capacities and not as
representatives of their countries or organizations. Members of the study team and IJC
study participants are listed in Appendix 2.
In August 1999, the Commission submitted to the governments its Interim Report. The
Commission recommended that, pending submission of its Final Report under the Reference,
federal, state, and provincial governments should not authorize or permit any new bulk
sales or removals of surface water or groundwater from the Great Lakes Basin and should
continue to exercise caution with regard to consumptive use of these waters. The
Commission also offered other recommendations and indicated it would discuss the
recommendations with the governments and the public.
The Commission has carried out a broad public-consultation process and has made
information related to work on this Reference as widely available as practicable. A
section on the International Joint Commission web site (www.ijc.org) was created to
disseminate information and to encourage public discussion during the study period. Eight
public hearings were held throughout the Great Lakes Basin in both countries in the latter
half of March 1999, and 12 additional hearings were held in September and October
(Appendix 3). In addition to over 300 presentations made at these hearings, the Commission
received hundreds of other submissions in writing and by e-mail, primarily from
governments, interest groups, and individuals. The Commission also consulted with federal,
provincial, and state governments and regional and other relevant sources, including a
selection of experts convened at a special workshop at the end of March 1999 and another
workshop in September 1999 (Appendix 4).
The majority of presentations from the public supported the Commission's Interim Report
but wanted the recommendations to be strengthened to provide greater protection for the
waters of the Great Lakes Basin. There was general opposition to all forms of bulk
removals, although some presenters acknowledged the possibility of exports to meet
humanitarian needs. Many presenters believed that the Interim Report understated the
pressure that may arise in the future for removal of water from the Great Lakes Basin.
Many advocated adopting a precautionary approach to removals, particularly in the light of
future uncertainties produced by, among other things, the possible impacts of climate
change. The hearings revealed widespread concern about water quality issues, groundwater
supplies, and the increasing trend to privatization of water and sewage services. They
also demonstrated that there is support for conservation measures in the Basin. Aboriginal
Peoples and Indian tribes opposed water exports and were concerned that removals or
diversions could affect their treaty rights.
The public hearings and written presentations revealed a profound concern on the part of
the public that international trade law could prevent proper protection of the waters of
the Great Lakes Basin. This view is not shared by the Canadian and U. S. governments, and
it is not supported by the statements and writings of many experts in international trade
law who appeared before the Commission. These experts agreed that international trade
agreements do not prevent governments from protecting the waters of the Great Lakes Basin.
The public, however, remains deeply concerned that international trade law could affect
the protection of these waters.
This Final Report is based on information the Commission had before it when it prepared
the Interim Report and on additional information the Commission subsequently obtained from
a variety of sources, including the 12 public hearings held in September and October 1999.
The Commission consulted government officials and experts on climate change, cumulative
impacts, and international trade and water law.
There is little change from the Interim Report in Section 2, "The Great Lakes System". Section 3, "Water Uses in the Great Lakes Basin", provides updated information on consumptive use and removals and addresses concerns expressed at the recent public hearings with respect to the possibility of future major diversions and the subject of privatization.
Section 4, "Cumulative Effects", reports on the findings of an experts' workshop on cumulative impacts (held in Windsor, Ontario, in September 1999) and the study team's report on information gathered with respect to the cumulative effects on the Great Lakes ecosystem of factors affecting water levels and flows. Section 5, "Climate Change", provides more recent information on climate change assessments.
Section 6, "Groundwater", expands the discussion of groundwater basins and their divides. Section 7, "Conservation", expands on the need for conservation in the Basin.
Section 8, "Legal and Policy Considerations", more fully addresses international trade law and U.S. constitutional law issues and provides new information on
domestic legal developments in Canada and the United States.
Section 9 (a new section) proposes a plan, as requested by the governments, for the
continuation of this study into the remainder of the boundary region.
The Commission reviewed its conclusions and recommendations in the Interim Report. Most
conclusions remain the same, others have been modified, and two have been added. Although
the thrust of the final recommendations parallels that of the interim report, some of the
recommendations in this report have been revised in the light of the Commission's further
consideration of the issues; some new recommendations have also been added.
A glossary of terms used in this report is provided in Appendix 5.
Section 2 - The Great Lakes System:
The Great Lakes Basin lies within eight states and two provinces and comprises the
lakes, connecting channels, tributaries, and groundwater that drain through the
international section of the St. Lawrence River. The waters of the Great Lakes Basin are a
critical part of the natural and cultural heritage of the region, of Canada and the United
States, and of the global community. About 40 million people reside in the Basin itself1. Spanning
over 1,200 km (750 mi.) from east to west, these freshwater seas have made a vital
contribution to the historical settlement, economic prosperity, culture, and quality of
life and to the diverse ecosystems of the Basin and surrounding region.
The waters of the Great Lakes have been a fundamental factor in placing the region among
the world's leading locations in which to live and do business. Water contributes to
the health and well-being of all Basin residents, from its use in the home to uses in
manufacturing and industrial activity, in shipping and navigation, in tourism and
recreation, in energy production, and in agriculture. The Great Lakes are, however, more
than just a resource to be consumed; they are also home to a great diversity of plants,
animals, and other biota.
The waters of the Great Lakes are, for the most part, a nonrenewable resource. They are
composed of numerous aquifers (groundwater) that have filled with water over the
centuries, waters that flow in the tributaries of the Great Lakes, and waters that fill
the lakes themselves. Although the total volume in the lakes is vast, on average less than
1 percent of the waters of the Great Lakes is renewed annually by precipitation, surface
water runoff, and inflow from groundwater sources2.
Lake levels are determined by the combined influence of precipitation (the primary source
of natural water supply to the Great Lakes), upstream inflows, groundwater, surface water
runoff, evaporation, diversions into and out of the system, consumptive use, dredging, and
water level regulation. Because of the vast water surface area, water levels of the Great
Lakes remain remarkably steady, with a normal fluctuation ranging from 30 to 60 cm (12-24
in.) in a single year.
Climatic conditions control precipitation (and thus groundwater recharge), runoff, and
direct supply to the lakes, as well as the rate of evaporation. These are the primary
driving factors in determining water levels. With removals and in-Basin consumptive use
remaining relatively constant, during dry, hot-weather periods, inflow is decreased and
evaporation increased, resulting in lower lake levels and reduced flows. During wet,
colder periods, the opposite situation develops: higher levels and increased flows.
Between 1918 and 1998, there were several periods of extremely high and extremely low
water levels and flows. Exceptionally low levels were experienced in the mid-1920s,
mid-1930s, and early 1960s. High levels occurred in 1929-30, 1952, 1973-74, 1985-86, and
1997-98. Studies of water level fluctuations have shown that the Great Lakes can respond
relatively quickly to periods of above-average, below-average, or extreme precipitation,
water supply, and temperature conditions.
Great Lakes levels and lake level interests are highly sensitive to climatic variability,
as illustrated by the impact of high water levels in the early 1950s and mid-1980s and of
low water levels in the 1930s and mid-1960s. Significant variability will continue whether
or not human-induced climate change is superimposed on natural fluctuations. An example of
how quickly water levels can change in response to climatic conditions occurred during
1998-99, when the water levels of Lakes Michigan-Huron dropped 57 cm (22 in.) in 12 months.
Studies have concluded that the hydraulic characteristics of the Great Lakes system are
the result of both natural fluctuation and, to a lesser extent, human intervention3. Control
works that are operated under the authority of the International Joint Commission have
been constructed in the St. Marys River at the outlet of Lake Superior and in the St.
Lawrence River below the outflow from Lake Ontario. The level of Lake Erie has been
increased by obstructions in the Niagara River, including a number of fills on both sides
of the river, with a cumulative effect of about 12 cm (4.8 in.). Dredging in the
connecting channels has had a relatively significant impact on lake levels, even in
comparison to natural fluctuations. Connecting channels and canals that have been dredged
to facilitate deep-draft shipping have permanently lowered Lakes Michigan-Huron by
approximately 40 cm (15.8 in.). Although dredging in the connecting channels can have a
significant effect, its impact is greatest on lakes above the point of dredging, with
downstream interests still receiving the total amount of water flowing through the system.
Out-of-basin diversions or other removals and consumptive uses, by contrast, reduce water
levels both above and below the actual point of withdrawal and also reduce flows in the system.
Diversions have been constructed to bring water into the Great Lakes system from the
Albany River system in northern Ontario at Long Lac and Ogoki. They also have been
constructed to take water out of the system at Chicago and, to a much lesser extent,
through the Erie Canal. At present, more water is diverted into the system than is taken
out. A few other diversions on the border of the Basin move water in and out of the Basin
and have negligible effect. The volume of diversions out of the Basin, of other removals,
and of consumptive uses exceeds the volume of water brought into the Basin by diversions
and other artificial means. Water is also diverted around Niagara Falls for hydroelectric
power generation, and water is diverted from Lake Erie to Lake Ontario through the Welland Canal.
Groundwater is important to the Great Lakes ecosystem because it provides a reservoir for
storing water and for slowly replenishing the Great Lakes through base flow in the
tributaries and through direct inflow to the lakes. Groundwater also serves as a source of
water for many human communities and provides moisture and sustenance to plants and other biota.
The Great Lakes Basin is home to a diverse range of fish, mammals, birds, and other biota.
The interplay between human activity and the natural order of the Lakes is complex and
only partially understood. Human activity is altering the biological diversity and the
socioeconomic structure of the Great Lakes Basin. Not only has there been some loss of
species in the Lakes, but there has also been the introduction and establishment of alien
invasive species like the lamprey eel, the zebra mussel, and the goby fish through
channels built to foster transportation and electricity. Urbanization and farming have
changed the hydrology of the Lakes by reducing wetlands and other natural habitats and by
altering the speed at which runoff reaches the lakes4.
Section 3 - Water Uses in the Great Lakes Basin
The Commission has conducted an examination of water use data in the Great Lakes Basin.
Water uses are presented in two categories: (1) consumptive uses estimated from water
withdrawal data and (2) removals. Close to 90 percent of withdrawals are taken from the
lakes themselves, with the remaining 10 percent coming from tributary streams and
groundwater sources (Figure 2-A)5.
In its Interim Report issued in August 1999, the Commission used the most current data
that were available at that time for its analysis: 1993 data drawn from the Regional
Water Use Data Base, maintained by the Great Lakes Commission (GLC) on behalf of the Great
Lakes states and provinces6. These
data did not include consumptive use figures for the Chicago urban area.
Since the Interim Report, the GLC has provided the Commission with more recent water use data.
Although most of these data are concentrated in the years 1994-98, not all of the data
fall into this time frame8. Because
the data span several years and the methods of data collection vary from one jurisdiction
to another, trend analysis and jurisdictional comparison are difficult. In some instances,
there are large differences between the two sets of data in water use by sector presented
by some individual jurisdictions; the reasons for these differences are not always clear.
The Commission is of the view that analysis of the 1994-98 water use data by sector and
jurisdiction is of limited value. It decided to focus instead on the overall aggregate
Basin figures for withdrawals and consumptive use, and compared these figures with the
equivalent 1993 numbers, including Chicago consumption data.
The Commission also looked at Great Lakes Basin water use data, extracted from national
databases compiled by the U.S. Geological Survey (USGS)9 and
Environment Canada (EC)10.
For its five-year reports, the USGS analyzes state data, adjusts the data to compensate
for perceived deficiencies, and produces estimates of actual water use for the year of the
report. Environment Canada derives its information from Statistics Canada surveys of major
water users in the Basin, not from provincial data. Environment Canada's water use
data tend to be lower than data provided by the provinces to the GLC's Regional Water
Use Data Base, since provincial data are generated from water license permits as opposed
to actual withdrawals. Like the USGS, Environment Canada's treatment of data is
viewed as consistent over the years. As with the 1994-98 GLC data, the Commission
concentrated on Basin aggregate numbers for withdrawals and consumptive use, mainly
because of the somewhat different water use sector category and classification systems
utilized by the two federal agencies.
Table 1 provides data (rounded) for withdrawals and consumptive use calculated from the various
databases above. All tables and charts in this final report now reflect data for the
Chicago urban area. The data indicate a range for water use in the Great Lakes Basin. The
percentage of water consumed is approximately the same for all data sources, ranging from
4.4 percent to 4.6 percent.
For consumptive use, the Commission determined that the 1993 data, now updated with the
inclusion of full water use data for Chicago, would be the basis for its final report. The
Great Lakes Commission stated that the 1993 data were sufficiently comprehensive and
consistent across all jurisdictions, were the product of a quality assurance and control
process by its committee of water resource managers, and provided the best possible
snapshot of water use in the Basin.
In 1993, consumptive use in the Great Lakes Basin was estimated to be 121 cms (4,270 cfs)
as compared to a withdrawal of about 2,493 cms (88,060 cfs) (Figure 2-B).
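As a quick arithmetic check on the two flow units used throughout this report (1 cms is approximately 35.31 cfs; the rounding below is ours, not the report's):

\[
121 \times 35.31 \approx 4{,}270\ \mathrm{cfs}, \qquad
2{,}493 \times 35.31 \approx 88{,}000\ \mathrm{cfs}, \qquad
\frac{121}{2{,}493} \approx 4.9\%,
\]

which is consistent with the roughly 5 percent average consumption rate cited later in this section.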
The 1993 consumptive use in the Great Lakes Basin can be summarized as follows:
- By country: Canada, 33 percent, and the United States, 67 percent, with per capita
consumptive use being approximately equal for the two countries.
- By jurisdiction: Ontario, 27 percent; Michigan, 21 percent; Wisconsin, 20 percent;
Indiana, 7 percent; New York, Quebec, and Ohio, 6 percent each; Illinois, 4 percent;
Minnesota, 2 percent; and Pennsylvania, less than 1 percent (Figure 2-C).
- By type of water use: irrigation, 29 percent; public water supply, 28 percent;
industrial use, 24 percent; fossil fuel thermoelectric and nuclear uses, 6 percent each;
self-supplied domestic use 4 percent; and livestock watering, 3 percent (Figure 2-D).
The percentage of withdrawn water that is consumed within the Great Lakes system varies
with the type of use to which the water is put. When water is used for irrigation, over 70
percent is consumed11.
At the other extreme, when water is used for thermoelectric power, less than 1 percent is
consumed. The percentage of water lost to the Basin when it is used for public supply and
for industrial purposesother large water-using categoriesis of the order of 10
percent for each (Figure
3). As previously indicated the average consumption rate, considering all types of
uses, is approximately 5 percent.
Consumptive use data for groundwater are not available for most jurisdictions. Groundwater
withdrawal in the Great Lakes Basin is estimated to be generally between 3 percent and 5
percent of the total water withdrawal in the Basin. This figure, however, greatly
understates the importance of groundwater to the Basin population. The USGS estimates that
over 8 million people on the U.S. side of the border rely on groundwater as their source
of drinking water, and groundwater is the most common source of bottled water. The effects
of groundwater withdrawal may therefore be of concern on a local or subregional basis,
particularly with respect to urban sprawl, even if withdrawals do not have a major impact
on the overall water budget of the Basin12.
The Commission has developed insights into trends in water use and their impact on
potential future water demands. These insights were derived from a simple extension of
trends established over the previous decade. The variability in existing data complicates
not only analysis of past and present trends, but also the task of predicting the future.
All predictions are heavily dependent on the assumptions underlying them and on an
accurate understanding of the present starting point. Factors such as climate change could
encourage the increased use of water for irrigation and other purposes. On the other hand,
continued improvement in water demand management as well as in water conservation might
help to slow any increase in withdrawals for consumptive use within the Basin. Because
population will increase, there is a greater probability of increasing use in the future
than there is of decreasing use. Projections presented below extend to 2020. The
Commission believes that water use is likely to increase modestly by 2020 and that
projections beyond this point should be considered highly speculative.
Thermoelectric Power Use. At thermoelectric power plants, water is used
principally for condenser and reactor cooling. In the United States, thermoelectric
withdrawals have remained relatively constant since 1985 and are expected to remain near
their current levels for the next few decades. In Canada, modest increases are expected to
continue along with population and economic growth.
Industrial and Commercial Use. In the United States, industrial and commercial
water use has declined in response to environmental pollution legislation, technological
advances, and a change in the industrial mix from heavy metal production to more
service-oriented sectors. A similar trend is evident in Ontario, so combined use is
expected to gradually decline through 2020.
Domestic and Public Use. In the United States, water use for domestic and public
purposes in the Great Lakes Basin generally increased from 1960 to 1995 and is expected to
climb gradually through 2020. In Ontario, however, the modest downward trend established
in recent years because of water conservation efforts is expected to continue.
Agriculture. In the United States, water use for agriculture in the Great Lakes
region increased fairly steadily from 1960 to 1995 and is expected to continue to grow. In
Canada, the rate of increase was somewhat greater, so that combined projections indicate a
significant increase by 2020. Climate change could increase even further the competitive
advantage in agriculture the Basin has as a result of its relative abundance of water.
Total Water Use. There is agreement that water withdrawal will increase in the
future, although it is impossible to say with confidence just how much the increase will be.
There is, however, no such agreement on consumptive use. For example:
- The USGS and the U.S. Forest Service both estimate that water withdrawals in the U.S.
portion of the Great Lakes Basin could rise about 2 percent from 1995 to 2040.
- The USGS forecasts a decline of 2 percent to 3 percent in consumptive use of water in
the U.S. section of the Great Lakes by 2020.
- A consultant to the study team developed a trend line for the period 1995-2020 that has
consumption rising by 27 percent in the U.S. portion of the Basin, by 19 percent in the
Canadian portion of the Basin, and by 25 percent in the whole Basin.
- The same consultant also produced estimates for a "conservation" scenario that
projected rises in consumption by 2020 in the U.S. portion of the Basin of 4 percent, in
the Canadian section of 1 percent, and in the total Basin of 3 percent.
The above figures may represent a range of possibilities. What is clear is that water
managers will need to manage the resource carefully.
Removals are waters that are conveyed outside their basin of origin by any means. The
following paragraphs discuss current removals by diversion, other types of removals such
as removal by marine tanker, bottled water, or ballast water, and the potential for future
diversions and other removals. Some past diversion and removal proposals are summarized in
Current Diversions. Water diversions into and out of the Great Lakes Basin are
summarized in Figure 4
and by the accompanying data in Table 2.
The U.S. Supreme Court has authorized an average removal of 3,200 cfs (91 cms) from Lake
Michigan into the Mississippi River system through the Chicago Diversion.
This is the only major diversion out of the Great Lakes Basin. From 1981 to 1995, the
Chicago Diversion, as reported by the Corps of Engineers, has averaged 3,439 cfs (97 cms),
which is 239 cfs (6.9 cms) more than the U.S. Supreme Court limit of 3,200 cfs (91 cms).
Pursuant to the 1996 Memorandum of Understanding, the state of Illinois has agreed to
repay the cumulative flow deficit by the year 2019.
The Long Lac and Ogoki
diversions into Lake Superior from the Albany River system in northern Ontario are the
only major diversions into the Basin. These two diversions represent 6 percent of the
supply to Lake Superior.
At present, more water is diverted into the Great Lakes Basin through the Long Lac and
Ogoki diversions than is diverted out of the Basin at Chicago and by several small
diversions in the United States. If the Long Lac and Ogoki diversions were not in place,
water levels would be 6 cm (2.4 in.) lower in Lake Superior, 11 cm (4.3 in.) lower in
Lakes Michigan-Huron, 8 cm (3.1 in.) lower in Lake Erie, and 7 cm (2.8 in.) lower in Lake Ontario.
Aside from these major diversions, there are also a few small diversions15. Three
were implemented in the 19th century to facilitate waterborne commerce between the Great
Lakes and neighboring drainage basins. These are the Forestport, New York, diversion of
water from the Black River tributary of Lake Ontario into the Erie Canal and Hudson River
basin; the Portage Canal diverting Wisconsin River water from the Mississippi River system
into the Lake Michigan basin; and the Ohio and Erie Canal diverting water from the Ohio
River basin into the Cuyahoga River of the Lake Erie basin. All three are now used
primarily for recreational purposes.
In recent years, London, Ontario and Detroit, Michigan have taken water from Lake Huron
for municipal purposes, discharging their effluent to Lake St. Clair and the Detroit
River, respectively. The Raisin River Conservation Authority in Ontario has, with the
approval of the Commission, taken water from the international section of the St. Lawrence
River to maintain summer flows in the Raisin River. Ohio has reported very small
diversions in Lorain County and the City of Ravenna, both communities whose customers
straddle the Lake Erie-Ohio basin divide. The information in this section covers the
diversions of which the Commission is aware. There may be others.
Two U.S. communities Pleasant Prairie, Wisconsin, which lies outside the Basin, and
Akron, Ohio, whose water district straddles the Great Lakes Basin divide have
obtained permission under U.S. law (the Water Resources Development Act of 1986) to take
water from the Great Lakes on the condition that they return an equivalent volume of water
over time to the Basin. In 1988, the Great Lakes governors approved the Pleasant Prairie
Diversion and agreed that a like amount of water would be returned to the Lake Michigan
Basin by 2005. Although this diversion was below the consultation trigger amount in the
Great Lakes Charter, Ontario and Quebec were consulted. Quebec concurred, but Ontario did
not. The diversion was implemented. After 2005, the diversion would provide no net
loss to Lake Michigan. With respect to Akron, the governors approved; Ontario
concurred, and Quebec did not object. The state of Ohio has already increased the flow of
water into the Cuyahoga River from the Ohio/Portage system to support the Akron Diversion,
and there is no loss of water to the Great Lakes from this diversion.
In addition to these diversions in and out of the Great Lakes Basin, the Welland and Erie
Canals divert water between subbasins of the Great Lakes and are considered intrabasin diversions.
In 1997, another small intrabasin diversion was built from Hamilton to the Haldimand
region in Ontario.
Other Removals. Public concern has been focused on the potential movement of
freshwater in bulk beyond the Great Lakes Basin by ocean tankers. To date, no contracts
are in place, and no regular trade has begun to ship water in bulk from the Great Lakes
Basin or from North America as a whole18. For
almost two decades, however, entrepreneurs have actively pursued foreign markets and have
sought approval to export from jurisdictions on both the west and east coasts. When the
Interim Report was written, Alaska, Newfoundland, and Quebec were considering proposals to
export freshwater in bulk by ocean tankers, although both Newfoundland and Quebec have
since moved to prohibit such exports subject to exceptions described in Section 8 of this
The Commission has learned that one exporter in Alaska was shipping a small volume of
water, 378,500 liters per week (100,000 gallons/week). The Commission understands that
orders for Alaskan water have fallen significantly since the beginning of 1999. The water
is placed in containers that are barged to Washington state, where the water is bottled.
It is then shipped to Alaska, Taiwan, and Korea. Although it seems clear that climate
change and continued reports of worldwide water shortages will continue to keep discussion
of bulk water shipments alive, the cost of such shipments makes it unlikely that there
will be serious efforts to take Great Lakes water to foreign markets, and cost will
continue to serve as an impediment to bulk shipments from coastal waters. Thus far,
companies in these jurisdictions have captured only small markets for bottled water.
Analysis of the bottled water industry indicates that when intrabasin trade in bottled
water is subtracted from the total trade, the Basin imports about 14 times more bottled
water than it exports: 141 million liters (37 million gallons) in 1998 imported vs.
10 million liters (2.6 million gallons) exported. At this time, bottled water appears to
have no effect on water levels in the Great Lakes Basin as a whole, although there could
be local effects in and around the withdrawal sites19.
Trade in other types of beverages is believed to be of a similar order of magnitude20. For
example, 272 million liters (72 million gallons) of bottled water were exported in 1998
from all of Canada to the United States. That represented 33 percent of all beverage
exports from Canada to the United States that year, compared with 44 percent for beer and
19 percent for soft drinks. Considering the extremely small magnitude of trade in bottled
water and other beverages, it would appear both impractical and unnecessary to treat
bottled water and other beverages any differently than any other products that either
include water or use water in their production processes.
In July 1999, there was a flurry of media interest in the bottled water situation in
Ontario. According to media reports, the Ontario government had issued permits authorizing
the withdrawal of 18 billion liters (4.8 billion gallons) of water per year for bottling
purposes, almost all from groundwater sources. Only about 4 percent of this volume is
currently being withdrawn, amounting to a flow of 0.02 cms (0.7 cfs), and Ontario is
reviewing whether groundwater supplies are adequate to satisfy the licenses that it has
issued to bottling companies. It appears that most of this water remains within the Great
Lakes Basin. While the Commission is sensitive to the potential importance of this matter
to local groundwater regimes, at this time the Commission believes that this is not a
significant issue with respect to the level of Great Lakes waters and that local effects
can be managed best at the local level.
Ballast water, which is used to stabilize vessels, has always been considered a
noncommercial item. No evidence has been found to suggest that any ballast water taken
from the Great Lakes Basin is sold abroad. It should be noted that water quality is not an
issue for the purpose of establishing ballast, but discharging ballast water can lead to
the introduction of alien invasive species. A number of these species are now prevalent
throughout the Great Lakes Basin. Over a recent nine-year period, the net loss of water
from the Great Lakes Basin as a result of ships taking on ballast water in the lakes was
equivalent to an average annual flow of 0.02 cms (0.7 cfs)21.
Potential for Future Diversions and Removals. Many speakers at the public
hearings on the Interim Report said the Commission too readily dismissed the threat of
major diversions from the Great Lakes to other regions, especially the Southwestern
states. They indicated that while an analysis of past proposals for mega-diversions
indicates that they may not have been feasible, at least from an economic standpoint, this
does not mean that proposals of this kind could never be pursued for economic or other
reasons. While the Commission acknowledges the anxiety expressed by some at the hearings,
the Commission continues to believe that the era of major diversions and water transfers
in the United States and Canada has ended. Barring significant climate change, an
overcoming of engineering problems and of numerous economic and social issues, and an
abandonment of national environmental ethics, the call for such diversions and transfers
will not return. At present, there do not appear to be any active proposals for major
diversion projects either into or out of the Basin. There is little reason to believe that
such projects will become economically, environmentally, and socially feasible in the foreseeable future.
In the United States, the era of major diversions and water transfers was linked to the
transcontinental movement of population and industry, which fostered a dynamic of resource
exploitation to support new settlements and new economic activity. In the western United
States, engineers created, at tremendous cost, networks of dams, reservoirs, and canals to
harvest water sources to support power generation, irrigation, human consumption, and
sanitation. As the west moves into the 21st century, concerns are turning to ecosystem
restoration and environmental remediation, and sustainable management has begun to guide
regional planning principles.
The mega-projects that have already been completed targeted the most easily accessible
areas. Future mega-diversions would present many additional engineering challenges.
Although most of these challenges could be overcome, the costs of such projects, whether
by pipeline or channel, remain enormous. Not only must capital be invested in the
construction of the project, but also operating and maintenance funds must be found to
support the effort. Every study of such projects has highlighted the high energy costs
associated with the pumping of water over topographic barriers. Mega-diversions also
require rights-of-way for their passage and security for the products being transported,
which would be difficult to obtain. The environmental costs of such projects in terms of
disruption of habitat and species movement are enormous. A project similar to the current
California Aqueduct would represent 75 percent of the current consumptive use in the Great
Lakes Basin and would, prima facie, have a major environmental impact on aquatic and
terrestrial resources. Increasingly, water managers recognize the validity of pricing
water at its true value, making it far more cost effective to increase the available
supply of water by using existing supplies more efficiently as they are allocated among competing uses.
The 1998 Report of the Western Water Policy Review Advisory Commission22
confirmed earlier expert analysis that Western states have options for water that are less
expensive and less open to legal challenge than long-distance import of water from the
Columbia, Missouri-Mississippi, or Great Lakes basins. The population of the Western
states is continuing to grow faster than the national average. It is an urban population
and may be able to afford to buy and lease existing water rights from the less-productive
agricultural sector. Water savings are already being realized by some cities in the
Southwest as a result of conservation measures and improved irrigation practices. The fact
that agriculture still accounts for almost 80 percent of water withdrawals in Western
states, most of it for low-value crops like alfalfa and corn, indicates that there will
continue to be significant opportunities for reallocation of existing supplies for the foreseeable future.
Even if mega-diversions were technically and economically feasible, current water
management thinking recognizes that the political difficulties of managing water
effectively increase as one moves beyond a single basin. Although it can be very difficult
to do so effectively, those who share a basin generally recognize the importance of
working together to manage both excess and shortfall, as well as water quality. Agreeing
to cooperate across both political boundaries and basin divides is even more difficult,
and it would be impossible for Great Lakes jurisdictions to guarantee an uninterruptible
supply to a non-Basin consumer of water. Some interests in the Great Lakes Basin, such as
riparian homeowners, might welcome a means of removing water from the Basin during periods
of extremely high levels. Most interests, including in-stream interests, commercial
navigation, and recreational boating, would be adamantly opposed to such removals in
periods of low levels. Diversions during droughts would, however, be difficult to
interrupt because of the dependency that diversions create among recipients. The
Commission recognizes that once a diversion to a water-poor area is permitted, it would be
very difficult to shut it off at some time in the future.
The Chicago Diversion, where infrastructure already exists, is a possible exception to the
technical and economic impediments to additional major diversions. There were expressions
of anxiety in public hearings about this possibility, which would, of course, lower Lakes
Michigan-Huron and the downstream system, impair navigation, and reduce hydroelectric
power generation in the Niagara and St. Lawrence Rivers. In fact, during a period of high
water in the Great Lakes in the mid-1980s, a Commission study team evaluated the
possibility of increasing the Chicago Diversion to reduce water levels. Shortly
thereafter, there were calls, during a period of low water in the Mississippi River Basin,
to increase the diversion for a limited period to ease navigation difficulties on the
Mississippi River. In the 1980s, further diversions from the Great Lakes were reviewed,
including the possibility of increasing the Chicago Diversion to replace water diverted
from the Arkansas River Basin to help replenish the Ogallala aquifer23. In all
cases, it was determined that such diversions would either not achieve the intended
objectives or were too expensive to be practical. Any effort to increase the diversion in
periods of either high or low water would have to overcome potential opposition from some
downstream Mississippi Basin states and from Canada, the reluctance of any Great Lakes
states to allow any increase in the diversion lest it become permanent, and the need for
U.S. Supreme Court approval.
The Chicago Diversion was designed for a flow of 10,000 cfs (283 cms). When the Boundary
Waters Treaty was signed in 1909, the U.S. government had already limited the Chicago
Diversion to 4,167 cfs (118 cms)24.
Subsequent urban development limits the diversion to 8,700 cfs (246 cms); flows above this
level will damage property along the diversion.
In the short run, pressures for small removals via diversion or pipeline are most likely
to come from growing communities in the United States just outside the Great Lakes Basin
divide where there are shortages of water and available water is of poor quality. The cost
of building the structures needed to support such diversions would be relatively small by
comparison to the cost of building structures to move water vast distances. Population growth
suggests that several communities that straddle or are near the Great Lakes Basin divide,
particularly communities in Ohio, Indiana, and Wisconsin, may look to the Great Lakes for
a secure source of municipal and industrial water supplies in the future. Such diversions
would require the approval of the Great Lakes governors under the Water Resources
Development Act of 1986 (WRDA), and they would fall within the provisions of the Great
Lakes Charter. The only diversions approved in the United States under WRDA procedures to
date have resulted in no net loss of water to the Great Lakes Basin. In Ontario, because
of geography, there are currently no such pressures along the border of the Basin to draw
on Great Lakes water, nor are there likely to be any in the future.
At a lesser level, water may be transferred in bulk by trucks or marine tankers. Because
water is heavy, it is expensive to move. The geography of the region and the inability of
the St. Lawrence Seaway to handle large tankers are such that the commercial viability of
long-distance trade in bulk water from the Great Lakes appears uneconomical. Moreover,
other countries with abundant water supplies are located much closer to prospective
foreign markets than are the Great Lakes. Even the California-Mexico border region
could be served more effectively from the Pacific Northwest, Alaska, and Panama than from
diversions or ocean tankers drawing water from the Great Lakes, and there are more readily
accessible sources of water on the East Coast of North America.
Towing large fabric bags filled with water is a variation on freshwater export by ocean
tanker. This technique has been used since late 1997 to provide water from the mainland to
some of the Greek islands and to the Turkish part of Cyprus26.
Apparently, these short-haul arrangements in the Mediterranean have reduced the cost of
delivery to under $1 U.S. per cubic meter, but the limited capacity of the Great
Lakes-St. Lawrence system and longer ocean distances may rule out the use of this
technology in the Great Lakes Basin.
The difficulty and the expense of moving water in bulk are forcing water managers around
the world to place greater emphasis on the efficient use of existing local sources.
Treated domestic and industrial wastewaters are being used for many purposes, including
lawn watering and agricultural irrigation. As demand for urban water supplies increases,
communities are seeking to manage their demands rather than increase their supplies. In
some areas, implementation of conservation techniques has reduced demand by as much as 50
percent. In other areas, water rights markets have shifted available water from
agricultural to urban uses.
Desalination is another promising alternative to long-distance diversion (or shipment) of
water. Santa Barbara chose during the California drought a decade ago to build a
desalination plant in order to guarantee a reliable supply of water in preference to
importing water by tanker and/or reducing system-wide use. More recently, Quebec has
concluded that in most instances, the cost of desalination would be about half that of
transporting freshwater long distances by ship. By late 2002, Tampa, Florida, will begin
blending desalinated water with freshwater at costs that are competitive with the costs of
developing new freshwater sources. Desalination technology is improving rapidly. Hybrid
desalination systems, which combine thermal and membrane filtration, are lowering costs
significantly, and throughout the world, new desalination projects worth billions of
dollars are scheduled to come on-line over the next two decades27.
Privatization. It is evident from the Commission's public hearings that many
people are concerned about the growing trend toward private sector involvement in water
utilities worldwide. Privatization incorporates a spectrum of privatepublic
relationships such as entirely private, private with public oversight, and private
management contracts. Governments are divesting themselves of their investments and
services in order to promote capital inflow, efficiency, and solvency28. For
example, Milwaukee, Toronto, Hamilton-Wentworth, and other cities in the Great Lakes
Basin are involving the private sector in water or wastewater systems. Private sector
involvement may lead to efficiencies, improved technology, and improved customer service.
In addition, other benefits include conservation, improved adherence to local and federal
regulations, and increased spending on research and development.
However, public divestiture of utilities may have its disadvantages. The public raised
concerns that profit-oriented private firms may act at the expense of the public since
profits are directly related to high rates of consumption, lower expenditures, and/or
higher rates in the water services industry. Also, there is some evidence that companies
may be more lax on public and environmental safety standards to increase profits because
there is little regulation and public accountability30.
An increasing amount of privatization will require that attention be paid to government
regulations and their enforcement to ensure that public goals with respect to such matters
as high water quality, other aspects of environmental quality, conservation, equity, and
efficiency are fully satisfied. This includes ensuring that public and private sector
water managers are held accountable for the achievement of these public goals and for
protection of public health.
Section 4 - Cumulative Effects
Human intervention has affected the Great Lakes ecosystem at the local level as well as
at the system-wide level, and the effects (impacts) are both short-term and long-term. The
Commission has identified the basic physical (abiotic or nonliving) impacts of human use
and activity on the current water levels in the Basin and has worked to identify the
ensuing impacts of these and possible future changes on the living components of the
ecosystem. Human interventions (withdrawals, consumptive uses, regulation, dredging, land
use, etc.) are inherently cumulative. The impact of localized, small-scale activities may
be difficult to quantify on an individual basis but, collectively, they can significantly
alter the level and flow regime and associated ecological conditions.
Existing consumptive uses have lowered the levels of the Great Lakes from less than 1 cm
(0.4 in.) to 6 cm (2.4 in.) (Table 3). This
impact has been far exceeded by other anthropogenic activities. The inflows from the Long
Lac and Ogoki Diversions have raised lake levels, and the outflows from inter- and
intrabasin diversions have lowered lake levels. The largest human-induced impact on lake
levels has come from the channel work on the St. Clair and Detroit Rivers; this dredging
and mining for gravel has lowered the levels of Lakes Michigan and Huron by 40 cm (15.8
in.). The Commission's orders of approval governing the operations of the structures on
the St. Marys and St. Lawrence Rivers have established desirable ranges for levels in
Lakes Superior and Ontario to avoid very low or very high levels and the consequent
impacts that very low and very high levels have on Great Lakes interests.
There is interaction among these changes, bringing about cumulative impacts. Cumulative
impacts in ecosystems involve past, present, and reasonably foreseeable effects that are
seldom simply the sum of the changes. Even modest changes induced by individual, discrete
actions have incremental and other cumulative impacts on both a localized and system-wide
basis. These implications become more pronounced as one proceeds downstream through the
Great Lakes-St. Lawrence system.
Although changes to lake levels and outflows are relatively easy to determine, the impact
of these changes is subject to interpretation. The impacts of the changes in levels on the
ecosystem as a whole, and especially on its lake and river subsystems, are not well
understood. For example, construction of the power and navigation projects on the St.
Lawrence River in the late 1950s forever changed the character of the river. Some argue
that the environmental changes brought about by the project have done incalculable harm.
Others have built their lives on the basis of the new riverlake system and would be
devastated by a return to pre-project conditions. In fact, the overall effects of the
changed regime have not been fully assessed.
The Commission is aware of only one assessment of the overall effects of water diversions.
In 1979 the U.S. Army Corps of Engineers conducted an assessment of a major increase in
the Chicago Diversion on the Great Lakes31.
Experts participating in a Commission workshop on cumulative impacts concluded that it is
difficult to quantify with any degree of precision the ecological impacts of most water
withdrawals, consumptive uses, and removals32. In
particular, impact assessment data and information are lacking with respect to fisheries
productivity and composition, the extent and range of coastal wetlands, near-shore water
quality, habitat and the degree of slope lakeward of the habitat, and biodiversity.
The dynamic nature of the Great Lakes-St. Lawrence system and the multiplicity of
physical, chemical, and biological processes affecting ecosystem status challenge
science's ability to establish and characterize causal relationships between a given water
use and its impact on levels, flows, and fluctuations, on any observed changes in the
ecosystem, and on economic uses of the system. These challenges will always be difficult
to deal with, and additional research clearly is warranted in several areas33.
It is unlikely that cumulative assessment tools will ever be able to deal comprehensively
with all the uncontrollable and unknown factors and all the uncertainties, surprises, and
complex, nonlinear interrelationships that are inherent in a vast ecosystem. Nevertheless,
efforts to conduct such assessments must continue.
Given the uncertainties associated with future climate change, consumptive use, and
possible pressures for removals, and given the additional uncertainties associated with
impact assessment methodologies, a precautionary approach is appropriate. Toward this end,
consideration should be given to policies that are well advised from an ecological and
economic standpoint irrespective of climate change or unforeseen demands.
A literature review conducted in conjunction with the experts' workshop provided key
findings from studies related to assessment of impacts of changes in water levels and a
listing of methodologies that could be useful in assessing impacts of changes in water
levels. Through the literature review, it became evident that meaningful assessments have
been limited by unavailability of information and by a lack of science to support
analysis. Meaningful assessments are also limited by an inability to go beyond assessment
of individual impacts. The literature review pointed out the uncertainties associated with
conducting assessments and the variety of challenges faced in determining the appropriate
methodology to be used.
For the 21st century, there is a great deal of uncertainty regarding factors such as
future consumptive use, small-scale removals of water, and climate change. Despite this
uncertainty, present indications are that all three factors are likely to place downward
pressures on water levels, with reinforcing impacts. Although there are insufficient data
and inadequate scientific understanding to place precise estimates on the magnitude and
timing of such impacts, the impacts could be significant. This, and the prospect of adverse cumulative impact of new human interventions, suggests a need for great caution in dealing with those water use factors that are within the control of Basin jurisdictions.
Section 5 - Climate Change
Two decades after the 1979 World Climate Conference, there is still considerable debate
over how fast human-induced climate change will take place, how extreme it will be, how
dangerous such changes will be for ecosystems, including socioeconomic systems, and just
how aggressively the global community should seek to mitigate the issue. There are,
however, some points of consensus. The rate of increase in concentrations of greenhouse
gases in the atmosphere is related to human activity, and, at a minimum, a doubling of
carbon dioxide concentrations in the atmosphere will occur in the 21st century, with a
corresponding increase in the average global temperature of 1 to 4 degrees C. There is
also a reasonably strong consensus that the science is sound and that "the balance of
evidence suggests there is discernible human influence on the climate system."34
In recent decades, scientists have become increasingly concerned about changes taking
place in the atmosphere, particularly the increasing concentrations of greenhouse gases.
There is growing evidence that the changing composition of the atmosphere is beginning to
influence specific components of the hydrologic cycle, even though it is not yet possible
to differentiate such effects from the natural variability of Great Lakes levels. Over the
past several decades, trends in hydrologic variables in the Basin and in the vicinity of
the Basin have generally been consistent with changes projected by and inferred from
climate models, in terms of increases in temperature, precipitation, and evaporation.
Although it is not yet possible to differentiate such effects from the natural variability
of climate, these research results are generally what would be expected with
"enhanced greenhouse effect" warming.
Results from computer climate models have been used to explore impacts on various
water-related interests, assuming likely scenarios of future atmospheric greenhouse gas
concentrations and, in some cases, sulfate aerosol concentrations. The information from
these models has been used to develop climate scenarios that have been input to hydrologic
models. Early impact assessments, based on equilibrium 2 x CO2 scenarios, suggest global
warming will result in a lowering of water supplies and lake levels and in a reduction of
outflows from the Basin. Based on projections using several state-of-the-art models35, experts
from the U.S. National Oceanic and Atmospheric Administration (NOAA) and Environment
Canada believe that global warming could result in a lowering of lake level regimes by up
to a meter or more by the middle of the 21st century, a development that would cause
severe economic, environmental, and social impacts throughout the Great Lakes region.
Experts associated with the U.S. National Assessment on the Potential Consequences of
Climate Variability and Change indicate the possibility of both slightly increased and
decreased lake levels as a result of their analysis of climate models. The National
Assessment is focusing on two transient, coupled atmosphere-ocean general circulation models (GCMs) that generally result in increased precipitation and temperature in North America
as a whole, although one more dramatically than the other, in the long run (2090)37. Of
particular note, these two models reach different projected outcomes in 2030 and 2090 for
net supplies and water levels in the Great Lakes Basin38. Given
the large discrepancies in some results of the models, there continues to be a high degree
of uncertainty associated with the magnitude of potential changes.
Many analysts recognize that results from the analysis of general circulation models
indicate that global warming will change global precipitation patterns, with different
amounts of rainfall over the course of the year. Warmer conditions may also lead to more
precipitation falling as rain rather than as snow; less snow cover and shorter duration of
both snow and ice cover; earlier snow melt; more runoff in winter; and a greater
likelihood of less runoff in summer because of higher evaporation and the earlier onset of
spring melt, with less runoff because of less snow pack. Many analysts believe that there
will be increased frequency of heavy, short-duration rains in some regions interspersed
with dry spells, and more pronounced droughts. All these factors indicate a shift in the
peak volume and timing of rainfall and runoff, which may change the timing of increases
and decreases of lake water levels. Thus, areas that receive roughly the same amount of
total annual precipitation could be forced to alter water management practice
significantly to take into account large changes in seasonal patterns of precipitation.
The question with respect to average Great Lakes levels is whether, in the long term,
increases in evaporation due to global warming will significantly offset increases in
precipitation, thereby reducing net water supplies. It is impossible at this time to
conclusively differentiate shorter-term natural variability from any longer-term trend in
the historical record. Great Lakes levels and lake level interests are highly sensitive to
climatic variability, as illustrated by the impact of high water levels in the early 1950s
and the mid-1980s and of low water levels in the 1930s and the mid-1960s. Significant
variability will continue whether or not human-induced climatic change is superimposed on
these natural fluctuations. From a policy perspective, this uncertainty does not alter the
risk posed by climate change.
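The balance at issue in this paragraph can be written as a simple accounting identity: net basin supply is over-lake precipitation plus runoff minus over-lake evaporation. The sketch below is purely illustrative; the function and all numerical values are hypothetical and are not drawn from the Commission's analyses. It simply shows how a sufficiently large increase in evaporation can offset an increase in precipitation and reduce net supply.

```python
# Illustrative only: hypothetical values, not measured Great Lakes data.
# Net basin supply (NBS) is commonly expressed as
#   NBS = over-lake precipitation + runoff - over-lake evaporation
def net_basin_supply(precipitation_mm, runoff_mm, evaporation_mm):
    """All terms are equivalent depths (mm) over the lake surface."""
    return precipitation_mm + runoff_mm - evaporation_mm

baseline = net_basin_supply(precipitation_mm=850, runoff_mm=900, evaporation_mm=550)

# A hypothetical warmer climate: 5 percent more precipitation, 15 percent more evaporation.
warmer = net_basin_supply(precipitation_mm=850 * 1.05, runoff_mm=900, evaporation_mm=550 * 1.15)

print(f"Baseline NBS: {baseline:.0f} mm; warmer-climate NBS: {warmer:.0f} mm")
# The extra evaporation more than offsets the extra precipitation here,
# so net supply (and, over time, lake levels) would decline.
```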
Climate change suggests that some lowering of water levels is likely to occur. The
Commission's study team examined the subject of changing water levels and found that the
effects of high water levels have been dealt with in the recent past39.
However, should lower water levels occur, the factors noted below may be indicative of
some of the impacts that could be significant for the economy, the social fabric, and the
natural environment of the Great Lakes ecosystem40. It
should be noted that adaptation measures would moderate some of these impacts.
- There would be losses in hydroelectric power generation. Even though they would not be
nearly as severe as those projected in climate change scenarios, record low levels and
flows in the 1960s caused hydropower losses of between 19 percent and 26 percent on the
Niagara and St. Lawrence Rivers41. A small
proportion of these losses would be offset by lower heating costs, but this in turn would
be offset by increases in air conditioning costs.
- Great Lakes shipping costs could increase significantly because of reduced drafts in
shipping channels and increased dredging costs. At least some of these costs might be
offset by a longer shipping season.
- Flood damage in shoreline areas would decrease as long as new development was not
permitted to encroach on the newly exposed land.
- There would be significant detrimental effects on recreational boating and sport fishing42.
- Shoreline-based infrastructure would experience problems similar to those experienced in
the 1960s, including less attractive scenic views, inaccessible docking facilities, and
the need to modify water intakes and waste disposal outlets. Some shoreline properties may
become attractive to people looking for vacation homes near lakes because of low water levels.
- A reduction in the water levels of Montreal Harbour would have a major effect on all
deep-draft commercial navigation. The adaptation measures could include significant
channel dredging and the associated issue of where to put the dredge spoils.
- Finally, there could be reductions in freshwater discharges into the St. Lawrence
estuary, gulf, and beyond, affecting fish populations and other components of the St.
Lawrence and Atlantic ecosystems.
The analysis of the general circulation models suggests that a notable difference
between the results discussed above and previous climate change studies is the timing of
the change in lake levels and connecting channel flows. There is a need for further
research to help predict future weather and climate with more certainty and for impact
assessments that define the vulnerability. The continually developing research would
provide water managers with information so they may address coping mechanisms, such as developing water management plans to handle extremes, that alleviate the possible wide
range of climate change effects. At a minimum, cost-effective measures should be taken
that would modify those human activities that contribute to changes in climate and other
unsustainable environmental impacts on resources.
Although uncertainty is inherent in climate models, it should not be assumed that climate
change impacts on the Great Lakes Basin ecosystem would take place gradually over the next
several decades. Human-induced climate change will be superimposed on normal climate
variability and natural events like El Niño/La Niña. The timing and regional patterns of
precipitation and runoff could change and have a dramatic effect on water levels and
outflows. In summary, the Commission believes that considerable caution should be
exercised with respect to any factors potentially reducing water levels and outflows.
Section 6 - Groundwater
Groundwater is an important source of water for many segments of the Great Lakes
community. Humans use groundwater primarily for public supply and for irrigation,
industrial, commercial, and domestic purposes. Some members of the biotic community (for example, cave-dwelling fish, cave-dwelling crayfish, cave-dwelling insects, some kinds of fungi, and some microorganisms) spend all their lives underground and are completely dependent upon groundwater. Additionally, the vadose zone (the occasionally saturated permeable substrate) is home to a number of organisms, many of them microorganisms, that emerge from dormancy during periods of water saturation and return to dormancy during periods of desiccation.
Recent U.S. studies have estimated that groundwater makes a significant contribution to
the overall water supply in the Great Lakes Basin43. Indirect groundwater discharge
accounts for approximately 22 percent of the U.S. supply to Lake Erie, 33 percent of the
supply to Lake Superior, 35 percent of the supply to Lake Michigan, and 42 percent of the
supply to Lakes Huron and Ontario (Figure 5). Over most
of Ontario, the contribution of groundwater to stream flow is less than 20 percent; this
is because of the predominance of silt and clay or poorly fractured bedrock at the
surface. However, in some portions of the Lake Erie and Lake Ontario basins, where sand
and gravel are found at the surface, the contribution of groundwater to local streams can
be as high as 60 percent or more.
Groundwater's contribution to stream flow is significant as, among other things, it
ultimately affects lake levels. Groundwater discharge is also a significant determinant of
the biological viability of tributary streams. In undisturbed areas, groundwater discharge
throughout the year provides a stable inflow of water with generally consistent dissolved
oxygen concentration, temperature, and water chemistry. In disturbed areas where, for
example, land uses have significantly reduced groundwater flow to a stream, stream reaches
may experience diminished biological viability. Where land uses add contaminants, streams
may also lose viability.
In the Great Lakes Basin, the groundwater system is recharged mainly by infiltration and
percolation of precipitation. Withdrawal of groundwater at rates greater than the recharge
rate causes water levels in aquifers to decline. If the amount of decline is sufficient,
water may be drawn from streams or lakes into the groundwater system, thus reducing the
amount of water discharging to the Great Lakes. This is indicative of the inextricable
link between ground and surface waters.
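As a rough illustration of the groundwater balance described above, the sketch below compares a hypothetical withdrawal rate with a hypothetical recharge rate. The function and figures are invented for illustration only and are not estimates for any actual aquifer.

```python
# Illustrative only: hypothetical aquifer figures, not real estimates.
def aquifer_storage_change(recharge_mld, withdrawal_mld):
    """Net change in aquifer storage, in million liters per day (mld)."""
    return recharge_mld - withdrawal_mld

change = aquifer_storage_change(recharge_mld=400, withdrawal_mld=650)
if change < 0:
    print(f"Storage declining by {-change} mld; induced inflow from streams or lakes "
          "may make up part of the deficit, reducing discharge to the Great Lakes.")
else:
    print(f"Storage stable or rising ({change} mld surplus).")
```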
Groundwater withdrawals at rates high enough to warrant concern have been and are taking
place at a number of locations. Among the best known of these are high-volume withdrawals
in the Chicago-Milwaukee metropolitan region. There, in 1979, in the eight-county northeastern Illinois area, deep-aquifer withdrawals from the Cambrian-Ordovician aquifer system peaked at 693 million liters per day (mld) (183 mgd). During this same period, maximum pumpage (withdrawals) for Milwaukee from the Cambrian-Ordovician
aquifer system reached 212 mld (56 mgd). This large-scale pumping produced cones of
depression in aquifers under Milwaukee and Chicago, with declines in the levels of
groundwater as great as 114 and 274 m, respectively (375 and 900 ft., respectively). As a
result of lower pumping rates since 1980, groundwater levels in the Chicago area have
recovered as much as 76 m (250 ft.) in some localities, but groundwater levels are
continuing to decline in the southwestern part of the Chicago metropolitan area44.
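The pumping figures above mix metric and U.S. customary units. The helper below is offered only as a convenience for checking the conversions (about 3.785 liters per U.S. gallon and about 3.281 feet per meter); it is not part of any cited study.

```python
# Approximate conversion factors for the units used above.
LITERS_PER_US_GALLON = 3.785
FEET_PER_METER = 3.281

def mld_to_mgd(mld):
    """Million liters per day to million U.S. gallons per day."""
    return mld / LITERS_PER_US_GALLON

def meters_to_feet(m):
    return m * FEET_PER_METER

print(f"693 mld = {mld_to_mgd(693):.0f} mgd")    # matches the 183 mgd cited above
print(f"212 mld = {mld_to_mgd(212):.0f} mgd")    # matches the 56 mgd cited above
print(f"114 m = {meters_to_feet(114):.0f} ft.")  # close to the 375 ft. cited above
print(f"274 m = {meters_to_feet(274):.0f} ft.")  # close to the 900 ft. cited above
```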
Groundwater consumption and groundwater recharge in the Great Lakes Basin are not well
understood. Reasons for this include the following:
- There is no unified, consistent mapping of boundary and transboundary hydrogeological units.
- There is no comprehensive description of the role of groundwater in supporting ecosystems.
- Although some quantitative information is available on consumptive use, in many cases
the figures are based on broad estimates and do not reliably reflect the true level and
extent of consumptive use.
- There are no simplified methods for identifying large groundwater withdrawals near
boundaries of hydrologic basins.
- Estimates are needed of the effects of land-use changes and population growth on
groundwater availability and quality.
- There is inadequate information on groundwater discharge to surface water streams and
inadequate information on direct discharge to the Great Lakes.
- There is no systematic estimation of natural recharge areas.
In the strictest sense, a groundwater basin may be defined as a hydrogeologic unit
containing one large aquifer or several connected and interrelated aquifers. In practice,
the term "groundwater basin" is loosely defined and implies an area containing a
groundwater flow system capable of storing or furnishing a substantial water supply. The
groundwater basin includes both the surface area and the permeable materials beneath it.
The concept of a groundwater basin becomes important because of the hydraulic continuity
that exists for the contained groundwater resource. A groundwater basin may or may not
coincide with a surface physiographic feature; that is, water in an aquifer under a
lake or river may actually flow away from the lake or river and be deposited in a
different surface-water basin. In a valley between mountain ranges, the groundwater basin
may occupy only the central portion of the stream drainage basin. In limestone and
sandhill areas, drainage and groundwater basins may have entirely different
configurations. The physical boundaries of the groundwater basin are formed, in some
instances, by the physical presence of an impermeable body of rock or a large body of water.
Other boundaries form as a result of hydrologic conditions. These boundaries are hydraulic
boundaries that include groundwater divides. A groundwater divide can be visualized as a
ridge in the water table from which groundwater moves away in both directions at right
angles to the ridge line. Groundwater divides form hydraulic boundaries whose locations
are influenced by the presence of surficial features (for example, topographic lows that hold major rivers and topographic highs from which waters drain) and by hydraulic
stresses including pumping from wells and recharge. All hydraulic boundaries, including
those that coincide with physical features, are transitory in that these hydraulic
boundaries may shift location or disappear altogether if hydrologic conditions change.
Groundwater basins may have boundaries that are considerably different from the boundary
of the surface water basin under which the groundwaters lie. In fact, there may be several
groundwater basins layered at different depths, and each of these groundwater basins may
have a boundary that does not coincide with the boundary of the surface water basin under
which it is found. Accurate mapping of groundwater basins has the potential to bring about
changes in how we manage the withdrawal of groundwater as well as in how we manage the
interlinked surface waters. In any case, owing to the interconnection of surface water and
groundwater, whether water consumption is from the lakes, the tributaries, or groundwater
sources, the eventual physical impact on average lake levels is virtually identical.
Section 7 - Conservation
The first step in sound management of resources and the exercise of the precautionary
principle is conservation. Some consumption, of course, is essential to the functioning of
the human element of ecosystems. Currently, consumptive use in the Great Lakes Basin is
relatively small and is likely to experience only modest increases into the foreseeable
future. However, the cumulative impact of past activity and the likelihood of future
change will further stress the integrity of the Great Lakes ecosystem and its ability to
respond to change. Global warming will likely increase and will likely change patterns of
consumptive use; in particular, higher average temperatures in the Basin could result in
increased agricultural activity and water consumption in the longer term. Because of a
possible downward trend in net Basin supply in the 21st century, water-conservation and
demand-management practices should become increasingly important components of any overall
sustainable use strategy. Governments and citizens alike can best prepare for future
uncertainty and protect the health of the Great Lakes ecosystem by imbedding a robust
ethic of conservation into education and into every level of planning and execution.
Experience has shown that conserving water by using it more efficiently makes sound
economic and environmental sense in that infrastructure costs for water supply and
wastewater treatment are reduced, energy use is reduced, cost efficiencies are increased
by reducing the volumes of water and waste to be treated, resiliency of the ecosystem is
improved by reducing withdrawals, and exemplary behavior is demonstrated to others.
On a basin-wide scale, implementation of the Basin Water Resources Management
Program, to which the states and provinces are committed under the Great Lakes Charter, could provide the opportunity to launch a water-conservation initiative.
Sharing of conservation experiences among Basin jurisdictions should be an integral part
of the overall approach to cooperative programs and practices. Cooperating jurisdictions
may wish to adopt some common approaches, as appropriate, in their water-conservation
plans, including incentives to encourage water demand-management initiatives and the
installation of best practicable water-saving technology.
A 1999 report by the Organization for Economic Cooperation and Development (OECD)45 compares
water use in the European Union with water use in the United States and Canada and
indicates that there are opportunities to reduce waste and inefficient uses and to achieve
energy and infrastructure cost savings. The report notes that the United States and Canada
use (withdraw) nearly twice as much water per capita as the OECD average. Even taking into
account differences in economic structure and lifestyle between the United States and
Canada and other OECD countries, it would appear that improvements in water use could be
made by using appropriate, existing water-conservation and demand-management techniques.
Demand management shifts traditional thinking away from going after new water supplies to
more efficient use of the resource. Central to the concept of demand management is the
setting of prices in such a way that the amount of water used by any activity is a
function of price. Much can be done in many areas of the Basin to use water more
efficiently by such measures as adopting metering of all water facilities and moving more
assertively to recovering the full costs of providing water services.
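As a purely illustrative sketch of the demand-management idea that the amount of water used can be made a function of price, the toy model below assumes a constant price elasticity of demand. The elasticity value and all other numbers are hypothetical and are not taken from any study cited in this report.

```python
# Illustrative only: a constant-elasticity demand curve with invented numbers.
def water_demand(price, base_price=1.0, base_demand=100.0, elasticity=-0.35):
    """Daily demand (arbitrary units) under an assumed constant price elasticity."""
    return base_demand * (price / base_price) ** elasticity

for price in (1.0, 1.5, 2.0):
    print(f"price {price:.2f}: demand {water_demand(price):.1f}")
# With the assumed elasticity of -0.35, doubling the price reduces demand
# by roughly 22 percent in this toy example.
```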
During the public hearings the Commission held in September and October 1999, it was
suggested that the Commission should develop measurable targets for reducing water
withdrawals and consumptive losses and that it should recommend that Basin jurisdictions
adopt these targets. The Commission believes, however, that decisions on conservation
targets and the means for achieving them are better made at the local level, where the
real problems and opportunities lie and where results are more likely to be measurable.
This approach makes it possible to build on experience gained in the Basin and, at the
same time, allows for measures to be tailored to unique local situations. Mechanisms for
sharing conservation and demand-management experience should, in the Commission's view, be
an integral part of such programs as the Basin Water Resources Management Program under
the Great Lakes Charter.
Section 8 - Legal and Policy Considerations
Water management in the Great Lakes Basin is governed by a network of legal regimes,
including international instruments and customs, federal laws and regulations in both
Canada and the United States, the laws of the eight Great Lakes states and Ontario and
Quebec, and the rights of Aboriginal Peoples and Indian tribes under Canadian and U.S.
laws. This section is not intended to be a full discussion of all legal issues; rather, it
is intended to be an identification of aspects of the legal regime that bear most directly
on the issues raised in this report.
The International Legal Context
Boundary Waters Treaty. The Boundary Waters Treaty of 1909 is the primary
international legal instrument governing the use of the waters of the Great Lakes Basin.
The treaty established certain basic legal principles to deal with boundary and
transboundary waters and created the International Joint Commission to help implement
portions of the treaty. For over 90 years, the treaty has been effective in assisting
Canada and the United States to avoid and resolve disputes over freshwater.
Under the treaty, boundary waters (i.e., the waters along which the boundary passes) are
treated differently from transboundary rivers or tributaries. Thus, the treaty does not
deal with all waters of the Great Lakes Basin in the same way. With some exceptions,
Article III provides that the use, diversion, or obstruction of boundary waters must be
approved by the Commission if water levels or flows on the other side of the boundary are
to be affected. With respect to tributaries of boundary waters and transboundary rivers,
however, Article II states that each nation reserves "the exclusive jurisdiction and
control over [their] use and diversion." The treaty does not explicitly refer to groundwater.
The treaty also provides that the governments of the United States and Canada may refer
issues to the Commission to investigate and to make recommendations on, in order to help
the countries resolve and avoid disputes along the border. This provision of the treaty
has been used many times over the years to address water quality and water quantity issues
in the Great Lakes and elsewhere.
Great Lakes Charter. The 1985 Great Lakes Charter is an arrangement among the
Great Lakes states and the provinces of Ontario and Quebec. Although the Charter is not
binding, it focuses the Great Lakes states and provinces on a number of resource issues
and fosters cooperation among them. The Charter provides that the planning and management
of the water resources of the Great Lakes Basin should be founded upon the integrity of
the natural resources and ecosystem of the Great Lakes Basin. Moreover, the Charter
stipulates that the water resources of the Basin should be treated as a single hydrologic
system that transcends political boundaries in the Basin. New or increased major
diversions and consumptive use of the water resources of the Great Lakes are said to be
matters of serious concern, and the Charter states that "[it] is the intent of the
signatory states and provinces that diversions of Basin water resources will not be
allowed if individually or cumulatively they would have any significant adverse impacts on
lake levels, in-basin uses and the Great Lakes Ecosystem."
The Charter provides that no state or province will approve or permit any major new or
increased diversion or consumptive use of the water resources of the Great Lakes Basin
without notifying and consulting with and seeking the consent and concurrence of all
affected Great Lakes states and provinces. The trigger point for notification and for
seeking the consent and concurrence of other Great Lakes states and provinces is an
average use of 5 million gallons (19 million liters) per day in any 30-day period. In
order to participate in this notice and consultation process, jurisdictions must be in a
position to provide accurate and comparable information on water withdrawals in excess of
100,000 gallons (380,000 liters) per day in any 30-day period and must have authority to
manage and regulate water withdrawals involving a total diversion or consumptive use of
Great Lakes Basin water resources in excess of 2 million gallons (7.6 million liters) per
day average in any 30-day period.
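Restated purely for illustration, the Charter's trigger points amount to a simple decision rule, sketched below. This is not an official implementation; the constants reproduce the figures quoted above, and the reading of the 5-million-gallon trigger as "at or above" is an assumption.

```python
# Illustrative restatement of the Great Lakes Charter trigger points described above.
# Not an official implementation; thresholds are in U.S. gallons per day.
MILLION = 1_000_000

CONSULTATION_TRIGGER = 5 * MILLION   # prior notice, consultation, and consent-seeking
REPORTING_THRESHOLD = 100_000        # jurisdictions must be able to report withdrawals above this
MANAGEMENT_THRESHOLD = 2 * MILLION   # jurisdictions must be able to manage/regulate at this scale

def charter_requires_consultation(avg_gallons_per_day_over_30_days):
    """True if a new or increased diversion or consumptive use triggers Charter consultation."""
    return avg_gallons_per_day_over_30_days >= CONSULTATION_TRIGGER

print(charter_requires_consultation(6 * MILLION))  # True
print(charter_requires_consultation(1 * MILLION))  # False
```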
The Great Lakes Charter also records a commitment by the signatory states and provinces to
pursue the development and maintenance of a common base of data and information regarding
the use and management of Basin water resources, the establishment of systematic
arrangements for the exchange of water data and information, the creation of a Water
Resources Management Committee, the development of a Great Lakes Basin Water Resources
Management Program, and additional coordinated research efforts to provide improved
information for future water planning and management decisions. Although not fully
implemented, these commitments point toward the kind of cooperation and coordination that
is required in the future.
On October 15, 1999, the Great Lakes governors issued a statement renewing their
commitment to the principles contained in the Great Lakes Charter and pledged to develop a
new agreement, based on those principles, that would bind the states and provinces more
closely to collectively planning, managing, and making decisions regarding the protection
of the waters of the Great Lakes46. The
governors also pledged to develop a new common standard, based on the protection of the
integrity of the Great Lakes ecosystem, against which water projects will be reviewed.
International Trade Law. One issue raised by the governments in the Reference was
whether international trade obligations might affect water management in the Basin. To
address this issue, the Commission, with the assistance of the study team, reviewed the
relevant World Trade Organization (WTO) agreements, including the General Agreement on
Tariffs and Trade (GATT) as well as the Canada-United States Free Trade Agreement (FTA) and the Canada-United States-Mexico North American Free Trade Agreement
(NAFTA), and relevant case law. The Commission and its study team also consulted experts
in the field.
The Commission believes it is unlikely that water in its natural state (e.g., in a lake,
river, or aquifer) is included within the scope of any of these trade agreements since it
is not a product or good. This view is supported by the fact that the NAFTA parties have
issued a statement to this effect. When water is "captured" and enters into
commerce, it may, however, attract obligations under the GATT, the FTA, and the NAFTA.
The key GATT provision with possible significance for water exports is the prohibition of
quantitative restrictions in Article XI. The GATT, however, creates a number of
exceptions. Of these, the most relevant to trade in water would appear to be those related
to measures "necessary to protect human, animal, or plant life or health" (the
"health exception") or "relating to the conservation of exhaustible natural
resources if such measures are made effective in conjunction with restrictions on domestic
production or consumption" (the "conservation exception"). With respect to
the former, there has been some debate as to whether this provision should be read
broadly, so as to in effect create an "environmental" exception to the GATT, or
narrowly, so as to embrace essentially traditional concerns related to sanitary and
phytosanitary measures. With respect to the latter, there may be a question as to whether
water is an exhaustible natural resource, although this raises less of a problem in the
case of a discrete ecosystem such as the Great Lakes Basin, where only a small part of the
resource is replenished annually. Both exceptions are qualified by a requirement that they
"[not] be applied in a manner which would constitute a means of arbitrary or
unjustifiable discrimination between countries where the same conditions prevail, or a
disguised restriction on international trade."
Although dispute-settlement panels considering these GATT exceptions have affirmed, in
principle, that trade interests may have to give way to legitimate environmental concerns,
it is also true that the same panels have questioned very closely whether measures
nominally taken for environmental reasons have underlying protectionist elements. Clearly,
then, the achievement of a coherent and consistent approach to water conservation and
management in the Great Lakes Basin, an approach clearly grounded in environmental policy, would be an important step in addressing any trade-related concerns with
respect to the use of Basin waters.
The NAFTA trade obligations with respect to goods, while rooted in the GATT, appear to
constrain the availability of certain GATT exceptions, including the conservation exception, in some important ways, in effect making it more difficult to "turn
off the tap" once trade in water has been established. These constraints do not,
however, apply to the health exception, and the NAFTA wording of that exception
specifically provides that it is understood by the parties to include environmental
measures. NAFTA also makes provision for certain trade obligations in
environmental/conservation agreements to prevail in the event of a conflict. Finally, it
should be recalled that following the signing of NAFTA, the three parties issued a joint
declaration that NAFTA creates no rights to the natural water resources of any party; that
unless water, in any form, has entered into commerce and has become a good or product, it
is not covered by the provisions of any trade agreement, including NAFTA; and that
international rights and obligations respecting water in its natural state are contained
in separate treaties, such as the Boundary Waters Treaty, negotiated for that purpose.
Many people who made presentations during the Commission's hearings in September and
October 1999 believed that the NAFTA and WTO agreements could prevent or at least impede
the United States and Canada from prohibiting the export of Great Lakes waters and the
diversion of those waters. Several noted that to date, in all the cases before the WTO
involving issues of protecting environmental or natural resource interests, the WTO had
ruled against those interests. Some observed that the WTO decision-making process was not transparent.
Since issuing its Interim Report, the Commission has received a letter dated November 24,
1999, from the Deputy United States Trade Representative concerning the implications of
international trade agreements for the protection of the waters of the Great Lakes Basin.
A copy of this letter is attached (Appendix 8). The Commission has also received a
document entitled Bulk Water Removal and International Trade Considerations from the
Canadian Department of Foreign Affairs and International Trade (Appendix 9). These
submissions generally are consistent with the Commission's views regarding the effect of
international trade law on the ability of the two countries to protect the water resources
of the Great Lakes Basin.
The Commission also received legal opinions from several experts. The following points
synthesize the thrust of these opinions received and are intended to take into account the
uncertainties and the caution expressed with respect to international trade law. They are
similar to the views expressed by the Canadian and U.S. governments.
- The provisions of NAFTA and the WTO agreements do not prevent Canada and the United
States from taking measures to protect their water resources and preserve the integrity of
the Great Lakes Basin ecosystem where there is no discrimination by decision-makers
against individuals from other countries in the application of those measures.
- NAFTA and the WTO agreements do not constrain or affect the sovereign right of a
government to decide whether or not it will allow natural resources within its
jurisdiction to be exploited and, if a natural resource is allowed to be exploited, the
pace and manner of such exploitation.
- Moreover, even if there were sales or diversions of water from the Great Lakes Basin in
the past, governments could still decide not to allow new and additional sales or
diversions in the future.
- The NAFTA and WTO agreements contain provisions that prohibit export restrictions and
discrimination between nationals and foreigners who are entitled to national treatment
under those treaties. Sales of water that are allowed could not be restricted to the
domestic market unless they fit within the health and conservation exceptions referred to
above (i.e., restrictive measures would be necessary for the protection of human, animal,
or plant life or health or for the conservation of an exhaustible natural resource and are
not applied in a way that constitutes arbitrary or unjustifiable discrimination or a
disguised restriction of international trade). Recent decisions of the appellate body of
the WTO may raise concerns about the circumstances in which environmental measures will
meet the test of not constituting arbitrary or unjustifiable discrimination or a disguised
restriction of international trade, even though they may otherwise relate to the
conservation of an exhaustible natural resource or may be necessary for the protection of
life or health. The WTO decisions have tended to focus on whether measures are arbitrary
or discriminatory. In the light of these decisions, it appears that it would be desirable,
whenever possible, for environmental measures to be based on an international agreement or arrangement.
- If governments in Canada and the United States want to avoid falling within the
investment provisions of the NAFTA, they should avoid creating undue expectations by
clearly articulating their water-management policies in a fully transparent manner, by
acting in a manner that is entirely consistent with their stated policy, and by limiting
the time for which authorizations are valid. Moreover, the governments should make it
clear that authorizations do not give rise to any continuing entitlement or expectation on
the part of the holder of the authorization, that, if the holder of the authorization were
to reapply after the expiry of the authorization, there is no guarantee that that person
would be given treatment any more favorable than any other person who might apply, and
that it is within the government's jurisdiction to decide whether or not even to permit an
authorization to be issued again.
- Actions with respect to water diversions or sales that nationalize or expropriate an
investment of a foreigner may lead to a claim under Chapter 11 of NAFTA, which gives
private investors of one country the right to commence proceedings against another country
for injuries to the rights accorded private investors under the agreement. In all other
cases, claims under the WTO agreements or the NAFTA must be brought by a Party to the
agreement (i.e., by the government of one of the countries).
Other experts, while not suggesting international trade law made it impossible to
regulate exports of water, cautioned that trade law could make the process more complicated.
The Domestic Legal Context
In Canada. The constitutional underpinnings of Canadian water law are found in
the Constitution Act. Because water is not treated explicitly in that act, the respective
federal and provincial roles in water management can be found under a number of
constitutional headings that may be either legislative or proprietary in nature.
Federal legislative jurisdiction over water is rooted in several headings under the
Constitution Act. The most obvious are the specific federal responsibilities for
navigation and shipping and for sea coast and inland fisheries. Other headings, such as
trade and commerce, Indians and lands reserved for Indians, agriculture (a power exercised
concurrently with the provinces), criminal law (especially with respect to pollution), and
undertakings (including canals) connecting or extending beyond the limits of a province,
are also relevant. Two other more general grants of legislative authority are also
relevant. The first general grant is the power of the federal government to implement
treaties concluded by the British Empire on Canada's behalf. This power supports the
International Boundary Waters Treaty Act, but it has not been extended to treaties
concluded by Canada in its own right. The second general grant is the power to make laws
for the "peace, order and good government" of Canada. Although this power has
had a checkered history, it has been used to justify federal authority over marine dumping
within provincial waters, and it could take on significance with respect to issues such as
climate change that are determined to have a primarily national or international dimension.
On November 22, 1999, the Minister of Foreign Affairs introduced in the House of Commons
proposed amendments to the International Boundary Waters Treaty Act that, if enacted, will
impose a prohibition on removals of boundary waters from their water basins. The proposed
amendments also provide that the Governor in Council, on the recommendation of the
Minister of Foreign Affairs, may make regulations that create exceptions to this
prohibition. Moreover, the amendments will require persons to obtain a license from the
Minister of Foreign Affairs for the use, obstruction, or diversion of boundary waters in a
manner that in any way affects, or is likely to affect, the natural level or flow of
boundary waters on the other side of the international boundary. This licensing
requirement does not, however, apply to the ordinary use of waters for domestic or
sanitary purposes or in cases for which exceptions have been established by regulations.
According to the Canadian government, the recently introduced amendments to the
International Boundary Waters Treaty Act are part of its three-part strategy, announced on
February 10, 1999, to prohibit the removal of water (including removals for the purposes
of export) out of major Canadian water basins. The strategy includes the joint Reference
by Canada and the United States to the International Joint Commission on consumptive uses,
diversion, and removal of Great Lakes water. It also includes an effort by the Canadian
Minister of the Environment to seek the endorsement by provinces and territories of a
Canada-wide accord prohibiting bulk water removals to ensure that all of Canada's
watersheds are protected. This process continues.
Apart from its legislative powers, the federal government also exercises certain
proprietary rights that may involve a water-management role. These rights include
ownership of specified public works such as canals (and connected lands and water power),
public harbors, lighthouses and piers, river and lake improvements, lands set apart for
general public purposes, and national parks.
Although the federal government exercises jurisdiction over water management primarily
through its legislative authority under the Constitution Act, provinces also derive
important authority from their proprietary rights. The Constitution Act provides, with
limited exceptions, for provincial ownership of all public lands (including water). The
legislative powers of the provinces largely buttress their proprietary powers and include
authority with respect to management and sale of public lands, local works and
undertakings, property and civil rights in the province, and generally all matters of a
local or private nature.
There is no plenary federal legislation with respect to water. Historically, the primary
interest of the federal government in water management has been focused on its
constitutional responsibilities for fisheries (through the Fisheries Act), navigation
(through the Navigable Waters Protection Act), and international relations, although it
has in recent years taken a role in water quality, particularly with respect to toxic substances.
The most ambitious attempt by the federal government to legislate in a comprehensive
fashion with respect to water was the Canada Water Act of 1970. The act emphasizes
federalprovincial cooperation and includes provisions for unilateral federal action
on transboundary issues. In practice, however, the federal role envisaged in the act has
not been fully realized. The International Rivers Improvements Act also has potential
application to some water withdrawals with transboundary aspects. The act requires a
license for international river improvements. The definition of an international river is
very broad and would include, for example, a transboundary water pipeline.
The International Rivers Improvement Act is, however, subject to two important exceptions:
It does not apply to improvements situated within boundary waters as defined by the
Boundary Waters Treaty, nor does it apply to improvements "constructed, operated or
maintained solely for domestic, sanitary or irrigation purposes, or other similar
consumptive uses." In sum, as with other federal legislation, the act is not designed
to provide a general mechanism for dealing with water removals, and it would not even
apply to schemes that do not involve a physical "work" of some kind.
The Ontario Water Resources Act (OWRA) prohibits the withdrawal of more than 50,000 liters
(13,209 gal.) of water a day from a well or from surface waters without a permit.
Ontario's recently issued Water Taking and Transfer regulation, which took effect on April
30, 1999, among other things, prohibits the transfer of water out of the Great Lakes
Basin, subject to certain exceptions.
In Quebec, the Civil Code contains provisions concerning the use of water, including the
rights of riparian owners. Moreover, Quebec's Environmental Quality Act, which is
concerned primarily with contamination and withdrawals that have a significant effect on
the environment, imposes constraints on the use of water.
The Quebec Minister of the Environment introduced Bill 73 on October 21, 1999, in the
Quebec National Assembly, and it was assented to on November 26, 1999. The bill, a
proposal for a Water Resources Preservation Act, was put forward as an interim measure to
prevent adverse effects on the environment from water transfers outside Quebec prior to
completion of the public inquiry that is now underway regarding a framework for water
management. The Water Resources Preservation Act prohibits the transfer outside Quebec of
surface or groundwater taken in Quebec. Bill 73 does, however, provide exceptions for (1)
water to produce electric power, (2) water to be marketed for human consumption that is
packaged in Quebec in containers of 20 liters or less, (3) water to supply potable water
to establishments or dwellings situated "in a bordering zone," and (4) water to
supply vehicles. Moreover, the government may lift the prohibition on the grounds of
urgency, for humanitarian reasons, or for any other reason considered to be in the public interest.
In the United States. Congress has plenary power under the commerce clause of the
U.S. Constitution to regulate interstate commerce. This federal authority includes the
power to authorize and control the diversion of water from one navigable waterway to
another or from one watershed to another, and it also includes the power to authorize the
use of water for navigational purposes. The exercise of this Congressional power is as
broad as the needs of commerce. It extends to the use of water of a navigable stream for
the production of hydroelectric power and to the protection of navigable waters from
obstruction by out-of-basin diversions and from pollution.
The Great Lakes Basin Compact, which was agreed to by the eight Great Lakes states and
approved by the U.S. Congress in 1968 and which created the Great Lakes Commission,
provides, among other things, for joint or cooperative action to promote the orderly,
integrated, and comprehensive development, use, and conservation of the water resources of
the Great Lakes Basin and to plan for the welfare and development of these water resources.
The Water Resources Development Act of 1986 (WRDA) is a federal law that prohibits any
further diversion of water from any U.S. portion of the Great Lakes or their tributaries
for use outside the Basin unless such diversion is approved by the governors of all Great
Lakes states. It also prohibits federal studies of diversions without the concurrence of
the governors. The impetus for the Charter and for WRDA was the concern in the U.S.
portion of the Great Lakes Basin, in the early 1980s, that there would be major demands
for Great Lakes Basin water from the agricultural and energy sectors of the western and
southern United States.
The Commission received legal advice on issues related to the Commerce Clause of the U.S. Constitution:
- Under the Supreme Court doctrine known as the Dormant Commerce Clause Doctrine, federal
courts may invalidate state laws that either blatantly discriminate against interstate
commerce or unreasonably burden interstate commerce in other ways. Courts have
consistently applied this doctrine to invalidate state legislation that simply blocks the
flow of goods across state lines. On the other hand, they have also recognized that there
are times (for example, times of shortage) when a state may favor its own
citizens. There are also times when legitimate state interests may justify actions by
states that do affect interstate commerce. The more narrowly tailored any restraints on
commercialization can be, and the more targeted to preservation of ecological integrity,
the more likely the restraints are to be sustained against a Commerce Clause attack. How a
court will act in any given case will, of course, depend on the facts of that case.
- The Commission is not aware of any cases where the doctrine has been applied to waters
allocated by the doctrine of riparian rights, as are the Great Lakes in the United States,
or to interstate or boundary waters widely shared among basin states and a foreign nation.
Moreover, Congress has the power to authorize state legislation that would otherwise
violate the Dormant Commerce Clause Doctrine, and neither the Court nor commentators have
suggested any limitations on this power that would restrain Congressional approval of
Great Lakes protection efforts. It is very clear under the Commerce Clause cases that,
where Congress has authorized a restraint on trade, there is no Commerce Clause problem.
- The Water Resources Development Act of 1986, by not having standards, may run afoul of
the nondelegation doctrine. The U.S. Supreme Court has not, however, found an improper
delegation since 1935, and the Water Resources Development Act of 1986 could be upheld by
the Court finding appropriate standards in a variety of sources, such as practice and
existing arrangements, including the Great Lakes Charter. This issue could be addressed by
the creation of appropriate standards that were legally binding on the states.
Historically, surface water law in each of the Great Lakes states has been based on the
doctrine of riparian rights. Under this doctrine, the right to make reasonable use of
water in rivers and lakes was incidental to the ownership of land that abutted the water.
Leaving aside the relevant provisions of the Boundary Waters Treaty, this right could be
exercised even if it caused some diminution in the quantity or quality of the water
remaining in the river or lake. The riparian right was usually limited to the use of the
water on the riparian land and within the watershed of origin. Traditionally, the use of
groundwater was not similarly restricted. Each of the Great Lakes states has made
legislative changes to the legal regime over many years, to address specific needs in that
state. Changes range from collecting information regarding specific large uses to
requiring permits for withdrawals or consumptive uses above a certain amount. Although
there is no clear pattern to these legislative changes, they do provide different
approaches to achieve overall state water-management goals within a context of riparian rights.
With the signing of the Great Lakes Charter, each of the Great Lakes states found it
necessary to institute a legal regime for protecting the Great Lakes ecosystem. Different
states have adopted different statutes. Most state laws deal with water withdrawals in
general or with withdrawals in the context of Basin waters. Typically, the level of
withdrawal that triggers state permitting requirements is well below that which triggers
review under the Great Lakes Charter. Although some Basin states (Minnesota, New York, and
Wisconsin) include a statutory provision that specifically requires consultations with the
other Great Lakes states and provinces in the event of diversions from the Basin that fall
within the Charter's trigger provision of 5 million gallons (19 million liters) per day,
others have not provided for this explicitly.
Since the signing of the Great Lakes Charter and the adoption of the Water Resources
Development Act, several proposals for diversions of Great Lakes water have been
considered by the Great Lakes governors and premiers. These proposals include diversions
at Pleasant Prairie, Wisconsin, and at Akron, Ohio, which were approved, and at Lowell,
Indiana, which was denied. A proposal to divert water from the Crandon Mine to the
Wisconsin River was retracted without formal consideration by the Great Lakes governors. A
proposal to withdraw water from Lake Huron for the Mud Creek irrigation district in
Michigan, an increased consumptive use, went forward even though there were objections by
some Great Lakes jurisdictions. To date, the Mud Creek irrigation project has been the
only consumptive use proposal large enough to trigger the Charter requirement for notice,
consultation, and seeking the concurrence of all Great Lakes Basin jurisdictions.
Consequently, the Charter has not yet provided the impetus for an ongoing conversation
among the jurisdictions on the subject of consumptive uses.
The implementing resolutions for the Great Lakes Charter that were approved by the Great
Lakes governors and premiers in 1987 outlined a review process for diversion proposals. A
process has evolved for reviewing and approving diversions pursuant to the Charter and the
WRDA. A custom and usage has developed of requiring extensive information before a
diversion proposal can be approved. The states have also developed the practice of
employing the Charter procedures regarding consultation for diversion proposals covered by
WRDA that do not meet the Charter trigger point, so that the provinces are consulted
although they have no rights under WRDA.
The Commission notes that while WRDA offers the strength of mandatory review of all
proposed diversions, concern has been expressed by observers that WRDA applies only to
diversions in the United States, does not address consumptive use, contains no criteria
for the governors to use in considering proposals, contains no appeal procedure, and may
not cover groundwater.
Legislation was introduced in the U.S. Congress in 1999 to impose a moratorium on the
export of water from the U.S. portion of the Great Lakes and, in one case, from elsewhere
in the United States pending the development of agreed principles and procedures that
would protect the water resources of the Great Lakes Basin. To date, there has not been
final Congressional action on these legislative initiatives.
Aboriginal Peoples and Indian Tribes
In Canada, Aboriginal and treaty rights are recognized and affirmed by the Constitution
Act, 1982, although the specific nature and the extent of these rights have not yet been
determined. Aboriginal Peoples' interests in land are understood to be communal in nature,
involving rights of occupation as well as the use and benefit of resources. The extent to
which Aboriginal Peoples' interests extend to water and waterways may vary significantly
with the circumstances, including whether the particular interest has the status of a
treaty right. It is not clearly settled whether Aboriginal Peoples' interests in water are
riparian in nature. More generally, however, the federal government may have an obligation
to consult with Aboriginal Peoples, which is underpinned by its fiduciary duty toward them.
In the United States, the right of Indian tribes to the use of the waters of the Great
Lakes Basin has continued without significant challenge since the reservations were
established (late 1700s to the mid-1800s). Although litigation has occurred regarding the
existence and extent of tribal fishing rights in the Great Lakes, there does not appear to
have been any dispute over tribal use of water from the Great Lakes or its tributaries
flowing through or adjacent to the reservations.
During its recent hearings, the Commission received numerous submissions with respect to
the interest and involvement of Aboriginal Peoples and Indian tribes. These submissions
uniformly expressed opposition to exports or diversions from the Great Lakes Basin and
strongly urged the need to ensure opportunities for the participation of Aboriginal
Peoples and Indian tribes in decisions concerning the waters of the Great Lakes Basin.
During its hearings, the Commission was also requested to clarify the relationship between
international trade agreements and treaties with Aboriginal Peoples and Indian tribes. The
Commission is not, however, the appropriate forum in which to address this issue.
Section 9 - Next Steps
The Reference asks the Commission to report on additional work that may be required to
better understand the implications of consumption, diversions, and removal of
water (including removals for export from boundary waters outside the Great Lakes Basin, removals of waters of transboundary basins, and removals of groundwater of shared aquifers) and to prepare a plan proposing the phasing of such additional work.
The Commission's binational, interdisciplinary study team undertook a reconnaissance
survey of shared watersheds beyond the Great Lakes Basin to determine the availability of
water supply and consumptive use data and the availability of information on such matters
as diversions and other removals, bilateral agreements and arrangements with respect to
water quantity and quality issues, groundwater, and climate change. Based on this survey,
the study team identified the following areas of study in which further work could assist
in better understanding the implications of consumption, diversions, and removal of water.
It was, however, recognized that these areas of study may not be applicable in the same
way to all transboundary basins and that other issues may also deserve attention in some basins.
Water Supply and Consumptive Uses
- Reviews should be undertaken of balances between water supply and consumptive use in
major transboundary river basins.
- Transboundary basins in which water shortages may become a constraint on the health of
the economy or the environment should be identified.
- Analyses should be undertaken of factors that change balances between water supply and
consumptive use in transboundary basins.
Diversions and Other Removals
- The existence of inventories of diversions, other bulk removals, removals for bottling,
and exchanges of treated drinking water between border communities should be confirmed.
- Assessments should be undertaken of the probability of future proposals for diversions,
other bulk removals, additional removals for bottling, and exchanges for domestic purposes
between border communities.
- Assessments should be undertaken of the implications of existing and potential
diversions and other removals on shared groundwater resources, water balances, intangible
values (e.g., fish, wildlife, heritage, and recreation), and the rights of Aboriginal
Peoples and Indian tribes.
- Continuous monitoring should be maintained of any water removals from either country
outside the Great Lakes Basin and assessments should be made of their potential
implications in terms of removals from the Basin or other regions.
- Information should be assembled on current and probable future developments that are
likely to be influenced by, or to affect, transboundary water removals or water use.
- To the extent possible using existing data, descriptions of groundwater hydrology,
quality, and availability in shared basins should be prepared.
- To the extent possible using available data, current groundwater uses in the
transboundary region and factors likely to affect those uses in the future should be described.
- Medium- and long-term research priorities for groundwater management in the boundary
region should be identified.
- Existing climate change studies that may be applicable to relevant transboundary basins
should be reviewed.
- Appropriate hydrologic indicators (e.g., changes in mean and extreme flows and in
seasonal patterns of runoff) should be developed.
- Estimates should be prepared of potential impacts of climate change on social, economic,
and environmental interests in transboundary basins.
Transboundary Legal Regimes
- An assessment should be made of the effects of existing Canada-U.S. agreements on
water uses and diversions in shared river basins and on the sustainable use of shared
water resources.
- An assessment should be made of the effects of federal, provincial, and state legal
regimes on water uses and diversions in shared river basins and on the sustainable use of
shared water resources.
- An assessment should be made of the effects of interstate, interprovincial, and
state-provincial water-management arrangements on water uses and diversions in shared
river basins and on the sustainable use of shared water resources.
Binational Institutions and Arrangements
- Binational institutions and arrangements for water management in transboundary basins
should be identified.
- An assessment should be made of the adequacy of existing institutions and arrangements
in the light of the findings under the headings above.
- Situations in which there may be a need in the future to contemplate new or altered
binational water-apportionment arrangements should be identified.
- A synthesis should be prepared of findings in the above areas to provide governments
with a broad understanding of the implications of consumption and of diversions and other
removals in or from boundary and transboundary surface water and groundwater.
- Policy and legal concerns that the governments should consider addressing should be identified.
The Commission has consulted states and provinces along the border about the plans for
additional work on consumption, diversions, and removal of waters from boundary waters
outside the Great Lakes Basin, transboundary waters, and shared aquifers. In general,
jurisdictions appreciate the importance of these issues and appear to be prepared to share
existing information with the Commission. There were, however, different views about how
these issues should be addressed.
All western states expressed an interest in cooperating with the Commission on the study.
In some of the Canadian provinces, however, there was some concern that encompassing all
boundary and transboundary basins and shared aquifers in one sea-to-sea approach could, in
some cases, lead to an inappropriate linking of issues.
Manitoba supports the study and would like to participate in such a new or extended
Reference. Moreover, Manitoba officials consider that the Commission could be of
assistance in resolving binational water quantity issues that are on the horizon,
including the apportionment of water crossing the Manitoba-North Dakota portion of
the border. North Dakota also considers that the Commission could play a useful role in this regard.
Alberta and Saskatchewan officials expressed a number of concerns about the proposed
study. In their view, water quantity issues are well in hand in their areas, and they
consider that a new or extended Reference would not only duplicate work that is being done
by the Alberta and Saskatchewan governments, but would also confuse the public. Moreover,
they expressed concern that there could be inappropriate comparisons (e.g., comparisons
between eastern and western situations); that local issues could become linked, making
them more difficult to manage and resolve; and that a broad-brush study could reopen old
wounds for no apparent reason.
In the east, the Commission consulted with officials from New Hampshire, Vermont, and New
Brunswick. There was a general feeling among the participants that all major issues in the
boundary region were being addressed appropriately. In addition, budgets throughout the
region were extremely tight. Nevertheless, all three jurisdictions were willing to
contribute data and information and participate in Commission work under the current
reference to the extent that resources were available.
The Commission considers that further work in the areas the study team identified could
provide a better understanding of the implications of consumption, diversions, and removal
of water from boundary waters outside the Great Lakes Basin, from waters of transboundary
basins, and from groundwater of shared aquifers. Taking into account the views it has
received, the Commission believes that some issues, such as climate change and
groundwater research, should be addressed across the entire border region, and other
issues, such as water balances in the plains region and water apportionment in the
border region between Manitoba and North Dakota, should be focused at the regional
level.
This approach would allow efforts and resources to be focused on important concerns with
respect to consumption, diversions, and removal of water in the border region without
duplicating work that is being done in states and provinces. This would provide for work
to continue binationally, focusing on those priority issues that are not being addressed
elsewhere and on specific regional issues to which the Commission can contribute
binational experience and resources. In both instances, the Commission's involvement would
serve its traditional purpose of acting impartially in the common interest of both
countries to prevent and resolve differences.
Section 10 - Conclusions
The Commission was charged to provide recommendations to the governments concerning the
protection of the waters of the Great Lakes. In the course of developing these
recommendations, conducting its studies, and consulting with the public, the Commission
was able to draw several conclusions and to note matters it believes should be brought to
the attention of the governments at this time. The Commission was also able to identify
and build upon principles that would effectively lead to both the protection and the
enhancement of the Great Lakes ecosystem.
The Great Lakes Basin Ecosystem
1. The Great Lakes: A Critical Resource. Water is a critical resource that is
essential for all forms of life and for a broad range of economic and social activities.
The Great Lakes, sometimes referred to as North America's inland sea, are one of the
largest freshwater ecosystems in the world and support about 40 million people and a
diversity of biotic populations. Moreover, the lakes are a central feature of the natural
and cultural heritage of the Great Lakes region and the social and economic
interdependence of eight U.S. states and two Canadian provinces.
2. The Aquatic Ecosystem. The Great Lakes aquatic ecosystem is made up not only
of the lakes themselves, but also of the complex network of tributaries and groundwater on
which the lakes depend. Changes to the lakes, the tributaries, or the groundwater can
alter the balance of the ecosystem of the region in significant and sometimes
unpredictable ways. Measures aimed at protecting and conserving the waters of the Great
Lakes must cover the surface water of the lakes, connecting channels, tributaries, and
groundwater if they are to be effective.
3. Conservation. Conservation measures can and should minimize the amount of
water that is withdrawn and consumed in the Great Lakes Basin, and such measures must form
part of any effort to preserve the integrity of the waters of the Great Lakes Basin and
ensure the sustainability of those resources.
4. System Stress. Removals of water from the Great Lakes Basin reduce the
resilience of the system and its capacity to cope with future, unpredictable stresses. On
an average annual basis, less than 1 percent of the water in the Great Lakes
system, approximately 613 billion liters per day (162 billion gallons per day), is
renewable. Any water taken from the system has to be replaced in order to restore the
system's lost resilience. It is not possible at this time to identify with any confidence
all the adverse consequences of water removals so that these consequences could be
mitigated. The precautionary approach dictates that removals should not be authorized
unless it can be shown, with confidence, that they will not adversely affect the integrity
of the Great Lakes Basin ecosystem.
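As a purely illustrative check of the arithmetic behind these figures, the short Python sketch below converts the cited daily renewable supply from liters to U.S. gallons and expresses it as an annual share of total lake volume. The total-volume figure used (roughly 22,700 cubic kilometers) is an outside, commonly cited estimate and is an assumption for this illustration, not a number taken from this report.

    # Illustrative arithmetic check of the figures cited in Conclusion 4.
    LITERS_PER_US_GALLON = 3.78541
    RENEWABLE_L_PER_DAY = 613e9                  # "approximately 613 billion liters per day"

    # Unit conversion: liters per day -> U.S. gallons per day (should be about 162 billion)
    renewable_gal_per_day = RENEWABLE_L_PER_DAY / LITERS_PER_US_GALLON
    print(f"{renewable_gal_per_day / 1e9:.0f} billion gallons per day")

    # Annual renewable share of total volume, using an ASSUMED total of ~22,700 km^3
    ASSUMED_TOTAL_VOLUME_L = 22_700 * 1e12       # 1 km^3 = 1e12 liters
    annual_share = RENEWABLE_L_PER_DAY * 365 / ASSUMED_TOTAL_VOLUME_L
    print(f"annually renewable share: {annual_share:.2%}")   # just under 1 percent

Under that assumed total volume, the sketch reproduces both the 162-billion-gallon conversion and the less-than-1-percent figure quoted above.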
5. Climate Influences. Although the outflows from Lake Ontario and Lake Superior
are regulated, the levels of the lakes ultimately depend on climatic conditions that
cannot be controlled or even reliably predicted. It can, however, be expected that the
Great Lakes system will continue to experience periods of high and low precipitation and
therefore high and low levels and variable flows, which will be beneficial to some
interests and disruptive to others. As illustrated during 1998-99, when the level
of Lakes Michigan-Huron dropped 57 cm (22 in.) in 12 months, water levels can
change quickly over short periods in response to climate conditions.
6. Use of Great Lakes Water. If all interests in the Basin are considered, there
is never a "surplus" of water in the Great Lakes system; every drop of water has
several potential uses, and trade-offs must be made when, through human intervention,
waters are removed from the system. Environmental interests, for example, require
fluctuations between high and low levels to preserve diversity. Seemingly
"wasted", the infrequent very high waters do, in fact, serve a purpose by
inundating less frequently wetted areas and renewing habitat for their biotic occupants.
Major outflows from the Great Lakes provide needed freshwater input to fish populations as
far away as the Gulf of Maine.
7. Water Quality and Water Quantity. Water quantity and water quality are
inextricably linked. For most uses, quantity alone does not satisfy the demand. Since the
signing of the Great Lakes Water Quality Agreement, significant strides have been made
toward restoring and preserving the quality of water in the Great Lakes Basin. However, in
many areas, the restoration has not been complete and problems remain. In these
situations, this poor water quality impairs the potential uses of the waters of the Great
Lakes and constitutes a virtual "removal" of usable waters from the system.
8. Climate Change. Mounting evidence of the potential for climate change adds
uncertainty to the nature of future supplies to the Great Lakes and how the levels and
flows of the lakes will be affected. All climate models to date agree that there will be
some increase in temperature in North America. Although most models suggest that global
warming would lower Great Lakes levels and outflows, there is some limited new information
that suggests the possibility of a slight rise in water levels. There is information to
suggest that there could be more frequent and severe local weather events. Climate change
also has the potential to increase the demand for water, both inside and outside the Great Lakes Basin.
9. Future Demands. There is uncertainty not only with respect to water supplies
to the Great Lakes Basin, but also with respect to future demand for water within the
Basin. The use of water for irrigation is increasing in the Basin. Currently, however,
there is a trend to slower growth in water withdrawals in the Great Lakes region. This
trend is the result of conservation and environmental measures, shifts in resources from
the industrial sector to the service sector, and a decline in population growth, mainly in
the portion of the Basin that lies within the United States. Whether this trend will
continue cannot be predicted. Existing water use data, much of which is out of date, do
not provide a reliable basis from which to predict future demand, and withdrawals could
start to rise again with economic growth or climate change.
10. Diversions and Other Removals. Over the longer term, a number of factors may
affect the demand for water diversions and other bulk removals. Global population growth
or climate changes could result in requests for shipments of Great Lakes water to meet
short-term humanitarian needs. Geography and distance may reduce such demands as there are
more logical and more economical water sources closer to most areas of potential drought.
The United Nations advocates that the solution to future water crises rests with nations
learning to use water more efficiently, not in shipping freshwater around the world.
11. Potential Diversions. There are no active proposals for major diversion
projects either into or out of the Basin at the present time. There is little reason to
believe that such projects will become economically, environmentally, and socially
feasible in the foreseeable future. Although the Commission has not identified any
planning for or consideration of major diversions in areas outside the Basin, such
diversions cannot be entirely discounted. There are no active proposals for any smaller
diversions into or out of the Great Lakes Basin at this time, although growth trends would
indicate that such requests are likely from communities on or near the Great Lakes Basin.
12. Interruptions of Supply. Apart from the many engineering, economic,
environmental, and social obstacles to construction of large-scale diversions, and given
the variations in water levels and flows in the Great Lakes, it would be impossible for
the Great Lakes jurisdictions to guarantee an uninterruptible supply to any mega-removal.
Some interests in the Great Lakes Basin, such as riparian homeowners, might welcome a
means of removing water from the Basin during periods of extremely high levels. Most
interests, including in-stream interests, commercial navigation, and recreational boating,
would be adamantly opposed to such removals in periods of low levels. Diversions during
droughts would be difficult to interrupt because of the dependency that diversions create
among recipients. The Commission recognizes that once a diversion to a water-poor area is
permitted, it would be very difficult to shut it off at some time in the future.
13. Current Bulk Removals. There are not, at present, significant removals of
water from the Great Lakes Basin by truck. There is no trade in water from the Great Lakes
by marine tanker, although the Nova Group in 1998 did seek a permit to ship 600 million
liters (159 million gallons) of water annually from Lake Superior to Asia. Moreover,
despite the increase that has occurred in the market for bottled water, the volume of
water leaving the Great Lakes Basin in bottles is not significant (the amount of bottled
water presently imported into the Basin exceeds the amount leaving by a factor of 14). The
amount of ballast water currently leaving the Basin is not sufficient to cause damage to
the Basin ecosystem. There is nevertheless a need to monitor these activities and keep
them under review.
14. Groundwater. There is uncertainty and a lack of adequate data about
groundwater and use of groundwater in the Basin. Data on withdrawals vary in quality,
while data on consumption are extremely limited. It is estimated that about 5 percent of
all withdrawals in the Basin are from groundwater. Current estimates of consumption of
groundwater do not indicate that this consumption is a major factor with respect to Great
Lakes levels. Nevertheless, it is a matter of considerable importance to more than 20
percent of the Basin's human population and to the large biological community that rely on
groundwater and that can be significantly affected by local withdrawals. There is a
serious lack of information on groundwater in the Basin, and governments should undertake
the necessary research to meet this need. There is clear need for state, provincial, and
local government attention to the monitoring and regulation of groundwater withdrawals and
protection of groundwater recharge areas.
15. Human Interventions. Human activities beyond water removals and consumption
have had impacts on the natural environment of the Great Lakes ecosystem. Land use
changes, water pollution, regulation of lake levels, channel work for navigation,
construction of dams, other activities, and development of wetlands can affect water
levels, destroy habitat, and modify hydrologic regimes.
Great Lakes Basin Laws and Policies
16. Cooperative Efforts. The Great Lakes Basin extends across the boundary
between Canada and the United States and the borders of eight states and of the provinces
of Ontario and Quebec. None of these governments alone can regulate water in the entire
Basin. The Great Lakes are an integrated hydrologic system. When water is removed from the
Basin on one side of the international boundary by either consumptive use or removals, the
amount of water that is available on both sides is reduced. Measures to protect and
conserve the waters of the Great Lakes ecosystem must therefore be directed at the Basin
as a whole in order to be effective. This requires cooperation and coordination among the
governments with responsibilities in the Basin.
17. The Boundary Waters Treaty.
- At the international level, the waters of the Great Lakes are subject to the
requirements of the Boundary Waters Treaty, which has established a binational regime that
has been in place since 1909. The treaty requires, among other things, a special agreement
between the governments of Canada and the United States or approval of the International
Joint Commission for uses of boundary waters that affect levels or flows on the other side
of the border. It also provides that each country reserves exclusive jurisdiction and
control over tributaries of boundary waters.
- The Boundary Waters Treaty, after 90 years, continues to provide effective protection
for both countries from abuses to the waters of the Great Lakes Basin ecosystem. It
represents a proven regime for avoiding and resolving disputes that arise between Canada
and the United States over boundary waters and transboundary rivers.
- The Boundary Waters Treaty is buttressed by the Great Lakes Water Quality Agreement,
which the governments of Canada and the United States signed in 1978. The objective of
that agreement is to protect the physical, chemical, and biological integrity of the
waters of the Great Lakes Basin ecosystem.
18. The Dormant Commerce Clause Doctrine. In the United States, the Dormant
Commerce Clause Doctrine could be a constitutional restraint on state efforts, as opposed
to federal efforts, to protect the resources of the Great Lakes. However, it need not
prevent genuine, well-supported cooperative management and conservation and cooperation
among the Great Lakes states and provinces. The potential restraint is reduced
considerably if the states can agree on common standards for the use and protection of
Great Lakes waters and can coordinate their water-management programs with federal and provincial governments.
19. Great Lakes Charter.
- The Great Lakes Charter is an effective arrangement among the Great Lakes states and the
provinces of Ontario and Quebec. Although it is not legally binding, the Charter fosters
cooperation among the states and provinces on water resource issues and requires that the
states and provinces notify each other of major new or increased diversions or consumptive uses.
- The Great Lakes Charter's trigger amount for consideration of significant proposed new
diversions and consumptive use is too high to encourage the degree of consultation
regarding the use of Great Lakes water that is needed to assure the sustainable use of the resource.
- The Charter does not require the consent of all Great Lakes states and provinces before
allowing a new diversion or consumptive use to proceed, it does not establish standards
for when such consent should be given or withheld, and it does not provide for public
involvement during the consultation process.
20. Conservation Management. Conservation of water by using it more
efficiently makes sound economic and environmental sense. Little has been done by the
states and provinces to implement the conservation provisions of the Great Lakes Basin
Water Resources Management Program, to which they are committed under the Great Lakes
Charter. The states and provinces need to make a commitment to move forward vigorously
with conservation programs.
21. Data Monitoring and Collection. Sound management of the water resources of
the Great Lakes requires sound data about these resources. Although the Great Lakes
Charter provides a structure for the collection, analysis, and distribution of these data,
progress in the data management area has been very slow. The states and provinces have
failed to maintain adequate databases needed to make appropriate decisions concerning the
management of the waters of the Great Lakes Basin. In addition, current monitoring
arrangements are inadequate to support such decisions and to assess cumulative effects of
water use. The federal governments, the Great Lakes states, and the provinces are
underfunding data collection and management and, as a result, must use outdated and
inadequate information in their decision-making process. This calls into question the
soundness of governments' decisions. The uncertainty of future water supply makes adequate
data collection and management an absolute necessity.
22. Legal Limitations. There are now laws in both countries that, in different
ways, limit removals of water from the Great Lakes Basin. These laws, however, apply only
in the jurisdictions that enacted them; they can be changed by those jurisdictions at any
time and do not constitute a binational regime.
23. Trade Law. International trade law obligations, including the provisions
of the Canada-United States Free Trade Agreement (FTA), the North American Free Trade
Agreement (NAFTA), and World Trade Organization (WTO) agreements, including the General
Agreement on Tariffs and Trade (GATT), do not prevent Canada and the United States
from taking measures to protect their water resources and preserve the integrity of the
Great Lakes Basin ecosystem. Such measures are not prohibited so long as there is no
discrimination by decision makers against persons from other countries in their
application, and so long as water management policies are clearly articulated and
consistently implemented so that undue expectations are not created. Canada and the United
States cannot be compelled by trade laws to endanger the waters of the Great Lakes
ecosystem. The public, however, remains deeply concerned that international trade law
could affect the protection of these waters.
24. To ensure the protection and conservation of the waters of the Great Lakes, the
Commission concludes that the following principles should guide their management:
Integrity of the Ecosystem: The Great Lakes Basin is an integrated and fragile
ecosystem. Its surface and groundwater resources are part of a single hydrologic system
and should be dealt with as a unified whole in ways that take into account water quantity,
water quality, and ecosystem integrity.
The Precautionary Approach: Because there is uncertainty about the availability
of Great Lakes water in the future (in the light of previous variations in climatic
conditions as well as potential climate change, uncertainty about the demands that may be
placed on that water, uncertainty about the reliability of existing data, and uncertainty
about the extent to which removals and consumptive use harm, perhaps irreparably, the
integrity of the Basin ecosystem), caution should be used in managing water to protect
the resource for the future. There should be a bias in favor of retaining water in the
system and using it more efficiently and effectively.
Sustainability: Water and related resources of the Basin should be used and
managed to meet present needs, while not foreclosing options for future generations to
meet their cultural, economic, environmental, and social needs.
Water Conservation: There should be an obligation to apply the best conservation
and demand-management practices to reduce water use and consumptive losses and thus retain
water in the Basin.
Cooperation: Decisions regarding management of water resources must involve
cooperation among the two federal governments, the Great Lakes states and provinces, the
tribes and Aboriginal Peoples, the municipalities and regions, and the citizenry on both
sides of the boundary. The processes must be open to involvement and meaningful
participation by these governments, the stakeholders, and the public.
Existing Institutions: Existing institutions, processes, and legal
instruments, including the Boundary Waters Treaty, the International Joint Commission,
the Great Lakes Charter, the U.S. Water Resources Development Act, the Ontario Water
Taking and Transfer Regulation, and the Great Lakes Commission, have provided vehicles
to deal with water use issues. It is important to retain these strengths in any new
process. Moreover, it is important to continue to respect existing international
agreements and arrangements and the rights of tribes and Aboriginal Peoples.
Measurable Objectives, Sound Science, and Adaptive Management: Water resource
goals should, whenever possible, be established as measurable objectives that can be
assessed through open, objective, scientific studies that are subject to peer review.
Where information is incomplete, particularly with respect to emerging issues of concern,
decisions should be based on the precautionary approach and should take into account the
best available data, information, and knowledge, including cultural, economic,
environmental, and social values.
Fairness: The Great Lakes Basin community is broad, diverse, and interdependent.
Culturally and economically, it extends beyond the physical confines of the hydrologic
basin. It is important that programs designed to protect the ecological foundation of the
Basin community be, and be seen to be, fair to all those who use and contribute to the
Basin and are part of the community.
Section 11 - Recommendations
The following recommendations build upon the Boundary Waters Treaty, which provides the
principles and mechanisms to help prevent and resolve disputes (primarily those concerning
water quantity and water quality along the boundary between Canada and the United States),
and upon the Great Lakes Charter, which brings together the Great Lakes states and
provinces in a cooperative arrangement designed to protect the Great Lakes. They were
developed in accordance with the ecosystem approach adopted by the governments of Canada
and the United States in the Great Lakes Water Quality Agreement, the purpose of which is
to restore and maintain the chemical, physical, and biological integrity of the waters of
the Great Lakes Basin ecosystem. The Commission's recommendations have also been prepared
to support and enhance the economic and social well-being of the Great Lakes Basin
community and to ensure that the beneficial uses associated with ecosystem integrity are
sustained over the long term.
Recommendation I. Removals
Without prejudice to the authority of the federal governments of the United States and
Canada, the governments of the Great Lakes states and Ontario and Quebec should not permit
any proposal for removal of water from the Great Lakes Basin to proceed unless the
proponent can demonstrate that the removal would not endanger the integrity of the
ecosystem of the Great Lakes Basin and that:
- there are no practical alternatives for obtaining the water,
- full consideration has been given to the potential cumulative impacts of the proposed
removal, taking into account the possibility of similar proposals in the foreseeable future,
- effective conservation practices will be implemented in the place to which the water
would be sent,
- sound planning practices will be applied with respect to the proposed removal, and,
- there is no net loss to the area from which the water is taken and, in any event, there
is no greater than a 5 percent loss (the average loss of all consumptive uses within the
Great Lakes Basin); and the water is returned in a condition that, using the best
available technology, protects the quality of and prevents the introduction of alien
invasive species into the waters of the Great Lakes.
In reviewing proposals for removals of water from the Great Lakes to near-Basin
communities, consideration should be given to the possible interrelationships between
aquifers and ecosystems in the requesting communities and aquifers and ecosystems in the
Great Lakes Basin.
In implementing this recommendation, states and provinces shall ensure that the quality of
all water returned meets the objectives of the Great Lakes Water Quality Agreement.
At this time, removal from the Basin of water that is used for ballast or that is in
containers of 20 liters or less should be considered, prima facie, not to endanger the
integrity of the ecosystem of the Great Lakes. However, caution should be taken to
properly assess the possible significant local impacts of removals in containers.
Removal of water for short-term humanitarian purposes should be exempt from the above provisions.
The governments of Canada and the United States and the governments of the Great Lakes
states and Ontario and Quebec should notify each other of any proposals for the removal of
water from the Great Lakes Basin, except for removal of water that is used for ballast or
that is in containers of 20 liters or less.
Consultations regarding proposed removals should continue in accordance with the
procedures and processes that are evolving throughout the Great Lakes Basin and should be
coupled with additional opportunities for public involvement.
Any transboundary disagreements concerning any of the above matters that the affected
governments are not able to resolve may, as appropriate, be referred by the governments of
Canada or the United States to the International Joint Commission pursuant to Article IX
of the Boundary Waters Treaty.
Nothing in this recommendation alters rights or obligations under the Boundary Waters Treaty.
Recommendation II. Major New or Increased Consumptive Uses
To avoid endangering the integrity of the ecosystem of the Great Lakes Basin, and without
prejudice to the authority of the federal governments of the United States and Canada, the
governments of the Great Lakes states and Ontario and Quebec should not permit any
proposal for major new or increased consumptive use of water from the Great Lakes Basin to proceed unless:
- full consideration has been given to the potential cumulative impacts of the proposed
new or increased major consumptive use, taking into account the possibility of similar
proposals in the foreseeable future,
- effective conservation practices will be implemented in the requesting area, and,
- sound planning practices will be applied with respect to the proposed consumptive use.
In implementing this recommendation, states and provinces shall ensure that the quality
of all water returned meets the objectives of the Great Lakes Water Quality Agreement.
The governments of Canada and the United States and the governments of the Great Lakes
states and Ontario and Quebec should notify each other of any proposals for major new or
increased consumptive uses of water from the Great Lakes Basin.
Consultations regarding proposed major new or increased consumptive uses should continue
in accordance with the procedures and processes that are evolving throughout the Great
Lakes Basin and should be coupled with additional opportunities for public involvement.
Any transboundary disagreements concerning the above that the affected governments are not
able to resolve may, as appropriate, be referred by the governments of Canada or the
United States to the International Joint Commission pursuant to Article IX of the Boundary Waters Treaty.
Nothing in this recommendation alters rights or obligations under the Boundary Waters Treaty.
Recommendation III. Conservation
In order to avoid endangering the integrity of the ecosystem of the Great Lakes Basin, the
governments of the Great Lakes states and Ontario and Quebec should apply conservation
measures to significantly improve efficiencies in the use of water in the Great Lakes
Basin and should implement the conservation measures set out in this recommendation.
The governments of the Great Lakes states and Ontario and Quebec, in collaboration with
local authorities, should develop and launch a coordinated basin-wide water conservation
initiative, with quantified consumption reduction targets, specific target dates, and
monitoring of the achievement of targets, to protect the integrity of the Great Lakes
Basin ecosystem, and to take advantage of the other economic and environmental benefits
that normally flow from such measures.
In developing and implementing this initiative, the governments should, among other things, consider:
- state-of-the-art conservation and pollution-control technologies and practices,
- potential cumulative impacts,
- the application of sound planning practices,
- to the extent practicable, the setting of water prices at a level that will encourage conservation,
- conditioning financial help from governments for water and wastewater infrastructure on
the application of sound conservation practices,
- promotion of eco-efficient practices, especially in the industrial and agricultural sectors,
- establishment of effective leak detection and repair programs for water infrastructure
in all municipalities,
- the inclusion of strong performance and environmental standards and financial incentives
for water saving in contractual arrangements for delivery of water-related services,
whether public or private,
- the application of best practicable water-saving technologies in governmental facilities,
- sharing experiences with respect to the planning and implementation of conservation
policies and programs and the use of water-saving technologies, and,
- joint preparation of promotional and educational materials and publication of success
stories, including sponsoring conferences and workshops on water conservation, in
partnership with others.
Recommendation IV. Great Lakes Charter Standards
Without prejudice to the authority of the federal governments of the United States and
Canada, the Great Lakes States and Ontario and Quebec, in carrying out their
responsibilities under the Great Lakes Charter, should develop, within 24 months, with
full public involvement and in an open process, the standards and the procedures,
including the standards and the procedures in Recommendations I and II, that would be used
to make decisions concerning removals or major new or increased consumptive uses. Federal,
state, and provincial governments should not authorize or permit any new removals and
should exercise caution with respect to major new or increased consumptive use until such
standards have been promulgated or until 24 months have passed, whichever comes first.
Recommendation V. Existing Institutions and Mechanisms
To help ensure the effective, cooperative, and timely implementation of programs for the
sustainable use of the water resources of the Great Lakes Basin, governments should use
and build on existing institutions to implement the recommendations of this report. In
this regard, the governments of the states and the provinces should take action, with
respect to the implementation of the Great Lakes Charter, to:
- develop and implement, on an urgent basis, the Basin Water Resources Management Program,
- develop a broader range of consultation procedures than is currently called for in the
Charter to assure that significant effects of proposed uses of water resources in the
Great Lakes Basin are assessed, and,
- ensure that the notice and consultation process under the Charter is open and
transparent and that there is adequate consultation with the public.
Recommendation VI. Data and Research
Federal, state, and provincial governments should move quickly to remedy water use data deficiencies by:
- allocating sufficient staff and financial resources to upgrade the timeliness,
precision, and accuracy of water use data,
- working much closer together to ensure consistency in water use monitoring, estimation
techniques, and reporting,
- emphasizing and supporting the development and maintenance of a common base of data and
information regarding the use and management of the water resources of the Great Lakes
Basin, establishing systematic arrangements for the exchange of water data and
information, and undertaking coordinated research efforts to provide improved information
for future water planning and management decisions.
Furthermore, governments should immediately take steps to ensure that, on a binational
basis, research is coordinated on individual and cumulative impacts of water withdrawals
on the integrity of the Great Lakes Basin ecosystem. In support of their decision-making,
governments should implement long-term monitoring programs capable of detecting threats
(including cumulative threats) to ecosystem integrity. Such monitoring programs should be
comprehensive, particularly in their approaches to detecting threats to ecosystem
integrity at a spectrum of space and time scales.
As part of an anticipatory policy for identifying emerging issues, governments should, on
a binational basis, undertake more active science and research and, in particular, should
implement appropriate long-term monitoring programs for key indicators of ecosystem integrity.
Recommendation VII. Groundwater
Governments should immediately take steps to enhance groundwater research in order to
better understand the role of groundwater in the Great Lakes Basin. In particular, they
should conduct research related to:
- unified, consistent mapping of boundary and transboundary hydrogeological units,
- a comprehensive description of the role of groundwater in supporting ecological systems,
- improved estimates that reliably reflect the true level and extent of consumptive use,
- simplified methods of identifying large groundwater withdrawals near boundaries of
- effects of land-use changes and population growth on groundwater availability and quality,
- groundwater discharge to surface water streams and to the Great Lakes, and systematic
estimation of natural recharge areas, and,
- systematic monitoring and tracking of the use of water-taking permits, especially for
bottled water operations.
In recognition of the frequent and pervasive interaction between groundwater and
surface water and the virtual impossibility of distinguishing between them in some
instances, governments should apply the precautionary principle with respect to removals
and consumptive use of groundwater in the Basin.
Recommendation VIII. Climate Change
Recognizing that the Intergovernmental Panel on Climate Change has concluded that human
activities are having a discernible effect on global climate, and despite the
uncertainties associated with the modeling of future climate, the governments of Canada
and the United States should fully implement their international commitments to reduce
greenhouse gas emissions.
Recommendation IX. Trade Law
The governments of the United States and Canada should direct more effort to allaying the
public's concern that international trade law obligations could prevent Canada and the
United States from taking measures to protect waters in the boundary region, and they also
need to direct more effort to bringing greater clarity and consensus to the issue.
Recommendation X. Standing Reference
The Commission should be given a standing reference to review its recommendations for the
protection of the waters of the Great Lakes in three years and thereafter at 10-year
intervals unless conditions dictate a more frequent review.
Recommendation XI. Next Steps
The Commission recommends that the governments consider for adoption the proposed plan of
work for Commission activities on the rest of the border, focusing on priority issues and
on specific regional issues where the Commission can contribute binational experience and resources.
Recommendation XII. Implementation
The Commission recommends that the governments of the United States and Canada and the
governments of the Great Lakes states and Ontario and Quebec, acting individually or
collectively, as appropriate, take the necessary steps to implement the recommendations
contained in this report.
Submitted to governments on February 22, 2000
L. H. Legault, Chairman
Thomas L. Baldini, Chairman
C. Francis Murphy, Commissioner
Susan B. Bayh, Commissioner
Robert Gourd, Commissioner
Alice Chamberlin, Commissioner
Actinopterygii (ray-finned fishes) > Acipenseriformes
(Sturgeons and paddlefishes) > Acipenseridae
(Sturgeons) > Acipenserinae
Etymology: Acipenser: Latin, acipenser = sturgeon, 1853 (Ref. 45335). More on author: Pallas.
Environment / Climate / Range
Marine; freshwater; brackish; demersal; anadromous (Ref. 51243); depth range 10 - 100 m. Temperate; 10°C - 20°C (Ref. 2059); 61°N - 36°N, 22°E - 54°E
Length at first maturity / Size / Weight / Age
Maturity: Lm ?, range 120 - ? cm
Max length : 220 cm TL male/unsexed; (Ref. 9988); common length : 125 cm TL male/unsexed; (Ref. 3397); max. published weight: 80.0 kg (Ref. 9988); max. reported age: 27 years (Ref. 6866)
Morphology | Morphometrics
soft rays: 24 - 29. Snout long, pointed at tip. Lower lip not continuous, interrupted at center. Barbels short not reaching mouth but nearer to it than to tip of snout. Five rows of scutes, dorsal 11-14, lateral 30-36 on each side, ventral 10-11 on each side, with small bony stellate plates and smaller grains between main scute rows. Back dark grey to almost black, flanks lighter, belly white.
Eurasia: Caspian, Black, Azov and Aegean Seas, ascending rivers to spawn. Occurrence in Albania needs confirmation. Introduced in Aral Sea. Artificially propagated (Ref. 6866). Appendix III of the Bern Convention (protected fauna). International trade restricted (CITES II, since 1.4.98; CMS Appendix II).
At the sea, it occurs in coastal and estuarine zones and forages on the bottom mostly on clayey sand and intensively in the middle and upper water layers (Ref. 59043). Found mainly near shore over sand and mud, stays at the bottom during the day and rises to the surface to feed at night. Feeds mainly on fish, also mollusks, crustaceans and worms (Ref. 3193). Spawns in strong-current habitats in main course of large and deep rivers, on stone or gravel bottom. Spawning also takes place on flooded river banks and if gravel bottom is not available, on sand or sandy clay. Juveniles stay in shallow riverine habitats during first summer (Ref. 59043). One of the three most important species for caviar; also utilized fresh and frozen; eaten pan-fried, broiled and baked (Ref. 9988). Overfishing at the sea for meat and caviar will soon cause extinction of the natural populations and their survival can only depend on stocking (Ref. 59043).
Bauchot, M.-L., 1987. Poissons osseux. p. 891-1421. In W. Fischer, M.L. Bauchot and M. Schneider (eds.) Fiches FAO d'identification pour les besoins de la pêche. (rev. 1). Méditerranée et mer Noire. Zone de pêche 37. Vol. II. Commission des Communautés Européennes and FAO, Rome.
IUCN Red List Status (Ref. 90363)
Threat to humans
Human uses
Fisheries: commercial; aquaculture: commercial; aquarium: public aquariums
Estimates of some properties based on empirical models
Phylogenetic diversity index (Ref. 82805) = 0.5000 [Uniqueness, from 0.5 = low to 2.0 = high].
Bayesian length-weight: a=0.00492 (-0.11746 - 0.12730), b=3.10 (3.03 - 3.17), based on LWR estimates for species & genus-BS (Ref. 93245).
Trophic Level (Ref. 69278): 3.1 ±0.3 se; based on diet studies.
Resilience (Ref. 69278): Low, minimum population doubling time 4.5 - 14 years (K=0.06; tm=9; tmax=27; Fec=20,000-360,000).
Vulnerability (Ref. 59153): High vulnerability (64 of 100).
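The Bayesian length-weight entry above refers to the standard allometric model W = a x L^b. The sketch below simply evaluates that relationship for the lengths reported in this entry; the unit convention (length in cm total length, weight in grams) follows the way these parameters are normally reported and is an assumption here rather than a statement from this page.

    # Minimal sketch of the length-weight relationship W = a * L^b implied by the
    # Bayesian estimates above (a = 0.00492, b = 3.10).
    # Units assumed: L in cm total length, W in grams.
    A = 0.00492
    B = 3.10

    def estimated_weight_g(length_cm: float) -> float:
        """Estimated body weight in grams for a given total length in cm."""
        return A * length_cm ** B

    for length_cm in (125, 220):      # common length and max length reported above
        weight_kg = estimated_weight_g(length_cm) / 1000
        print(f"L = {length_cm} cm TL -> roughly {weight_kg:.0f} kg")

For the 220 cm maximum length this gives on the order of 90 kg, broadly consistent with the 80 kg maximum published weight listed above.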
Heart failure, which affects about 5 million Americans, is a chronic, progressive disease with high morbidity and mortality, and a staggering cost.1 The American College of Cardiology and American Heart Association recently updated their guidelines on the diagnosis and management of heart failure and reduced left ventricular ejection fraction in adults. In this article, we'll review the previous guidelines, published in 2005, and then describe the changes in the focused update.
The 2005 guidelines presented a new staging system in heart failure development that emphasized the progressive nature of heart failure (see Staging heart failure).2 The new staging system also included recommended therapies by stage. Stages A and B recommendations target early identification and treatment, including risk factors for heart failure development. The use of drugs such as angiotensin-converting enzyme (ACE) inhibitors or angiotensin receptor blockers and beta-blockers became part of the standard of care for heart failure in appropriate patients because of landmark outcomes studies.3-7 To prevent sudden cardiac death, implanted device therapy, such as biventricular pacing or implantable defibrillators, was recommended in selected patients.8,9
The 2005 guidelines included recommendations for both patients with heart failure with reduced left ventricular ejection fraction, and patients with heart failure with preserved left ventricular ejection fraction. The recommendations for patients with preserved left ventricular ejection fraction remain unchanged: control of systolic and diastolic BP in accordance with published guidelines, restoring and maintaining sinus rhythm in patients with atrial fibrillation (AF), and diuretic therapy to minimize signs and symptoms of heart failure in patients with evidence of fluid retention.
End-of-life considerations for patients with refractory end-stage heart failure were discussed in the 2005 guidelines. These interventions included ongoing discussions of the patient's prognosis, options for developing and implementing advance directives, and the role of palliative and hospice care services. The guidelines also recommended discussing deactivation of implantable cardioverter defibrillators (ICDs).
The updated guidelines separate the treatment recommendations for outpatients from the treatment recommendations for patients with acutely decompensated heart failure requiring hospitalization.10
For outpatients, the guidelines clarify the roles of the New York Heart Association (NYHA) functional classification system, B-type natriuretic peptide (BNP), and N-terminal pro-BNP (NT-proBNP) in assessing patients with heart failure. Although the NYHA classification system is the most widely used tool to assess a patient's functional capacity, it's limited by its subjectivity. Tools such as measuring the distance a patient can walk in 6 minutes, and maximal exercise testing provide objective assessment of functional capacity.
The 6-minute walk test may assess the patient's functional limitation and provide prognostic value. Maximal exercise testing with a peak oxygen uptake measurement can be used to determine the patient's disability and to help formulate exercise prescriptions, and may also help identify patients who need cardiac transplantation.
BNP and NT-proBNP are released from the heart in response to increased volume and pressure, and are associated with reduced left ventricular ejection fraction, left ventricular hypertrophy, and acute myocardial infarction (MI) and ischemia. Elevated levels of these natriuretic peptides also can occur with pulmonary embolism and chronic obstructive pulmonary disease. Serum BNP levels are associated with the severity of heart failure, but factors such as age, gender, weight, and renal function also affect BNP levels. Although evidence has shown that BNP (or NT-proBNP) can provide prognostic information, using it to guide therapy hasn't been shown to improve outcomes. However, BNP can be useful in risk stratification in both systolic and diastolic dysfunction, and in evaluating patients in urgent care settings when the clinical diagnosis of heart failure is uncertain.
The only change to outpatient treatment recommendations in the new guidelines is for patients with reduced left ventricular systolic function. For these patients, the use of ACE inhibitors or angiotensin receptor blockers and beta-blockers remains unchanged in the update. The updated guidelines recommend using a combination of hydralazine and isosorbide for African American patients with moderate-to-severe heart failure symptoms on optimal drug therapy; this has been elevated to a Class I recommendation.
To clarify previous recommendations (and be consistent with the Heart Rhythm Society's 2008 guidelines), the updated guidelines recommend that a patient have an ejection fraction of 35% or less to be considered for an ICD for primary prevention of sudden cardiac death.
For patients with AF and heart failure, a rhythm or rate control strategy can be pursued, per the guidelines. The guidelines don't recommend the routine use of intermittent infusions of vasoactive and positive inotropic agents in patients with refractory end-stage heart failure.
Heart failure is associated with high morbidity and a high hospital readmission rate. The most common reasons for hospitalization include acute volume overload, profound depression of cardiac output (hypoperfusion), and signs and symptoms of shock and fluid overload. The following are Class I recommendations-these treatments should be performed or administered because the benefits outweigh the risks for these patients.
* Assessment and diagnosis. The diagnosis of heart failure should be primarily based on the patient's signs and symptoms, according to the guidelines (see Assessing a patient for heart failure). Clinicians should assess and document the patient's volume status, adequacy of systemic perfusion, and presence of precipitating factors and comorbidities.
Identifying the precipitating factor or factors is important to guide therapy. Common factors that can precipitate decompensated heart failure include: acute myocardial ischemia; uncorrected hypertension; AF or other dysrhythmias; nonadherence to medications or to sodium or fluid restriction; worsening renal function; recent addition of negative inotropic drugs, such as verapamil and diltiazem; nonsteroidal anti-inflammatory drugs; concurrent infections such as pneumonia; pulmonary embolus; excessive use of alcohol or illicit drugs; and endocrine abnormalities such as hyperthyroidism.
The patient's plasma BNP or NT-proBNP should be measured if the diagnosis of heart failure is uncertain, for example in patients with dyspnea in which the contribution of heart failure isn't known. Acute coronary syndromes can be ruled out by cardiac troponin levels and an ECG. Obtaining an echocardiogram may be helpful (particularly in patients with new-onset heart failure) but shouldn't delay treatment.
* Early therapy. Administer oxygen therapy to relieve signs and symptoms related to hypoxemia. Start I.V. loop diuretics, as prescribed, in the ED or outpatient clinic without delay as soon as fluid overload is identified. If diuresis is inadequate and the patient's congestion isn't relieved, an intensified diuretic regimen may be needed. This regimen consists of either a higher dose of the loop diuretic, addition of a second diuretic, or a continuous infusion of a loop diuretic.
Patients with clinical evidence of hypoperfusion (as manifested by decreased urine output and signs and symptoms of shock) and elevated cardiac filling pressures, such as elevated jugular venous pressure, should be given I.V. inotropes or vasopressors to maintain systemic perfusion and preserve end-organ function.
* Ongoing therapy and patient teaching. After the I.V. diuretics, inotropes, and vasopressors are discontinued, the patient should be started on a low- dose beta-blocker if he's hemodynamically stable. Long-term oral maintenance therapy should consist of an ACE inhibitor or angiotensin receptor blocker and beta-blockers, and should be continued if the patient is hemodynamically stable and has no other contraindications. Long-term maintenance therapy should be started before discharge if the patient isn't already on these medications.
Before the patient is discharged, provide comprehensive written discharge instructions for the patient and his caregivers. These instructions should focus on the six aspects of care: diet, discharge medications (with emphasis on adherence), activity level, follow-up appointments, daily weight monitoring, and what to do if signs and symptoms of heart failure worsen.
The following interventions are considered reasonable for patients with acute decompensated heart failure:
* Urgent cardiac catheterization and revascularization for patients with acute decompensated heart failure and known or suspected acute myocardial ischemia due to occlusive coronary disease. Cardiac catheterization and revascularization is considered reasonable especially if the patient has signs and symptoms of inadequate systemic perfusion and the interventions are likely to prolong meaningful survival.
* If the adequacy of the patient's cardiac function can't be determined by clinical assessment, invasive hemodynamic monitoring can be used to guide therapy. Invasive monitoring also may be helpful in carefully selected patients with persistent symptoms despite empiric adjustments of standard therapies.
* Vasodilators such as I.V. nitroglycerine, nitroprusside, or nesiritide, may be helpful when given with diuretics or if patients don't respond to diuretics alone.
* I.V. inotropic agents such as dopamine, dobutamine, or milrinone, for patients who have hypotension and evidence of low cardiac output (with or without congestion).
* Ultrafiltration or another renal replacement strategy may be reasonable for patients when diuretic therapy isn't successful. Because ultrafiltration removes more sodium than diuretics, the healthcare provider should consult with a renal specialist before using a mechanical strategy for diuresis.
The following interventions aren't recommended for routine therapy because they aren't helpful and may be harmful: using parenteral inotropes in normotensive patients who lack evidence of decreased organ perfusion, and using routine invasive hemodynamic monitoring in normotensive patients with acute decompensated heart failure and congestion who have obtained symptomatic relief from diuretics and vasodilators.
Guidelines can help healthcare providers make clinical decisions, but patient care should always be individualized and based on the patient's clinical status. By understanding the latest guidelines, you can provide your patients with care based on the latest and best available evidence.
Stage A: Patient has no structural heart disease or symptoms of heart failure, but is at high risk for heart failure because of the following risk factors: hypertension, atherosclerotic disease, diabetes, obesity, metabolic syndrome, cardiotoxin use, or a family history of cardiomyopathy.
Stage B: Patient has structural heart disease but no signs or symptoms of heart failure. Examples of structural heart disease include a history of MI, left ventricular remodeling (including left ventricular hypertrophy and low ejection fraction), or asymptomatic valvular disease.
Stage C: Patient has structural heart disease with previous or current symptoms of heart failure, such as shortness of breath and fatigue, or reduced exercise tolerance.
Stage D: Patient has refractory heart failure requiring specialized interventions. The patient has marked symptoms at rest despite maximal medical therapy.
The section in boldfaced italics is the 2009 update, a modification of the 2005 recommendation. Class I treatments should be performed because the benefits outweigh the risks. Class II treatments can be reasonable (Class IIa) or may be considered (Class IIb) for the patient. Class III treatments shouldn't be done because they aren't helpful and may be harmful.
* Thorough history and physical exam, including current or past use of alcohol, illicit drugs, chemotherapy, or alternative therapies, to identify cardiac and noncardiac causes of heart failure. (Level of Evidence [LOE] C: consensus of expert opinions, case studies, or standard of care from a very limited population.)
* Assessment of orthostatic BP changes, volume status, measurement of body mass index, and ability to perform routine and desired activities of daily living. (LOE C)
* Complete blood cell count, urinalysis, serum electrolytes, lipid profile, hepatic profile, thyroid function tests, 12-lead ECG, chest X-ray, two-dimensional echocardiography with Doppler. (LOE C)
* Coronary angiography for inpatients with angina or significant ischemia, unless they're ineligible for revascularization. (LOE B: single randomized study or nonrandomized studies with limited populations.)
* Coronary angiography in patients presenting with heart failure and chest pain or who have known or suspected coronary artery disease (CAD) without angina, unless they're ineligible for revascularization. (LOE C)
* Noninvasive imaging to detect myocardial ischemia and viability in patients with known CAD and no angina unless they're ineligible for revascularization. (LOE C)
* Maximal exercise testing with or without measurement of respiratory gas exchange or blood oxygen saturation, to determine whether heart failure is the cause of exercise limitation. (LOE C)
* Screening for other causes, such as hemochromatosis, sleep-disordered breathing, human immunodeficiency virus (HIV) infection, rheumatologic disease, amyloidosis, or pheochromocytoma; endomyocardial biopsy when a specific diagnosis is suspected. (LOE C)
* Maximal exercise testing with measurement of respiratory gas exchange or blood oxygen saturation to identify patients who are candidates for cardiac transplantation or other advanced treatments. (LOE B)
* Measurement of natriuretic peptides (BNP or NT-proBNP) in the urgent care setting can be useful in risk stratification. (LOE A: multiple randomized clinical studies with multiple populations.)
* Noninvasive imaging to determine likelihood of CAD. (LOE C)
* Holter monitoring in patients with a history of MI and those being considered for electrophysiologic study. (LOE C)
* Endomyocardial biopsy, signal-averaged electrocardiography, and measurements of neurohormones aren't recommended for routine initial evaluation. (LOE C)
* Careful history of current use of alcohol, tobacco, illicit drugs, chemotherapy, and alternative therapies. (LOE C)
* Assessment of the patient's volume status, weight, diet, sodium intake, and ability to perform routine and desired activities of daily living. (LOE C)
* Repeated measurement of ejection fraction in patients whose clinical status has changed or who have improved from a clinical event or received treatment that might have had a significant effect. (LOE C)
* Serial measurement of BNP. (LOE C)
Blues Music Buying Guide
The Blues, as a musical genre, encompasses countless subgenres and a variety of musical elements, all of which have developed regionally over the course of the past hundred years of recording history, as well as countless generations before that. As with any folk tradition, characteristics of the blues are as varied as the locales in which they were formed. The basic components of blues music encourage artists to incorporate extensive musical and lyrical improvisations, which then spawn wholly original new forms out of the traditional structures. The songs were mostly played on guitar, banjo, or piano, often using simple three-chord progressions.
In approaching blues music, one must be made aware of the following:
- As one of America's distinct traditional art forms, many blues songs were handed down orally over many generations. The early recordings of blues music represent only a truncated portion of the genre's actual musical history.
- Before the 1960s, the blues were almost never popular enough to warrant an LP record, and so small 78 rpm singles were the only option. Unfortunately, many of these early recordings were made cheaply and did not survive long enough to be remastered. Thus you will find that most recordings of early blues artists from that era have been reformatted into large collections.
- When listening to these artists in collection form, one must also remember that these songs were in most instances recorded over the course of decades, and not in distinct periods like most modern recording artists. Thus, each song must be taken as a work in and of itself, and most importantly, in its historical context.
With so many subgenres, the elementary blues enthusiast may find a complete list encompassing all the subgenres daunting. Thus, this article attempts to highlight the main or archetypal blues subgenres that also give a sense of the progression of styles over time.
Originating as we know it in the Delta region of Mississippi, Delta Blues is the main archetypal style of early blues. Usually heard with an acoustic guitar, banjo, or piano, this form was usually recorded and practiced solo, with the musician accompanying his forceful playing with passionate and often raunchy lyricism. Major artists include Mississippi Fred McDowell, Son House, Bukka White, Robert Johnson, and Charley Patton.
Major albums to consider:
With the onslaught of the Great Depression, poor Southerners migrated from the plantations into the northern cities to find work in the industrial sector. As Bruce Eder notes, "...as the Black populations in Chicago, New York, and other northern cities gradually surged, the audience for the blues changed. A new, more sophisticated brand of blues, akin to big-band jazz, began to manifest itself alongside acoustic country blues. By the middle of the 1930s, rural acoustic blues was on the decline, along with the economic situation of its audience." (Beginner's Guide and History -- How to Listen to the Blues...Bruce Eder). The resulting style became known as Chicago Blues.
Musicians from the plantations, like Muddy Waters, came to northern cities and adapted their music to the new lifestyle. Muddy Waters was especially important in this transition, as he is considered to have been one of the original blues artists to evolve acoustic Delta blues into a small-band context with aggressive, electric instrumentation. From this point on, Rock & Roll and Rhythm and Blues emerged as offshoots of the blues tradition and became the most popular musical genres of the 1950s. However, soon the Chicago Blues style would become a major source of fascination as the youth of America and England became interested in the source of the contemporary popular styles.
Major artists from the Chicago style include Sonny Boy Williamson, Big Bill Broonzy, and Tampa Red as the original post-war kingpins, along with Elmore James, Little Walter, Muddy Waters, B.B. King, Howlin' Wolf, Buddy Guy, and Willie Dixon as later masters and revolutionaries.
Major Albums to consider:
Characterized by a more relaxed, swinging feel, Texas blues followed the trends of other subgenres, such as the acoustic sensibilities of the Delta blues, followed by the postwar electric Chicago style. The Texas subgenre, like Chicago, similarly illustrates the progression from acoustic to electric, though with a different regional twist. The main proponents include Lightnin' Hopkins, Stevie Ray Vaughan, Blind Lemon Jefferson, John Lee Hooker, T-Bone Walker, and Clarence "Gatemouth" Brown.
Major albums to consider:
Reawakening: Folk Revival and British Blues
With the steady success of Rock & Roll and R&B in the 1950s and 60s, a new generation began searching for the roots of these styles. The folk revival of the 1960s, and the British Blues-Rock phenomenon both emerged as a response to the growing interest and enthusiasm concerning the origins of American Rock and R&B.
At events such as the Newport Folk Festival, forgotten bluesmen, who had faded into obscurity and who had never thought their music would be performed again, were suddenly "rediscovered" to perform in front of major audiences. Musicians like Mississippi Fred McDowell, Skip James, Furry Lewis, Bukka White, and John Lee Hooker were favourites among revivalists.
Across the ocean, appearances by Muddy Waters in London changed the soundscape of Rock & Roll, when early blues enthusiasts Alexis Korner and Cyril Davies (of Blues Incorporated) took the Chicago style and began a ferocious blues revival in Britain. From these humble beginnings emerged such blues-guitar phenoms as John Mayall, Eric Clapton, Jeff Beck, and Jimmy Page, as well as blues-inspired groups like Fleetwood Mac, The Yardbirds, and the Rolling Stones.
Since the folk revival and British invasion, numerous musicians adopted the blues style as the basis for their instrumentation and song structure. Here are some suggestions for great modern blues albums:
The state of Manipur was formally constituted as a state of India on January 21, 1972.
The total area of Manipur is 22,327 square kilometers.
MANIPUR: A BIRD’S EYE VIEW
The capital of Manipur is Imphal.
Manipur was ruled by kings before British rule.
MANIPUR: A RICH TRADITION
Manipur has a rich heritage.
Manipur is a cultural cauldron.
There are many tribes and sub-tribes in Manipur. Each tribe and sub-tribe has its distinctive cultural heritage. In fact, Manipur is another instance of the cultural diversity in unity that India represents.
MANIPUR: BASIC FACTS
ALTITUDE: 790 meters
LONGITUDE: 93.03°E – 94.78°E
LATITUDE: 23.83°N – 25.68°N
CLIMATE: Tropical, with a monsoon season from May to October, when the state experiences heavy showers.
RAINFALL: 1,467.5 mm (average)
Manipur is one of the strategically located north-eastern states of the Indian union.
Manipur shares a common border with the neighboring country of Myanmar on the eastern side.
On its western side, Manipur has a common boundary with the Indian state of Assam. On its northern side is Nagaland. And, on its southern side, Manipur shares a boundary with Mizoram.
MANIPUR: LANGUAGE, POPULATION & LITERACY PATTERN
Each of the different tribes, communities, and sub-tribes has its own dialect. Still, the lingua franca of the masses is Meitei, or the Manipuri language. Manipuri is the state language.
Many can understand and communicate in English. Some can also speak Hindi. According to the latest Census, the total population of Manipur is 22,93,896 (about 2.29 million). The average density of population in Manipur is 82 persons per square kilometer. The literacy rate is 68.7 per cent.
MANIPUR: PER CAPITA INCOME
As per the latest estimate, the net per capita income of the people of Manipur is Rs. 11,370, against an all-India average of Rs. 16,047. These figures are for the period 1999 to 2000 and are quick estimates calculated at current prices.
The estimate is in accordance with the relevant government survey.
During that period, the manufacturing sector recorded an average growth rate of 10.52 per cent in annual State Domestic Product (SDP), against a national figure of 8.03 per cent.
MANIPUR: POLITICAL BACKDROP
The governance of Manipur is in accordance with the Constitution of India. As in all other states of India, in Manipur too the Governor is the head of the state. The Governor is appointed by the President of India. The post of the Governor is ceremonial.
The executive powers are wielded by the Chief Minister and his Council of Ministers, which together form the state cabinet.
Manipur has two Parliament seats, one each for the Lok Sabha (Lower House) and the Rajya Sabha (Upper House).
The Manipur legislature has a unicameral system. This House is known as the Manipur Vidhan Sabha (Legislative Assembly). The Manipur Legislative Assembly has 60 members.
These MLAs (Members of the Legislative Assembly) are elected by adult franchise in democratically held elections in the 60 Assembly constituencies of Manipur.
THE TWO MANIPUR PARLIAMENT SEATS
The two Parliament seats of Manipur are:
1. Inner Manipur; and
2. Outer Manipur
THE 60 MANIPUR ASSEMBLY CONSTITUENCIES (IN ALPHABETICAL ORDER)
The 60 Assembly constituencies of Manipur are:
3. Chandel (ST)
4. Chingai (ST)
5. Churachandpur (ST)
8. Henglep (ST)
13. Karong (ST)
26. Mao (ST)
27. Mayang Imphal
30. Naoriya Pakhanglakpa
31 Nungba (ST)
34. Phungyar (ST)
35. Saikul (ST)
36. Saitu (ST)
38. Saikot (ST)
39. Sekmai (SC)
40. Singhat (ST)
43. Tadubi (ST)
44. Tamei (ST)
45. Tamenglong (ST)
46. Tengnoupal (ST)
47. Thanlon (ST)
50. Tipaimukh (ST)
53. Ukhrul (ST)
59. Wangjing Tentha
There are nine districts in Manipur. These are Bishnupur, Chandel, Churachandpur, Imphal-East, Imphal-West, Senapati, Tamenglong, Thoubal, and Ukhrul.
The executive and judicial head of a district is the Deputy Commissioner (DC), also known as the District Collector. The DC has to look after the administration and even the law and order situation of the district. The DC is also in overall charge of revenue collection operations.
The Superintendent of Police (SP) assists the DC in maintaining the law and order situation within the district.
The SP is in overall charge of the law and order situation of the district.
The SPs of all districts need to report to the Director General of Police (DGP).
The DGP needs to be in constant touch with the DC and brief the DC about the daily law and order situation within the state.
The DCs, in turn, have to brief the Chief Secretary (CS) of the respective state about the entire gamut of their responsibilities.
The CS needs to brief the Chief Minister (CM) – who is the executive head of any state in India – about the overall situation in the state. The Chief Secretaries of each state also need to brief the Union Home Secretary – the administrative head of the Union Ministry of Home Affairs – about the overall situation in each state. This serves as a daily resume of happenings across the country. The Union Home Secretary, in turn, keeps the President and the Prime Minister of the country informed about the latest daily trends – the socio-economic and political status – within the country.
After getting the daily reports from the CSs, the CMs need to keep the Governors – the Constitutional heads of all states – well informed about the overall situation in their states.
The Chief Ministers, in consultation with their Councils of Ministers, submit these reports on a daily basis.
The Governor, in turn, reports to the Constitutional head of India, who is also the 'First Citizen of India' – the President of India.
It is in this way that the governance hierarchy is established and the administrative protocol is maintained within the Indian union.